id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2309.11501 | Intrinsic superconducting diode effects in tilted Weyl and Dirac
semimetals | We explore Weyl and Dirac semimetals with tilted nodes as platforms for
realizing an intrinsic superconducting diode effect. Although tilting breaks
sufficient spatial and time-reversal symmetries, we prove that -- at least for
conventional $s$-wave singlet pairing -- the effect is forbidden by an emergent
particle-hole symmetry at low energies if the Fermi level is tuned to the
nodes. Then, as a stepping stone to the three-dimensional semimetals, we
analyze a minimal one-dimensional model with a tilted helical node using
Ginzburg-Landau theory. While one might naively expect a drastic enhancement of
the effect when the node turns from type-I to type-II, we find that the
presence of multiple Fermi pockets is more important as it enables multiple
pairing amplitudes with indepedent contributions to supercurrents in opposite
directions. Equipped with this insight, we construct minimal lattice models of
Weyl and Dirac semimetals and study the superconducting diode effect in them.
Once again, we see a substantial enhancement when the normal state has multiple
Fermi pockets per node that can accommodate more than one pairing channel. In
summary, this study sheds light on the key factors governing the intrinsic
superconducting diode effect in systems with asymmetric band structures and
paves the way for realizing it in topological semimetals. | Kai Chen, Bishnu Karki, Pavan Hosur | 2023-09-20T17:59:55Z | http://arxiv.org/abs/2309.11501v1 | # Intrinsic superconducting diode effects in tilted Weyl and Dirac semimetals
###### Abstract
We explore Weyl and Dirac semimetals with tilted nodes as platforms for realizing an intrinsic superconducting diode effect. Although tilting breaks sufficient spatial and time-reversal symmetries, we prove that - at least for conventional \(s\)-wave singlet pairing - the effect is forbidden by an emergent particle-hole symmetry at low energies if the Fermi level is tuned to the nodes. Then, as a stepping stone to the three-dimensional semimetals, we analyze a minimal one-dimensional model with a tilted helical node using Ginzburg-Landau theory. While one might naively expect a drastic enhancement of the effect when the node turns from type-I to type-II, we find that the presence of multiple Fermi pockets is more important as it enables multiple pairing amplitudes with independent contributions to supercurrents in opposite directions. Equipped with this insight, we construct minimal lattice models of Weyl and Dirac semimetals and study the superconducting diode effect in them. Once again, we see a substantial enhancement when the normal state has multiple Fermi pockets per node that can accommodate more than one pairing channel. In summary, this study sheds light on the key factors governing the intrinsic superconducting diode effect in systems with asymmetric band structures and paves the way for realizing it in topological semimetals.
## I Introduction
In recent years, there has been a growing interest in the field of electronics and superconductivity due to the fascinating observation of superconducting diode effects (SDEs). These effects involve the ability of certain materials and structures to exhibit nonreciprocal superconducting transport, effectively blocking electric current flow in one direction while allowing it to pass in the opposite direction. This behavior resembles that of a diode, making SDEs crucial for devising rectifiers and switches.
A seminal experimental study by Ando et al. [1] demonstrated the presence of SDEs in an artificial superlattice [Nb/V/Ta]. This observation was achieved by breaking the inversion symmetry of the structure and introducing time-reversal symmetry breaking through the application of an external magnetic field. Since then, the study of SDEs has become an active area of research in the field of superconductivity, owing to the significant potential of nonreciprocal critical supercurrent in various applications, such as electronics, spintronics, phase-coherent charge transport, direction-selective charge transport, and quantum computation using superconductor qubits [2; 3; 4; 5; 6; 7].
Experimental investigations have explored SDEs in diverse materials and structures. For instance, SDEs have been observed in magic angle twisted graphenes [8; 9; 10], in few layer NbSe\({}_{2}\)[11]. Furthermore, Josephson supercurrent diode effects have been demonstrated in highly transparent Josephson junctions fabricated on InAs quantum wells [12], in van der Waals heterostructures and symmetric Al/InAs-2DEG/Al junctions [13], in a three-terminal Josephson device based upon an InAs quantum well [14] and Josephson junctions containing single magnetic atoms [15]. The thin superconducting films made of niobium and vanadium indicate a robust SDE when exposed to an extremely low magnetic field of 1 Oe. Furthermore, when a layer of EuS is introduced, the SDE is amplified [16]. For asymmetric vortex motion, which exposes the mechanism underpinning the superconducting vortex diode phenomenon, has been reported in the layered structure of Nb/EuS (superconductor/ferromagnet) [17]. SDE has also been observed in topological insulator/superconductor [18; 19; 20] and superconductor nanowire/topological Dirac semimetal [21] hybrid systems.
The intriguing experimental findings have stimulated theoretical efforts to understand the underlying mechanisms of SDEs. The Rashba-Zeeman-Hubbard model has been proposed as a theoretical framework to explain SDEs, and established a close relationship between SDE and Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) states [22; 23]. In the FFLO state, Cooper pairs form with finite center-of-mass momenta due to opposite spin states on Zeeman-split Fermi surfaces [24; 25]. Numerical calculations and Ginzburg-Landau (GL) theory have provided further support and insights into the understanding of SDEs [23; 26]. Among extrinsic mechanisms, SDE behavior has been predicted in topological insulators and Rashba nanowires [27] as well as general metallic wires with asymmetric dispersion, with the latter expected to show the theoretically maximum SDE in a range of parameters [28]. Moreover, researchers have investigated the influence of disorder on SDEs by using the quasi-classical Eilenberger equation [29]. The disorder effect is crucial in comprehending the behavior of SDEs in realistic and practical scenarios. Theoretical studies have also focused on the Josephson diode effect, revealing its universality and potential applicability in various contexts [27; 30; 31; 32; 33].
This work explores intrinsic SDEs in Weyl and Dirac semimetals. These semimetals are characterized by gapless points between their valence and conduction bands, known as Weyl and Dirac points, respectively [34; 35; 36; 37; 38; 39]. They possess several favorable properties that make them promising platforms for the SDEs. For instance, the density of states near the nodes is low, which facilitates breaking of time-reversal, inversion and spatial symmetries necessary for enabling the SDE. These materials also typically have multiple Fermi pockets centered at different points in momentum space, which enhances the possibility of FFLO states [40; 41; 42; 43]. Moreover, Fermi pockets centered around the origin can also develop finite mo
mentum pairing if the dispersion is tilted. There are two different types of Weyl/Dirac semimetals: type I, with point-like Fermi surfaces, and type II, defined by electron and hole pockets touching at the Weyl nodes [44; 45; 46]. Tilting the dispersion around the node induces the transition from type-I to type-II. In this study, we shed light on the key factors that enhance the SDE in tilted semimetals. In particular, we show that multiple inequivalent pairing channels can enhance the intrinsic SDEs and are more important than the band tilting.
The outline of this paper is as follows. In Section II, we delve into the symmetries beyond time reversal and inversion symmetry that need to be broken in order to support SDEs. We explore how tuning the chemical potential impacts these symmetries, shedding light on the underlying symmetry breaking responsible for SDEs and offering potential avenues for experimental control and manipulation of these effects. In Section III, we employ the Ginzburg-Landau theory to investigate a one-dimensional model characterized by an asymmetric band structure. Our analysis reveals that this simple yet insightful model can indeed support a ground state with Cooper pairs possessing finite momentum, thus providing a compelling platform to observe and study SDEs. Building on the insights gained from the 1D model, we extend our study to lattice modes of tilted Weyl semimetals and Dirac semimetals in sections IV and V, respectively. Our numerical simulations reveal the existence of nonreciprocity in the depairing critical current, the key requirement for SDEs in these intriguing materials, and support the heuristic that multiple inequivalent pairing channels are more important than band asymmetry for a large SDE.
## II Symmetry and the role of chemical potential \(\mu\)
In general, necessary conditions for realizing the SDE are the violation of time-reversal (\(\mathcal{T}\)), inversion (\(\mathcal{I}\)) and spatial symmetries under which current in the desired nonreciprocal direction is odd. These conditions ensure the breaking of reciprocity in the system, meaning that the response of the superconductor to external perturbations is different for perturbations applied in opposite directions. In most cases, these violations suffice to guarantee a SDE; however, a chiral or particle hole symmetry in the normal state, common found at low energies near band intersections, can suppress the SDE for singlet pairing as shown below.
Consider a Bloch Hamiltonian \(H(\mathbf{k})\). The Bogoliubov-de Gennes (BdG) Hamiltonian for generic pairing in the basis \(\left(c_{\mathbf{k}+\mathbf{q}/2},c^{\dagger}_{-\mathbf{k}+\mathbf{q}/2} \right)^{T}\) is
\[H^{\text{BdG}}(\mathbf{k},\mathbf{q},\Delta_{\mathbf{k}})=\begin{pmatrix}H( \mathbf{k}+\mathbf{q}/2)&\Delta_{\mathbf{k}}\\ \Delta^{\dagger}_{\mathbf{k}}&-H^{*}(-\mathbf{k}+\mathbf{q}/2)\end{pmatrix} \tag{1}\]
where we have allowed for pairing with finite momentum \(\mathbf{q}\) and fermion antisymmetry ensures \(\Delta_{\mathbf{k}}=-\Delta^{T}_{-\mathbf{k}}\). \(H^{\text{BdG}}(\mathbf{k},\mathbf{q},\Delta)\) obeys particle-hole symmetry
\[\tau_{x}\mathbb{K}H^{\text{BdG}}(\mathbf{k},\mathbf{q},\Delta_{\mathbf{k}}) \mathbb{K}\tau_{x}=-H^{\text{BdG}}(-\mathbf{k},\mathbf{q},\Delta_{\mathbf{k}}) \tag{2}\]
where \(\tau_{x}\) is a Pauli matrix in Nambu space and \(\mathbb{K}\) denotes complex conjugation.
Suppose the normal state also has a chiral unitary symmetry \(Q\):
\[QH(\mathbf{k})Q^{\dagger}=-H(\mathbf{k}) \tag{3}\]
or a chiral anti-unitary or particle-hole symmetry \(Q\mathbb{K}\):
\[Q\mathbb{K}H(\mathbf{k})\mathbb{K}Q^{\dagger}=-H^{*}(-\mathbf{k}) \tag{4}\]
Under \(Q\) and \(Q\mathbb{K}\), \(H^{\text{BdG}}(\mathbf{k},\mathbf{q},\Delta_{\mathbf{k}})\) transforms into \(-H^{\text{BdG}}(\mathbf{k},\mathbf{q},-\tilde{\Delta}_{\mathbf{k}})\) and \(-\tau_{x}H^{\text{BdG}}(-\mathbf{k},\mathbf{q},\tilde{\Delta}_{\mathbf{k}}) \tau_{x}\), respectively, where \(\tilde{\Delta}_{\mathbf{k}}=Q\Delta_{\mathbf{k}}Q^{\dagger}\). Along with the BdG particle-hole symmetry Eq. (2), these two symmetries in the normal state ensure that \(H^{\text{BdG}}(\mathbf{k},\mathbf{q},\Delta_{\mathbf{k}})\) is related to \(H^{\text{BdG}}(\mathbf{k},-\mathbf{q},-\tilde{\Delta}_{\mathbf{k}})\) and \(H^{\text{BdG}}(-\mathbf{k},-\mathbf{q},\tilde{\Delta}_{\mathbf{k}})\) by anti-unitary and unitary operations.
Assuming the electrons experience an attractive Hubbard interaction (\(g>0\))
\[H_{\text{int}}=-g\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}c^{\dagger}_{ \mathbf{k}+\frac{\mathbf{q}}{2}\uparrow}c^{\dagger}_{-\mathbf{k}+\frac{ \mathbf{q}}{2}\downarrow}c_{-\mathbf{k}^{\prime}+\frac{\mathbf{q}}{2}\downarrow }c_{\mathbf{k}^{\prime}+\frac{\mathbf{q}}{2}\uparrow}, \tag{5}\]
where \(g\) represents the strength of attraction. Within the mean field approximation, we get the Ginzburg-Landau free energy density:
\[f[\mathbf{q},\Delta]=\int_{\mathbf{k}}\frac{\text{tr}(\Delta_{\mathbf{k}} \Delta^{\dagger}_{\mathbf{k}})}{g}-T\text{Tr}\log\left[1+e^{-H_{\text{BdG}}( \mathbf{k},\mathbf{q},\Delta_{\mathbf{k}})/T}\right] \tag{6}\]
where \(\int_{\mathbf{k}}\equiv\int\frac{d^{D}k}{(2\pi)^{D}}\), \(D\) is the spatial dimension of the system, \(\text{tr}\,(\dots)\) runs over spin and orbitals while \(\text{Tr}\,[\dots]\) runs over spin, orbital and Nambu degrees of freedom. Clearly, \(f(\mathbf{q},\Delta)\) only depends on the energy eigenvalues of \(H_{\text{BdG}}(\mathbf{k},\mathbf{q})\) and is unaffected under the change \(\mathbf{k}\rightarrow-\mathbf{k}\) of the integration variable. Moreover, \(U(1)\) gauge symmetry mandates \(f(\mathbf{q},\Delta)\) to be unchanged under the transformation \(\Delta_{\mathbf{k}}\to e^{i\phi_{\mathbf{k}}}\Delta_{\mathbf{k}}\) for arbitrary \(\phi_{\mathbf{k}}\). Thus, if \(\Delta_{\mathbf{k}}\) equals \(\tilde{\Delta}_{\mathbf{k}}\) (\(\tilde{\Delta}_{-\mathbf{k}}\)) upto a phase when the normal state possesses the symmetry \(Q\) (\(Q\mathbb{K}\)), \(f(\mathbf{q},\Delta)\) is even in \(\mathbf{q}\): \(f(\mathbf{q},\Delta)=f(-\mathbf{q},\Delta)\). The above condition on \(\Delta_{\mathbf{k}}\) is clearly obeyed by ordinary spin singlet \(s\)-wave pairing, \(\Delta_{\mathbf{k}}=\Delta\sigma_{y}\) with \(\sigma_{y}\) a spin Pauli matrix. Henceforth, we take pairing to be of this form and assume \(\Delta_{\mathbf{k}}\equiv\Delta\) independent of \(\mathbf{k}\). Note, \(\Delta\) can still pair electrons with non-zero center-of-mass momentum \(\mathbf{q}/2\).
The SDE can be calculated by minimizing \(f[\mathbf{q},\Delta]\) with respect to \(\Delta\) for fixed \(\mathbf{q}\) to obtain the condensation energy \(f[\mathbf{q},\Delta(\mathbf{q})]\equiv f(\mathbf{q})\) at that \(\mathbf{q}\), followed by extremizing the supercurrent \(j(\mathbf{q})\equiv 2\partial_{\mathbf{q}}f(\mathbf{q})\) over \(\mathbf{q}\). Positive and negative currents of largest magnitudes represent critical currents in opposite directions, \(j^{\pm}_{c}\), and the SDE is characterized by the quality factor
\[\eta=\left|\frac{j^{+}_{c}-j^{-}_{c}}{j^{+}_{c}+j^{-}_{c}}\right|\in[0,1] \tag{7}\]
If \(f(\mathbf{q})=f(-\mathbf{q})\), critical currents in opposite directions have the same magnitude and the SDE is absent (\(\eta=0\)) while the largest SDE occurs if either \(j_{c}^{+}\) or \(j_{c}^{-}\) vanishes.
Point nodes in band structures enjoy at least one of chiral or particle-hole symmetries at low energies when the chemical potential is tuned to the node. For instance, in the absence of tilting, massless 2D Dirac nodes enjoy the chiral symmetry \(Q\), 3D Weyl nodes respect \(Q\mathbb{K}\), and 3D Dirac nodes possess both \(Q\) and \(Q\mathbb{K}\). Crucially, while \(Q\) is immediately violated by a tilt in the dispersion, \(Q\mathbb{K}\) survives. Therefore, to obtain a SDE with \(s\)-wave, singlet pairing in tilted Weyl and Dirac semimetals, the chemical potential must be tuned away from the node to break the particle-hole symmetry \(Q\mathbb{K}\) in the normal state.
Note that a finite chemical potential is not merely a density of states requirement for superconductivity to occur in the first place. Indeed, type-II semimetals already possess finite Fermi surfaces and hence, a superconducting instability with appropriate interactions. Instead, a finite chemical potential is symmetry requirement for the SDE that goes beyond the usual mandates of broken \(\mathcal{T}\), \(\mathcal{I}\) and other spatial symmetries that reverse the supercurrent.
## III SDE in a minimal 1D model with asymmetric bands
In this section, we focus on a one-dimensional (1D) model with asymmetric bands. This will yield insight that will be useful for understanding the SDE for 3D Weyl and Dirac fermions. In particular, we will gradually develop the following intuition: when multiple pairing channels are present, it is possible for critical currents in opposite directions to be dominated by different channels and can therefore be vastly different, resulting in a large SDE.
A minimal model can be described by
\[H_{1D}(k)=(1+\alpha k^{2})k\sigma_{z}-\lambda k-\mu, \tag{8}\]
where \(\mu\) is the chemical potential and \(\sigma_{z}\) is the Pauli-Z matrix in spin space. The parameter \(\lambda\) creates a tilt in the dispersion around \(k=0\) while \(\alpha>0\) ensures that the tilt is undone at finite \(k\). \(H_{1D}\) has two qualitatively different regimes separated by a critical value of \(\lambda\),
\[\lambda_{c}=\left|1+3\left(\frac{\mu^{2}|\alpha|}{4}\right)^{1/3}\right| \tag{9}\]
for given \(\alpha\) and \(\mu\). For \(|\lambda|<\lambda_{c}\), there are only two Fermi points and one momentum channel for Cooper pairing, while \(|\lambda|>\lambda_{c}\) results in four Fermi points and three channels as sketched in Fig. 3(a,d).
For singlet superconductivity with Cooper pair momentum \(q\), the appropriate BdG Hamiltonian is
\[H_{1D}^{\text{BdG}}(k,q)=\begin{pmatrix}H_{1D}(k+q/2)&-i\sigma_{y}\Delta\\ i\sigma_{y}\Delta&-H_{1D}^{*}(-k+q/2).\end{pmatrix}, \tag{10}\]
At \(\mu=0\), \(H_{1D}\) satisfies a particle-hole symmetry, \(\sigma_{y}H_{1D}^{*}(k)\sigma_{y}=-H_{1D}(-k)\), which suppresses the SDE as described in Sec. II with \(Q\equiv\sigma_{y}\). At non-zero \(\mu\), we calculate the diode coefficient \(\eta\) in three different ways with increasing amount of analytical input and physical insight.
First, we directly compute the free energy density
\[f[q,\Delta]=\frac{|\Delta|^{2}}{g}-T\int\frac{dk}{2\pi}\text{Tr}\log\left(1+e ^{-\frac{H_{1D}^{1}(Q,q)}{T}}\right), \tag{11}\]
minimize it with respect to \(\Delta\) to obtain \(\Delta(q)\) upto a phase and \(f(q)\equiv f[q,\Delta(q)]\), and compute the current \(j(q)=2\partial_{q}f(q)\). All steps are carried out numerically and the results are shown in Fig. 1. For weak tilting, \(|\lambda|<\lambda_{c}\), we see a single minimum in \(f(q)\) close to \(q=0\) and a small diode coefficient \(\eta\approx 3.2\%\) [Fig. 1(a,b)]. Strong tilting unsurprisingly produces a larger \(\eta\approx 12\%\). However, the enhancement is not merely quantitative; we observe qualitatively new features in \(f(q)\) in the form of two inequivalent local minima away from \(q=0\) and a large corresponding asymmetry in \(j(q)\) [Fig. 1(c,d)], suggesting that the change in Fermiology plays an important role in enhancing the SDE.
To analyze this point further, we focus on \(T\) close to the critical temperature \(T_{c}\) where \(\Delta\) is small and \(f[q,\Delta]\) can be approximated as
\[f[q,\Delta]=A(q)\Delta^{2}+\frac{B(q)}{2}\Delta^{4}, \tag{12}\]
In this regime, the main role of \(B(q)\) is to ensure physical stability by lower bounding \(f[q,\Delta]\), allowing us to safely take it to be a positive constant, \(B(q)\approx b>0\), (we set \(b=1\) throughout this work). In contrast, the physics of the system depends sensitively on \(A(q)\). For instance, minimizing \(f[q,\Delta]\) yields a superconducting ground state with \(|\Delta(q)|=\sqrt{-A(q)/b}\) only if \(A(q)<0\), while the supercurrent an be expressed as \(j(q)=2\frac{\partial}{\partial q}f(q)=|A(q)|\frac{\partial}{\partial q}A(q)\).
Figure 1: (a, b): Free energy density and supercurrent with parameter \(\lambda=2\). (c, d): Free energy density and supercurrent with parameter \(\lambda=4.4\). Other parameters are \(\alpha=16\), \(\mu=0.4\) and \(g=3\pi\), which yield \(\lambda_{c}\approx 3.58\) and \(T_{c}\approx 0.46\), and we set \(T=0.1\).
Thus, we explicitly calculate \(A(q)\) following [47] as:
\[A(q) =-T\int\frac{dk}{2\pi}\sum_{n}\mathrm{tr}[G(k+q,\epsilon_{n})G(-k,- \epsilon_{n})] \tag{13}\] \[+T_{c}\int\frac{dk}{2\pi}\sum_{n}\mathrm{tr}[G(k,\epsilon_{n})G(- k,-\epsilon_{n})]_{T=T_{c}},\]
where the Matsubara Green's function \(G(k,\epsilon_{n})=[i\epsilon_{n}-H_{1D}(k)]^{-1}\) with \(\epsilon_{n}=(2n+1)\pi T\). The second term in Eq. 13 reduces to just \(1/g\), which determines the value of the critical temperature \(T_{c}\). The momentum integral is carried out numerically and \(A(q)\) hence obtained is used to reevaluate \(f(q)\) using Eq. 12. The results, shown in Fig. 2, are qualitatively consistent with the fully numerical results presented earlier. In particular, we see that \(f(q)\) exhibits a single minimum, resulting in a diode quality factor of \(\eta\approx 18\%\) in the weak tilting regime with \(\lambda=2\), which is less than \(\lambda_{c}\approx 3.58\) [Fig. 2 (a, b)]. In contrast, a strong tilt of \(\lambda=4.4>\lambda_{c}\) shows two local minima in \(f(q)\) and yields \(\eta\approx 21\%\) [Fig. 2 (c, d)]. Clearly, the change in Fermiology is correlated with a substantial enhancement of the SDE. The quantitative values are different because we set \(T=0.1\), which is quite far from \(T_{c}\), for numerical stability.
To unearth the connection between Fermiology and the SDE more precisely, we analytically calculate \(A(q)\) in Eq. 13 in the weak pairing limit, valid for \(T\) near \(T_{c}\). In this limit, Cooper pairs predominantly form from electrons near the Fermi points. This allows us to analytically perform the Matsubara summation and momentum integral to obtain the following expression:
\[A(q)=-\sum_{i=1,2}\rho_{F}^{(i)}\left[\frac{T_{c}-T}{T_{c}}-\frac{7\zeta(3)}{1 6\pi^{2}T_{c}^{2}}\delta_{i}^{2}(q)\right], \tag{14}\]
where \(\delta_{i}(q)=(-1)^{i}\alpha q^{3}+(-1)^{i+1}3p_{F}^{(i)}\alpha q^{2}-(\lambda +(-1)^{i+1}+(-1)^{i+1}3(p_{F}^{(i)})^{2}\alpha)q+2\lambda p_{F}^{(i)}\), and \(\rho_{F}^{(i)}\) is the density of states at the \(i\)-th Fermi point. For values of \(|\lambda|<\lambda_{c}\), the densities of states are given by:
\[\rho_{F}^{(1)} =\left[2\pi\left(3\alpha[p_{F}^{(1)}]^{2}+(1-\lambda)\right) \right]^{-1},\] \[\rho_{F}^{(2)} =\left[2\pi\left(3\alpha[p_{F}^{(2)}]^{2}+(1+\lambda)\right) \right]^{-1}, \tag{15}\]
where Fermi momentum \(p_{F}^{(1,2)}\) are
\[p_{F}^{(1)} =\left[\frac{\mu}{2\alpha}+\sqrt{\frac{\mu^{2}}{4\alpha^{2}}+ \frac{(1-\lambda)^{3}}{27\alpha^{3}}}\right]^{1/3}\] \[\quad+\left[\frac{\mu}{2\alpha}-\sqrt{\frac{\mu^{2}}{4\alpha^{2}} +\frac{(1-\lambda)^{3}}{27\alpha^{3}}}\right]^{1/3},\] \[p_{F}^{(2)} =\left[-\frac{\mu}{2\alpha}+\sqrt{\frac{\mu^{2}}{4\alpha^{2}}+ \frac{(1+\lambda)^{3}}{27\alpha^{3}}}\right]^{1/3}\] \[\quad+\left[-\frac{\mu}{2\alpha}-\sqrt{\frac{\mu^{2}}{4\alpha^{2} }+\frac{(1+\lambda)^{3}}{27\alpha^{3}}}\right]^{1/3}. \tag{16}\]
If \(p_{F}^{(1)}+p_{F}^{(2)}\neq 0\), electrons at two Fermi points can form Cooper pairs with a finite momentum \(q_{*}\approx p_{F}^{(1)}+p_{F}^{(2)}\), where the supercurrent \(j(q_{*})=0\). However, for \(|\lambda|>\lambda_{c}\), there exist three possible Fermi momenta near \(p_{F,j=1,2,3}^{(2)}\), each corresponding to a density of states \(\rho_{F,j=1,2,3}^{(2)}\) for spin-up states. As illustrated in Fig. 3(d), this leads to three potential pairing channels with electrons having Fermi momentum near \(p_{F}^{(1)}\) and spin-down, which leads to additional structure in the free energy density.
In general, the quality factor of the SDE depends on the model's parameters. In our 1D model, two relevant
Figure 3: (a) and (d): Schematics of Cooper pairs in the quasi-one-dimensional system. (b) and (c): The GL free energy density \(f(q)\), the supercurrent \(j(q)\) (solid line), and \(-j(q)\) (dashed line) for weak tilting with \(\lambda=2\). The parameters are \(\alpha=16\), \(\lambda=1\), \(T_{c}\approx 0.46\), \(T=0.1\), and \(\mu=0.4\). (e): The GL free energy density \(f(q)\) for different Cooper pairs: red line (Cooper pairing channel 1), blue line (Cooper pairing channel 3). The supercurrent \(j(q)\) for different Cooper pairs: red line (Cooper pairing channel 1), blue line (Cooper pairing channel 3). The pairing channel \(j(q)\) for different Cooper pairs: red line (Cooper pairing channel 1), blue line (Cooper pairing channel 2), and black line (Cooper pairing channel 3). Dashed lines represent the opposite supercurrent \(-j(q)\) for Cooper pairing channels with the same color. The parameters in (e) and (f) are the same as in (b) and (c), except that the parameter \(\lambda=4.4\).
Figure 2: (a), (b): The GL free energy density \(f(q)\) and the supercurrent \(j(q)\) (blue line) and \(-j(q)\) (red line) under weak tilting with \(\lambda=2\), respectively. (c), (d): The same quantities as (a, b) under strong tilting with \(\lambda=4.4\), respectively. The parameters are \(\alpha=16\), \(T_{c}\approx 0.46\), \(T=0.1\), and \(\mu=0.4\).
parameters are \(\lambda\) and \(\mu\). To elucidate the relationship between the quality factor and \((\mu,\lambda)\), we present the phase diagram shown in Fig. 4(a). Interestingly, higher quality factors are observed just above the because the free energy density becomes more asymmetric near the critical line [see Fig. 4(b)].
We also observe that the quality factor tends to zero as \(\lambda\) increases. Qualitatively, for very large \(\lambda\), two Fermi points that form channel 3 in Fig. 3(d) merge into a single Fermi point [see the inset band dispersions in Fig. 4(b)]. Effectively, there are only two possible Cooper pairing channels; therefore, the diode quality factor could be diminished.
Quantitatively, we selected four typical parameters in the parameter space (denoted by star, hexagon, disk, half-disk), as shown in Fig. 4(a, b). At larger values of \(\lambda\), the free energy density exhibits two valleys, and the two valleys are approximately mirror images of each other about the axis at \(q\approx 0\). The supercurrent is defined as the derivative of the free energy density with respect to the Cooper pairing momentum. Therefore, for any positive current, there exists a negative current with the same absolute value. In other words, the diode quality factor equals zero.
Our findings not only confirm the presence of SDEs in our 1D model with asymmetric band dispersions but also underscore the significance of accounting for multiple Cooper pairing channels under strong tilting conditions. The observed complex patterns in the free energy density and supercurrent open up new avenues for optimizing superconducting systems for non-reciprocal effects.
## IV SDE in tilted Weyl semimetals
Weyl semimetals are intriguing materials characterized by non-degenerate touching points, known as Weyl nodes, between their valence and conduction bands. Weyl nodes exhibit linear dispersion and give rise to various intriguing properties associated with the topological nature of the bulk band structure [37; 38; 39]. There are two different types of Weyl semimetals: type I Weyl semimetals with pointlike Fermi surfaces and type II Weyl semimetals with defined by electron and hole pockets touching at the Weyl nodes [44; 45; 46]. The latter type can be obtained from the former by strongly titing the Weyl dispersion.
In general, to realize SDEs, both \(\mathcal{T}\)- and \(\mathcal{I}\)- symmetries must be broken. The low density of states in Weyl semimetals makes breaking the \(\mathcal{T}\)- and \(\mathcal{I}\)- symmetries easier. On the other hand, as shown in the last section, we found that asymmetric band dispersions can induce the SDEs. Therefore, tilted Weyl semimetals provide us with a typical example for investigating the possibility of realizing SDEs.
In this section, we introduce two simple lattice models of tilted Weyl semimetals to investigate the SDEs. The Bloch Hamiltonian describing the first tilted Weyl semimetal and its corresponding energy spectrum can be expressed as follows:
\[H_{\mathrm{W}}(\mathbf{k}) =\left(3+2\cos k_{z}-2\cos k_{x}-2\cos k_{y}\right)\sigma_{z}\] \[+2\sin k_{+}\sigma_{x}+2\sin k_{-}\sigma_{y}+(\lambda\sin 2k_{x}- \mu)\sigma_{0} \tag{17}\] \[E_{\mathrm{W}}^{\pm}(\mathbf{k}) =\pm\left[\left(3+2\cos k_{z}-2\cos k_{x}-2\cos k_{y}\right)^{2}\right.\] \[\left.+4\sin^{2}k_{+}+4\sin^{2}k_{-}\right]^{1/2}+\lambda\sin 2k_ {x}-\mu \tag{18}\]
where the parameter \(\lambda\) controls the tilt strength, \(\mathbf{k}=(k_{x},k_{y},k_{z})\) represents the Bloch momentum, \(\mu\) is the chemical potential, \(k_{\pm}=\left(k_{x}\pm k_{y}\right)/2\), and the Pauli matrices \((\sigma_{x},\sigma_{y},\sigma_{z})\) denote spin. This model has two Weyl nodes at \(\mathbf{k}=(0,0,\pm\pi/3)\). In Fig. 5(a, c), we provide the eigen-energies as a function of \(k_{x}\) at \(k_{z}=\pi/3\), \(k_{y}=0\) for the tilted Weyl semimetal with different tilt strengths. At \(\lambda=0\), the system Hamiltonian preserves \(\mathcal{I}=\sigma_{z}\) but breaks \(\mathcal{T}=i\sigma_{y}\mathbb{K}\). For nonzero \(\lambda\), \(\mathcal{I}\)- symmetry is also broken while \(|\lambda|>\lambda_{c}\approx 0.7\) renders the Weyl nodes type-II. For arbitrary \(\lambda\) but \(\mu=0\), \(H_{\mathrm{W}}(\mathbf{k})\) obeys \(\sigma_{x}H_{\mathrm{W}}^{*}(-\mathbf{k})\sigma_{x}=-H_{\mathrm{W}}(\mathbf{k})\), which is particle-hole symmetry of the form (4). Thus, \(\mu\neq 0\) is necessary for a non-zero SDE.
In the presence of s-wave pairing with a nonzero Cooper pair momentum, the BdG Hamiltonian is given by:
\[H_{\mathrm{W}}^{\mathrm{BdG}}(\mathbf{k},\mathbf{q})=\begin{pmatrix}H_{ \mathrm{W}}(\mathbf{k}+\mathbf{q}/2)&-i\Delta\sigma_{y}\\ i\Delta\sigma_{y}&-H_{\mathrm{W}}^{*}(-\mathbf{k}+\mathbf{q}/2)\end{pmatrix} \tag{19}\]
The tilt is along \(k_{z}\), allowing us to set \(\mathbf{q}=(0,0,q)\). \(H_{\mathrm{W}}^{\mathrm{BdG}}(\mathbf{k},q)\) satisfies the particle-hole symmetry \(\tau_{x}\mathbb{K}H_{\mathrm{W}}^{\mathrm{BdG}}(\mathbf{k},q)\mathbb{K}\tau_{ x}=-H_{\mathrm{BdG}}(-\mathbf{k},q)\), which ensures the existence of pairs of opposite eigenvalues \(E_{\pm}(-\mathbf{k})\) and \(-E_{\pm}(\mathbf{k})\).
In the 1D model, we observed that the strong tilting gives rise to more pairing channels, which create new structures in the free energy density and the supercurrent and enhance the SDE. In 3D model, the number and details of the pairing channels will depend on the transverse momenta \((k_{x},k_{y})\) in general. Nonetheless, a
Figure 4: (a) The quality factor \(\eta(\mu,\lambda)\) for the tilted 1D model in the \(\lambda-\mu\) plane. The dashed line represents the critical tilting value \(\lambda_{c}\) as a function of the chemical potential \(\mu\), where \(\lambda_{c}=\left|1+3\left(\frac{\mu^{2}|q|}{4}\right)^{1/3}\right|\) with \(\alpha=16\). The green points depict the maximum quality factor calculated numerically. (b) The free energy density with parameters corresponding to the star, hexagon, disk, and half-disk in (a). Insets display the associated band dispersion.
similar enhancement is expected when multiple channels participate in the pairing. To investigate this possibility, we numerically calculate \(f(q)\) and \(j_{z}(q)\equiv j(q)\) at \(T=0\). As shown in Fig. 5(a), for a relatively small tilt for a given \(\mu\), there is only one type of pairing channel, only one minimum in \(f(q)\) and a small difference between \(j_{c}^{\pm}\) that yield a diode quality factor of \(\eta\approx 1.8\%\). However, for a larger tilted strength, three different types of Cooper pairing channels are present, which induce two minima in \(f(q)\) a larger difference between \(j_{c}^{+}\) and \(j_{c}^{-}\) are boosted diode quality factor of \(\eta\approx 3.7\%\) [see Fig. 6(c-d)].
We perform a similar analysis on a different lattice model of a tilted Weyl semimetal. In addition to the pockets near the Weyl nodes for the chosen parameters, there are Fermi pockets near the Brillouin zone boundary. Therefore, this model could support more possible cooper pairing channels, and the SDE could be enhanced.
The Bloch Hamiltonian describing the tilted Weyl semimetal and its corresponding energy spectrum can be expressed as:
\[\tilde{H}_{\rm W}({\bf k}) =2\left(\cos k_{x}-\cos k_{0}-\cos k_{y}-\cos k_{z}+2\right) \sigma_{x}\] \[+2\sin k_{y}\sigma_{y}+2\sin k_{z}\sigma_{z}+(\lambda\sin 2k_{z}- \mu)\sigma_{0} \tag{20}\] \[\tilde{E}_{\rm W}^{\pm}({\bf k}) =\pm 2\left[\left(\cos k_{x}-\cos k_{0}-\cos k_{y}-\cos k_{z}+2 \right)^{2}\right.\] \[\left.+\sin^{2}k_{y}+\sin^{2}k_{z}\right]^{1/2}+\lambda\sin 2k_{z}-\mu \tag{21}\]
This model has two Weyl nodes at \({\bf k}=(\pm k_{0},0,0)\); we set \(k_{0}=\pi/4\) henceforth. In Fig. 7(a, d), we show the Fermi pockets for the tilted Weyl semimetal with different tilt strengths. At \(\lambda=0\), the system Hamiltonian preserves \(\mathcal{I}=\sigma_{x}\) but breaks \(\mathcal{T}\)- symmetry. For nonzero \(\lambda\), \(\mathcal{I}\) is also broken while \(|\lambda|>1\) renders the type-II Weyl nodes. For arbitrary \(\lambda\) but \(\mu=0\), \(\tilde{H}_{\rm W}({\bf k})\) obeys \(\sigma_{z}\tilde{H}_{\rm W}^{*}(-{\bf k})\sigma_{z}=-\tilde{H}_{\rm W}({\bf k})\), which is particle-hole symmetry of the form Eq. (4).
In the presence of s-wave pairing with a nonzero Cooper pair momentum, the BdG Hamiltonian is given by:
\[\tilde{H}_{\rm W}^{\rm BdG}({\bf k},{\bf q})=\begin{pmatrix}\tilde{H}_{\rm W }({\bf k}+{\bf q}/2)&-i\Delta\sigma_{y}\\ i\Delta\sigma_{y}&-\tilde{H}_{\rm W}^{*}(-{\bf k}+{\bf q}/2)\end{pmatrix} \tag{22}\]
The tilt is along \(k_{z}\), allowing us to set \({\bf q}=(0,0,q)\). \(\tilde{H}_{\rm W}^{\rm BdG}({\bf k},q)\) satisfies the particle-hole symmetry \(\tau_{x}\tilde{\rm K}\tilde{H}_{\rm W}^{\rm BdG}({\bf k},q)\mathbb{K}\tau_{x }=-\tilde{H}_{\rm BdG}(-{\bf k},q)\), which ensures the existence of pairs of opposite eigenvalues \(\tilde{E}_{\pm}(-{\bf k})\) and \(-\tilde{E}_{\pm}({\bf k})\).
As shown in Fig. 7(b-c), for a relatively small tilt for a given \(\mu\), there is only one minimum in \(f(q)\) and a small difference between \(j_{c}^{\pm}\) that yield a diode quality factor of \(\eta\approx 3.8\%\). However, for a larger tilted strength, two minima in \(f(q)\) a larger difference between \(j_{c}^{+}\) and \(j_{c}^{-}\) are boosted diode quality factor of \(\eta\approx 18.4\%\) [see Fig. 7(e-f)]. The quality factor of the SDE in this model is much higher than the diode quality factor in the first model, confirming that multiple Cooper pairing channels can enhance the diode quality factor.
## V SDE in tilted Dirac semimetals
Similar to Weyl semimetals, in a Dirac semimetal, the valence and conduction bands touch linearly at specific points in the Brillouin zone, known as Dirac points, where the energy dispersion relation is linear in momentum [34; 35; 36]. The existence of these three-dimensional Dirac points is of profound significance in condensed matter physics. At the quantum critical point, where a transition occurs between a normal insulator and a topological insulator, a three-dimensional Dirac semimetal manifests [48]. This quantum
Figure 5: (a) Projected band structure of a tilted Weyl semimetal near the Weyl node with Bloch momentum \({\bf k}=(0,0,\pi/3)\). (b) Fermi surface of a weakly tilted Weyl semimetal. Parameters: \(\lambda=-1/2\), \(T=0\), \(g=12\), and \(\mu=0.55\). (c) and (d) show the same quantities as (a) and (b), respectively, with the parameter \(\lambda=-2\). The band dispersion remains consistent at the other Weyl node with \({\bf k}=(0,0,-\pi/3)\).
Figure 6: (a), (b): The free energy density \(f(q)\), the supercurrent \(j(q)\) (blue dotted line), and \(-j(q)\) (red dotted line) for \(\lambda=-1/2\), \(T=0\), \(g=12\), and \(\mu=0.55\). (c), (d): The same quantities as (a, b) with the parameter \(\lambda=-2\).
critical point represents a delicate balance between different electronic states, resulting in the appearance of a Dirac semimetal phase that possesses distinct topological properties. The formation of this exotic phase further highlights the role of symmetries in dictating the behavior of electronic states and their topological nature.
In the last section, we have shown that SDE could be realized in tilted Weyl semimetals. Due to the similarity between Weyl semimetals and Dirac semimetals, a natural question arises: can introducing a perturbation term to the Dirac semimetal, which tilts the band dispersion and breaks both \(\mathcal{T}\)- and \(\mathcal{I}\)- symmetries, support the emergence of SDEs? To answer this question, we consider a lattice model of the Dirac semimetals and study the possibility of SDEs induced by the tilting.
We focus on a cubic lattice model with a single Dirac point at the \(\Gamma=(0,0,0)\) point. The dispersion is tilted in a specific direction, assumed to be in the \(z\) direction as shown in Fig. 8. The Bloch Hamiltonian is:
\[H_{\rm D}(\mathbf{k}) =\sin k_{x}\Gamma_{xy}+\sin k_{y}\Gamma_{xx}+\sin k_{z}\Gamma_{y0}\] \[+(3-\cos k_{x}-\cos k_{y}-\cos k_{z})\Gamma_{x0}\] \[+(\lambda\sin k_{z}-\mu)\Gamma_{00} \tag{23}\]
where the matrix \(\Gamma_{ab}\equiv\tau_{a}\otimes\sigma_{b}\) with \(a\), \(b\in(0,x,y,z)\). The term proportional to \(\lambda\) induces tilting and breaks the \(\mathcal{T}\)- and \(\mathcal{I}\)- symmetries while a non-zero \(\mu\) is needed to break symmetries studied in Sec. II.
\(s\)-wave superconductivity is captured by the BdG Hamiltonian:
\[H_{\rm D}^{\rm BdG}(\mathbf{k},\mathbf{q})=\begin{pmatrix}H_{\rm D}(\mathbf{k }+\mathbf{q}/2)&-i\Delta\sigma_{y}\\ i\Delta\sigma_{y}&-H_{\rm D}^{*}(-\mathbf{k}+\mathbf{q}/2)\end{pmatrix} \tag{24}\]
As demonstrated in Fig. 9, our investigation reveals intriguing similarities between the free energy density and the SDEs observed in Dirac semimetals and those previously observed in Weyl semimetals. The quality factor \(\eta\approx 2.5\%\) at weak tilting with \(\lambda=-0.3\) and \(\eta\approx 11.7\%\) at stronger tilting with \(\lambda=-1.5\). This enhancement is accompanied by the appearance of multiple pairing channels and multiple minima in the free energy. These behaviors motivate exploring tilted Dirac semimetals as well for the realization of SDEs.
## VI Candidate materials
For materials with broken \(\mathcal{T}\)- and \(\mathcal{I}\)- symmetries, the realization of SDEs might be hindered by additional lattice symmetries, such as mirror symmetry or reflection symmetry. Consequently, these additional symmetries
Figure 8: (a) Projected band structure of the Dirac semimetal. (b) Fermi surface for \(\lambda=-0.3\), \(T=0\), \(g=2.6\), and \(\mu=0.2\). (c), (d) Same quantities as in (a, b), with the parameters being identical to those in (a, b), except for the parameter \(\lambda=-1.5\).
Figure 7: (a), (d): Fermi pockets of the tilted Weyl semimetal. (b), (c): The free energy density \(f(q)\), the supercurrent \(j(q)\) (red dotted line), and \(-j(q)\) (black dotted line) for \(\lambda=-1\), \(T=0\), \(g=10\), and \(\mu=0.4\). (e), (f): The same quantities as (b, c) with the parameter \(\lambda=-2\).
Figure 9: (a), (b): Free energy density \(f(q)\), supercurrent \(j(q)\) (blue dotted line), and \(-j(q)\) (red dotted line) for parameters \(\lambda=-0.3\), \(T=0\), \(g=2.6\), and \(\mu=0.2\). (c), (d): Same quantities as in (a, b) with identical parameters, except for \(\lambda=-1.5\).
would also need to be broken to enable the occurrence of SDEs. One such material exemplifying this is Ti\({}_{2}\)MnAl, with space group \(F\bar{4}3M\) (No. 216)[49]. In Ti\({}_{2}\)MnAl, weak spin-orbit coupling further breaks the mirror symmetry (M\({}_{\pm 110}\)), leading to different tilts between the two mirror-symmetric Weyl points. Another set of materials can be found in the RAlX family with the space group I4\({}_{1}\)md (No. 109), where R represents rare earth metals like Pr, Ce, and Sm, and X denotes Ge or Si[50; 51]. These materials lack horizontal mirror symmetry, which increases the likelihood of asymmetric bands in the z-direction. If superconductivity could be realized in them, then they are potential candidate materials for verifying our theoretical studies.
## VII Conclusions
In this work, we delved into the intriguing phenomenon of SDEs in topological semimetals. We demonstrated, by investigating a simple 1D toy model using various numerical and analytical methods, that multiple pairing channels rather than tilting the dispersion enrich the superconducting physics and enhance the SDE. We carried this understanding to 3D Weyl and Dirac semimetals, showed the existence of the SDE in these systems, and demonstrated its enhancement due to multiple Fermi pockets and pairing channels.
Our findings hold implications for future explorations of superconducting phenomena and topological effects in condensed matter systems. Moreover, the intrinsic nature of SDEs in the presence of asymmetric band dispersions suggests a promising avenue for designing advanced superconducting devices and harnessing nonreciprocal transport in quantum technologies. Ultimately, this research opens up new directions for investigating emergent phenomena at the intersection of superconductivity and topological physics.
###### Acknowledgements.
This work was supported by the Department of Energy grant no. DE-SC0022264.
|
2303.18218 | Covering all but the low weight vertices of the unit cube | In this paper we discuss a result similar to the polynomial version of the
Alon-F\"uredi theorem. We prove that if you want to cover the vertices of the
$n$-dimensional unit cube, except those of weight at most $r$ then you need an
algebraic surface of degree at least $n-r$. | Peter Sziklai, Zsuzsa Weiner | 2023-03-31T17:19:53Z | http://arxiv.org/abs/2303.18218v1 | # Covering all but the low weight vertices of the unit cube
###### Abstract
In this paper we discuss a result similar to the polynomial version of the Alon-Furedi theorem [1]. We prove that if you want to cover the vertices of the \(n\)-dimensional unit cube, except those of weight at most \(r\) then you need an algebraic surface of degree at least \(n-r\).
Keywords: polynomial method; unit cube; Zeilberger's method
## 1 Introduction
Let \(\mathcal{Q}\) be the unit cube \(\{0,1\}^{n}\) of the vector space \(\mathbb{F}^{n}\), where \(\mathbb{F}\) is a field. There is a quadratic surface covering all the vertices of \(\mathcal{Q}\). But if we forbid to cover some of the vertices it becomes a much more difficult question how (i.e. by how small degree polynomial) can we achieve it. A typical result of this flavour states that if we forbid one vertex (e.g. the origin) then we need a polynomial of degree at least \(n\); or more generally, formulated the other way around in [1], if a polynomial of degree \(d\) does not vanish completely on the grid \(S_{1}\times...\times S_{n}\), where \(0<|S_{i}|,S_{i}\subset\mathbb{F}\ \forall i\), then it is nonzero on at least \(\min\prod y_{i}\) points of the grid, where the minimum is taken over all sets of integers \(0<y_{i}\leq|S_{i}|\ \forall i\), the sum of which is at least \(\sum|S_{i}|-d\).
There is an abundance of results related to the Alon-Furedi paper, we do not survey them here.
## 2 The main result
The _weight_ of a vector is just the number of nonzero coordinates of it. The next theorem extends the result of Alon-Furedi [1].
**Theorem 1**.: _In \(\mathbb{F}^{n}\), if for a polynomial \(f\in\mathbb{F}[x_{1},x_{2},...,x_{n}]\) of degree \(d\), we have \(f(x)=0\) for each vertex \(x\) of the unit cube except the vertices of weight \(\leq r\), where \(f(x)\neq 0\), then \(d\geq n-r\)._
Note that the theorem is sharp, an obvious example is the following polynomial (and there are many others).
**Example 2**.: _If \(\mathrm{char}\ \mathbb{F}=0\) or \(n<\mathrm{char}\ \mathbb{F}\) then_
\(f(x_{1},x_{2},...,x_{n})=\prod_{s=r+1}^{n}(x_{1}+x_{2}+...+x_{n}-s)\) _is a polynomial vanishing on the vertices of the unit cube of weight at least \(r+1\) and nonzero on the rest._
There are many versions and proofs of similar results, see [1]. Here we show one, which depends on careful examination of the coefficients of the polynomial.
**Proof** of the theorem. Suppose that, on the contrary, \(d<n-r\). Write
\[f(x_{1},x_{2},...,x_{n})=\sum_{0\leq i_{1}+i_{2}+...+i_{n}\leq d}a_{i_{1},i_{2},...,i_{n}}x_{1}^{i_{1}}x_{2}^{i_{2}}...x_{n}^{i_{n}}\ \.\]
We say that a term _contains_ the variable \(x_{k}\) if the exponent of \(x_{k}\) in the term is nonzero. Define \(\alpha_{\{j_{1},j_{2},...,j_{s}\}}\) or \(\alpha_{j_{1},j_{2},...,j_{s}}\) as the sum of the coefficients of the terms of \(f\), _containing_ precisely the variables \(x_{j_{1}},x_{j_{2}},...,x_{j_{s}}\) (i.e. with exponent at least 1) but no other variables. Note that our assumption \(d<n-r\) implies that
\[\alpha_{J}=0\ \mbox{for all}\ J\subset\{1,...,n\},\ |J|\geq n-r.\]
Substituting vertices of \({\cal Q}\) with weight \(\leq r\) (i.e. vectors with at most \(r\) coordinates being 1 and all the others zero), we get that
\[\alpha_{J}\neq-\sum_{A\subsetneq J}\alpha_{A}\ \ \ \ \ \ \ \mbox{for}\ 1\leq s\leq r,\ J\subseteq\{1,...,n\},\ |J|=s\ \.\]
Now substituting vertices of \({\cal Q}\) with weight \(s\), where \(0\leq s\leq n\), and denoting \(r^{s}=\min(s,r)\), by Mobius-inversion we get that for \(J\subseteq\{1,...,n\},\ |J|=s\)
\[\alpha_{J}=\sum_{A\subseteq J}(-1)^{|J\setminus A|}f(A)=\sum_{u=0}^{r^{*}}(-1 )^{s-u}\sum_{A\subseteq J\atop|A|=u}\sum_{B\subseteq A}\alpha_{B}=\]
\[\sum_{u=0}^{r^{*}}(-1)^{s-u}\sum_{B\subseteq J\atop|(B|\leq u)}{s-|B|\choose u -|B|}\alpha_{B}=\sum_{t=0}^{r^{*}}\left(\sum_{u=t}^{r^{*}}(-1)^{s-u}{s-t\choose u -t}\right)\sum_{B\subseteq J\atop|B|=t}\alpha_{B}\.\ (**)\]
As
\[\sum_{u=t}^{r^{*}}(-1)^{s-u}{s-t\choose u-t}=\left\{\begin{array}{ll}1&\mbox {if}\ t=s=r^{*};\\ 0&\mbox{if}\ 0\leq t<s=r^{*};\ \mbox{and}\\ (-1)^{s-r^{*}}{s-1-t\choose r^{*}-t}&\mbox{otherwise};\end{array}\right.\]
from \((**)\) we have in the case \(s\leq r\) (the obvious)
\[\alpha_{J}\ =\ \alpha_{J}\ \ ;\]
while in the case \(r<s\leq n\) we get
\[\alpha_{J}\ =\ \sum_{t=0}^{r}(-1)^{s-r}{s-1-t\choose r-t}\sum_{B\subseteq J \atop|B|=t}\alpha_{B}\ \.\]
This is a set of linear equations, and its equations can be indexed by the complement sets \(\bar{J}=\{1,...,n\}\setminus J\) and the "variables" are the coefficient sums \(\alpha_{B}\) for the subsets \(B\subseteq\{1,...,n\},\ \ |B|\leq r\). If we consider the equations \(|\bar{J}|\leq r\) then we get a system of _homogeneous_ linear equations of size \(\sum_{i=0}^{r}{n\choose i}\ \times\ \sum_{i=0}^{r}{n\choose i}\),
as the corresponding \(\alpha_{J}\) values on the _left hand sides_ are all zero by \((*)\).
**Firstly**, suppose that \(r<n/2\).
The rows and the columns of the matrix \(M\) of this system of equations are indexed by the subsets of size at most \(r\) of \(\{1,...,n\}\), and an entry \(m_{A,B}\) is equal to \((-1)^{n-r-|A|}{n-1-|A|-|B|\choose r-|B|}\) whenever \(A\) and \(B\) are disjoint subsets, and zero otherwise.
**Claim:**\(M=M^{-1}\).
Proof: in \(MM\), the entry indexed by the subsets \(A\) and \(B\) is the following:
\[\sum_{U}m_{A,U}m_{U,A}=\sum_{U\subseteq\bar{A}}(-1)^{|A|+|U|}{n-1-|A|-|U| \choose r-|U|}{n-1-|A|-|U|\choose r-|A|}=\]
\[(-1)^{|A|}\sum_{u=0}^{\min(n-|A|,r)}(-1)^{u}{n-|A|\choose u}{n-1-|A|-u\choose r -u}{n-1-|A|-u\choose r-|A|}=1\.\]
If \(A\neq B\) then
\[\sum_{U}m_{A,U}m_{U,B}=\sum_{U\subseteq\bar{A}\cup B}(-1)^{|A|+|U|}{n-1-|A|-| U|\choose r-|U|}{n-1-|B|-|U|\choose r-|B|}=\]
\[(-1)^{|A|}\sum_{u=0}^{\min(n-|A\cup B|,r)}(-1)^{u}{n-|A\cup B|\choose u}{n-1-| A|-u\choose r-u}{n-1-|B|-u\choose r-|B|}=0\.\]
These equalities can be proved by Zeilberger's method (see the Appendix), we used the fastZeil Mathematica package developed by Paule, Schorn and Riese [2]. We are grateful for them to share the package with us and for their helpful advice.
Hence \(M\) is invertible indeed and the unique solution is \(\alpha_{J}=0\) for all \(|J|\leq r\). But this is a contradiction.
**Secondly**, suppose that \(r\geq n/2\).
Now the matrix \(M\) is similar, but (as we have now equations for \(n-r\leq s\leq r\)), it contains rows belonging to equations \(\alpha_{J}=\alpha_{J}\), i.e. in the row indexed by \(A=\bar{J}\), \(|J|=s,\ n-r\leq|A|\leq r\), the element \(m_{A,B}=1\) for \(B=\bar{A}\) and zero otherwise.
The rows and the columns of the matrix \(M\) of this system of equations are still indexed by the subsets of size at most \(r\) of \(\{1,...,n\}\), and the rows indexed by sets of size less than \(n-r\) remained the same, i.e. the entry \(m_{A,B}\) is equal to \((-1)^{n-r-|A|}{n-1-|A|-|B|\choose r-|B|}\) whenever \(A\) and \(B\) are disjoint subsets, and zero otherwise.
Note that if we order the index sets increasingly w.r.t. their size, and in the same
way for rows and columns, then in \(M\) we can see an \(\sum_{i=n-r}^{r}\binom{n}{i}\times\sum_{i=n-r}^{r}\binom{n}{i}\) identity matrix in the bottom-right corner, only zeroes on its left, and in the upper-left corner we find \(M_{0}\) of size \(\sum_{i=0}^{n-r-1}\binom{n}{i}\times\sum_{i=0}^{n-r-1}\binom{n}{i}\) which is similar to the 'old' version of \(M\) above and we can prove \(M_{0}=M_{0}^{-1}\).
It follows that \(M\) is invertible indeed and the unique solution is \(\alpha_{J}=0\) for all \(|J|\leq r\). But this is a contradiction again.
We note that in the extremal case \(d=n-r\) the same equalities can be used to describe the \(\alpha_{J}\)-s; there remains a lot of freedom to choose the coefficients of \(f\).
## 3 Appendix
Here we sketch the proof of the two equalities (1) and (2) which serve the proof of \(M=M^{-1}\). Note that for \(r=0\), the matrix \(M\) is 1-by-1 with its only entry being \((-1)^{n}\); while for \(r=1\) we have an \((n+1)\times(n+1)\) matrix for which, again, it is easy to check (1) and (2).
Now to prove (1) let
\[S_{1}(r)=\sum_{u=0}^{r}(-1)^{u+|A|}\binom{n-|A|}{u}\binom{n-1-|A|-u}{r-u}\binom {n-1-|A|-u}{r-|A|}.\]
Note that in (1) the sum runs until \(\min(n-|A|,r)\) which is \(r\) as \(r<n/2\). We want to show that \(S_{1}(r)=1\), for \(r<n/2\). Let \(n-|A|=m\) and \(|A|=a\). Zeilberger's method provides the recursion:
\[-(a-r-1)(m-r-1)(a+m-2r-4)(a+m-r-1)\ S_{1}(r)+\\ (a+m-2r-3)(a^{2}m-a^{2}r-a^{2}+am^{2}-2amr-2am+ar^{2}+ar-a-m^{2}r -m^{2}+\\ mr^{2}+mr-m+2r^{2}+6r+4)\ S_{1}(r+1)-\\ -(r+2)(a-r-2)(m-r-2)(a+m-2r-2)S_{1}(r+2)=0\]
For \(r,a<n/2\), the coefficient of \(S_{1}(r+2)\) is nonzero. From the first paragraph of this section, \(S_{1}(r)=1\) for \(r=0,1\) and so, comparing the coefficients of \(S_{1}(r),S_{1}(r+1)\) and \(S_{1}(r+2)\) we get, by induction, that \(S_{1}(r)=1\) for all \(r\).
In order to prove (2) let
\[S_{2}(r)=\sum_{u=0}^{r}(-1)^{u+|A|}\binom{n-|A\cup B|}{u}\binom{n-1-|A|-u}{r- u}\binom{n-1-|B|-u}{r-|B|}.\]
In (2), the sum runs until \(\min(n-|A\cup B|,r)\), but when \(u>n-|A\cup B|\) then \(\binom{n-|A\cup B|}{u}=0\), so the result does not change if we sum up to \(r\). We want to show that \(S_{2}(r)=0\), for \(r<n/2\). Let \(n-|A\cup B|=m\), \(|A\cap B|=w\), \(|A|=a\)
and \(|B|=b\). Zeilberger's method provides the recursion:
\[-(a-r-1)(b+m-r-w-1)(a+b+m-2r-w-4)(a+b+m-r-w-1)S_{2}(r)\] \[-(a+b+m-2r-w-3)(a^{2}b-a^{2}r-a^{2}w-2a^{2}+ab^{2}+abm-2abr-3abw-4 ab-2amw\] \[-am+ar^{2}+4arw+5ar+2aw^{2}+7aw+5a-b^{2}r-b^{2}w-2b^{2}-2bmw-bm+ br^{2}+4brw\] \[+5br+2bw^{2}+7bw+5b+m^{2}r-m^{2}w+m^{2}-mr^{2}+2mrw-mr+2mw^{2}+4 mw+m\] \[-3r^{2}w-2r^{2}-3rw^{2}-11rw-6r-w^{3}-5w^{2}-9w-4)S_{2}(r+1)\] \[+(r+2)(b-r-2)(-a-m+r+w+2)(a+b+m-2r-w-2)S_{2}(r+2)=0.\]
Again we see that \(S_{2}(r)=0\) for \(r=0,1\) and the coefficient of \(S_{2}(r+2)\) is nonzero when \(r<n/2\) and so \(S_{2}(r)\) is always \(0\).
## 4 Addendum
After publication of this paper, the authors learned that a more general version of their result had been proved independently, slightly earlier, by Venkitesh [3], Corollary 33. In [3], this is a corollary of a nice, rather complex series of results, so our 2 or 3 pages long proof remains still interesting; and we believe that this application of Zeilberger's method is still worth publishing.
## 5 Acknowledgements
The second author acknowledges the partial support of the National Research, Development and Innovation Office - NKFIH, grant no. K 124950. The first author is grateful for the partial support of project K 120154 of the National Research, Development and Innovation Fund of Hungary; and for the support of the National Research, Development and Innovation Office within the framework of the Thematic Excellence Program 2021 - National Research Subprogramme: "Artificial intelligence, large networks, data security: mathematical foundation and applications".
|
2310.18320 | AI (r)evolution -- where are we heading? Thoughts about the future of
music and sound technologies in the era of deep learning | Artificial Intelligence (AI) technologies such as deep learning are evolving
very quickly bringing many changes to our everyday lives. To explore the future
impact and potential of AI in the field of music and sound technologies a
doctoral day was held between Queen Mary University of London (QMUL, UK) and
Sciences et Technologies de la Musique et du Son (STMS, France). Prompt
questions about current trends in AI and music were generated by academics from
QMUL and STMS. Students from the two institutions then debated these questions.
This report presents a summary of the student debates on the topics of: Data,
Impact, and the Environment; Responsible Innovation and Creative Practice;
Creativity and Bias; and From Tools to the Singularity. The students represent
the future generation of AI and music researchers. The academics represent the
incumbent establishment. The student debates reported here capture visions,
dreams, concerns, uncertainties, and contentious issues for the future of AI
and music as the establishment is rightfully challenged by the next generation. | Giovanni Bindi, Nils Demerlé, Rodrigo Diaz, David Genova, Aliénor Golvet, Ben Hayes, Jiawen Huang, Lele Liu, Vincent Martos, Sarah Nabi, Teresa Pelinski, Lenny Renault, Saurjya Sarkar, Pedro Sarmento, Cyrus Vahidi, Lewis Wolstanholme, Yixiao Zhang, Axel Roebel, Nick Bryan-Kinns, Jean-Louis Giavitto, Mathieu Barthet | 2023-09-20T15:53:36Z | http://arxiv.org/abs/2310.18320v1 | # AI (r)evolution - where are we heading?
###### Abstract
Artificial Intelligence (AI) technologies such as deep learning are evolving very quickly bringing many changes to our everyday lives. To explore the future impact and potential of AI in the field of music and sound technologies a doctoral day was held between Queen Mary University of London (QMUL, UK) and Sciences et Technologies de la Musique et du Son (STMS, France). Prompt questions about current trends in AI and music were generated by academics from QMUL and STMS. Students from the two institutions then debated these questions. This report presents a summary of the student debates on the topics of: Data, Impact, and the Environment; Responsible Innovation and Creative Practice; Creativity and Bias; and From Tools to the Singularity. The students represent the future generation of AI and music researchers. The academics represent the incumbent establishment. The student debates reported here capture visions, dreams, concerns, uncertainties, and contentious issues for the future of AI and music as the establishment is rightfully challenged by the next generation.
## 1 Introduction
Deep learning-based technologies are evolving very quickly, and seem to become the basis of numerous changes in our everyday life. However, self-driving cars, conversational agents, machine-based language translation, and image generators are only the tip of the iceberg. Machine learning techniques are developed for a growing list of application domains: medicine, smart cities, humanoid robots, management of electricity grids, and they are starting to be used in fundamental research disciplines like physics, chemistry, genetics, and even mathematics.
Given the apparent proliferation of technologies that enable learning from data, the Queen Mary University of London (QMUL) & Sciences et Technologies de la Musique et du Son (STMS) doctoral day held on 13 Feb 2023 aimed to stimulate discussions about the future impact and potential of Artificial Intelligence (AI) in the field of music and sound technologies. Eight prompt questions about current trends in AI and music, and discourse surrounding AI were generated by academics from QMUL and STMS. Students from the two institutions then formed teams to discuss and debate these questions. This report presents a summary of the discussions by research students from QMUL and
STMS. Given the healthy debate around the questions, it must be noted that the opinions in this text are not necessarily shared by all authors.
## 2 Data, Impact, and the Environment (group A)
**Question 2.1**: **Machine learning systems rely on large amounts of data that are difficult and expensive to generate and process. How can we ensure that academic research remains competitive and innovative compared to the various industrial players that have access to better computational resources and larger amounts of data?**
Firstly, redundant effort on the same task could be avoided: once there is enough industrial interest in a specific task, it could be a sign for the academia to shift their focus towards unsolved research questions. For example, in the field of source separation, the first breakthroughs happened with [20] and U-net [14] came out in 2017 by Sony and Spotify respectively, but based on private datasets. After a few years of the SiSEC (Signal Separation Evaluation Campaign) challenge running and more public datasets made available did we finally come across an open-source implementation, Open-Unmix [15], that matched industry model performance while using the same architecture from [20]. Subsequently, Deezer released Spleeter [1], using a very similar U-net architecture. While the contribution may be significant in terms of making tools more accessible to a wider audience, it appears to be limited in terms of methodology. Since then, the state-of-the-art for music source separation has consistently been pushed by industry research, such as the latest Hybrid Demucs model [13], which often relies on smaller contributions from academic researchers to improve performance. At this point, we see a trend of models trained by academia (such as LaSAFTNet [12]) on publicly available datasets that rarely outperform state-of-the-art models released by industry giants under the common evaluation setting. While their contributions may not always lead to major breakthroughs in the field, they are nevertheless crucial in advancing research in specific cases uncovered by the industry such as instrument-specific separation, acappella separation, blind separation, etc.
Secondly, it is crucial to encourage the creation of finished prototypes or ready-to-use tools in academia, properly licensed to avoid situations where a company reimplements it and sells it as a commercial product. This is difficult for some products, such as hardware. And it requires efforts to maintain a product. Therefore ideally industry and academia could work together in developing and maintaining a product, through technology transfer, including licensing and spinout. To this end, academia should recognize accomplishment in releasing finished prototypes so that student researchers would be more encouraged to do so.
Thirdly, while huge models like GPTs are trained to solve the general problem of natural language understanding and generation, academic researchers could aim at solving specific research questions with smaller models and limited resources. Academia could take advantage of the large pretrained models from the industry, and the industry could get insights on improving every single task. This approach could lead to more efficient use of resources and a better understanding of how to solve specific research questions.
In conclusion, by avoiding redundant efforts, encouraging creating ready-to-use tools and technology transfer, and utilizing pre-trained models from the industry, academia can remain competitive and innovative despite the better resources and larger datasets available to industrial players.
### Question 2.2 What can be done about the environmental impact of deep learning approaches?
The increasing use of AI in academic research and the industry is raising concerns about its environmental impact. Several practices have been suggested in [17], including reporting training time, sensitivity for machine learning models, and sharing local infrastructure. A set of efficiency measures is proposed in [18], such as carbon emission and electricity usage.
In this context, there are other initiatives that can be taken to address these concerns. First, researchers should be encouraged to report the amount of energy consumption or carbon emissions
used in their publications. This information will help raise awareness of the environmental impact of AI research, and incentivize researchers to develop more efficient algorithms and training methods.
Secondly, a task-specific energy rating system can be implemented to label the energy level of open-source pretrained models. This rating system could be displayed on open source model hubs such as huggingface 1. By doing so, people can get an idea of the energy consumption behind each model, and choose a more energy-efficient model for their specific task.
Footnote 1: [https://huggingface.co/](https://huggingface.co/)
However, it is important to note that the more efficient a model becomes, the more it would be used, leading to equal or more energy consumption. This paradoxical situation is similar to the development of engines. The overall increase in energy consumption seems to be inevitable.
Back in 2009, there was a debatable article saying that "Performing two Google searches from a desk computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea" 2. Another evolution with ChatGPT by Microsoft is happening right now. Search engines have become a big part of people's lives, and we cannot abandon them. Similarly, AI is necessary for the advancement of technology, but we should be concerned about energy consumption and continue to explore ways to reduce it.
Footnote 2: [https://searchengineland.com/calculating-the-carbon-footprint-of-a-google-search-16105](https://searchengineland.com/calculating-the-carbon-footprint-of-a-google-search-16105)
Meanwhile, we can be optimistic about the use of eco-friendly energy sources such as solar and wind energy, as well as more energy-efficient computers such as biological computers becoming more accessible in the near future.
## 3 Responsible Innovation and Creative Practice (group B)
**Question 3.1**: _What does responsible innovation mean for the field of AI for sound and music? How should we envision the use of AI in sound processing and music in a way that artists would find rewarding, and how can we avoid negative impacts, for example for composers and performers?_
In recent years, we have observed a considerable rise in public interest towards AI generative models. Many discussions have arisen in the media which question whether image generation tools will eventually'replace' visual artists, alongside assessing the ethics of using artists' work to train these models without their economic compensation [14, 15]. Recent advancements in audio generation have extended these ethical discussions to the music industry, where musicians are now also beginning to face very similar issues.
Whilst there are many commonalities between the domains of the visual arts and music, such as models of remittance/retribution, it appears that the bond between spectator and artist is generally stronger in popular music than in the visual arts. With these economic concerns in mind, it is hard to imagine a fan culture centered around AI generated music, at least in the form we see today towards our most popular artists. A more immediate issue, however, is that of AI technologies automating the jobs, such as sound engineering or mastering, which many musicians undertake to support themselves financially alongside their composition and performance work. The rate at which new AI models are released calls for a discussion across disciplines with regards to how technology's impact upon these professions may be mitigated. We suggest that, rather than developing tools that aim to _solve_ a problem (e.g., solve automatic mixing), we should instead aim to create tools that _assist_ or _support_ the artist's work in that particular task [1]. Ultimately, the AI technologies (i.e., the models) would be the same, but instead of being presented as close-ended generators, they could be developed as interfaces musicians can interact with and include in their workflows. Even the best automatic mixing model does not possess the same reflexivity that an expert mixing engineer has - a sound engineer might have listened to a particular piece of music earlier on in the day that inspires them to approach mixing in a new way. Such a dynamicity has not yet been encountered in our current AI music models, which are yet to adopt such an approach to their materials and past predictions.
The rigidity of these AI models has further implications in how we define and understand our musical idioms. Typically, AI technologies are designed to both codify patterns for use in generative environments and extract dominant characteristics from arbitrary data points. The prominence of these
patterns and characteristics are typically influenced through a curated training and evaluation process which, when employed for creative purposes, serve to imbue some desired semantic/semiotic/qualitative values. In the sound and music domain, these characteristics are often inferred from the tradition of Western music theory, particularly functional harmony, metric understandings of rhythm, standardised instrumental/orchestral forces and the cultural artefacts that symbolically coincide with them. Although Western music theory has it uses, and is both studied and understood by many, its effectiveness in both practice and technological contexts is intrinsically limited. In many cases, music theory does not even apply to many Western music contexts, and it also struggles to remain relevant when extended to other cultural traditions (see e.g. [1]). As a result of this, practitioners tend to have a dynamic relationship towards their understanding of music theory, employing it instinctively, and defying it or redefining it on a regular basis. These limitations of Western music theory are similarly transferred to the technologies that robustly and naively center around them, which detracts from our more general cultural understanding as much as it limits the applicability and scope of these technologies.
Moreover, the development of AI technology, as it is currently organised, leaves little place for critical discussion on 'why' and 'how' our AI tools are created outside of a race for scientific progress. AI technologies are too often immediately deployed and shared as functional and effective tools without concern for the ethical and sociotechnical questions that the concept of a 'tool' entails. This is heightened by the frequent use of analogies between neural networks and biological processes in both research and pedagogical contexts (e.g. the analogy between the brain neuron and the neuron from a neural network). This drives a narrative in which AI technologies are 'naturalised', giving them a sort of autonomy, as if AI researchers and developers' tasks were to discover or to unearth the natural processes behind the 'already existing' AI and deep learning technologies, akin to studying animal or human intelligence. Instead, it must be stated that these technologies are human-made, and that society, and (especially) researchers and programmers, hold responsibility for how and why they are created, the biases they contain and the position they occupy within our culture.
A de-naturalisation or de-mystification of AI would contribute to refocusing responsibility towards the researchers and developers that build these technologies. That being said, the impact of these technologies is not foreseeable from the perspective of a single discipline. Responsibility is achieved through interdisciplinary research narratives - a responsible innovation takes care over all disciplines which relate to the specific topic of study. What Donna Haraway termed "response-ability" [11], Debaise & Stengers describe as "the capacity to be accountable for an action or an idea to those for whom the action or idea will have consequences" [12, p.17]. Response-ability, in this sense, is the desire for technological research and culture to work together to preserve and strengthen their interdependence. It encourages them to come "face-to-face" [11] with one another, to be able to empathise with one another, and to remain considerate and explicit with regards to the power that one might have over the other. As technological and cultural landscapes evolve sympathetically, it is the responsibility of those who forge them to engender and maintain their togetherness.
**Question 3.2**: **Currently, most research activities dealing with AI for sound and music focus around generation, analysis, and synthesis. Can we imagine AI contributions to music playing, music listening, musical performances, manufacturing of (augmented) instruments, acoustic contexts such as concert halls, the diffusion or design of sound, or other domains not yet considered?**
Mainstream media, industries and publications tend to gravitate towards AI models that focus upon tasks such as music generation conditioned on artists or genres (e.g., Jukebox [13]) or, more recently, text-to-music generation (e.g., MusicLM [1]). These tasks are highlighted and celebrated for their impressive coverage and holistic scope, whilst the achievements of many smaller AI technologies are often easy to understate and overlook. More situated, ambiguous or highly contextualised uses of AI, such as augmented instrument design, do not receive the same mainstream attention, despite their own ingenuity and prowess. The more specific an application of AI becomes, or the more situated it becomes within a particular art practice, the more likely it is to occupy a niche position in the overall conversation surrounding AI for sound and music.
The prevalent tasks in sound and music generation have also carved out a particular space in academia, encouraging the formation of standardised benchmarks, the establishment of objective evaluation metrics, and the creation of publications which focus solely on the evaluation of these AI models [10]. Tasks that are harder to evaluate or standardise have greater difficulty entering into the academic publication schema with the same status. As these larger generative tasks are typically more open-ended and relatable, there is much more opportunity for others to become influenced by these models, and continually develop upon them. This encourages research to fall into distinct trends, where technologies linearly influence the development of one another, and tasks become more generalised as they develop. As a result, agreed upon quantitative standards arise to compare the multitude of approaches towards a given task, and the breadth and scope of these evaluations is effectively narrowed. This progress narrative, akin to what Thomas Kuhn describes as "normal science" [14], produces the perception that everybody is working on the same thing, and marginalises those who are working outside of these dominant approaches.
As the development and use cases of these mainstream technologies become more generalised, they too begin to have subversive effects on cultural practice and its related industries. Technological pursuits which may have originated from ideas relating to the creation of sounds in specific contexts, have now matured and become generalised synthesis tools, equally applicable to a wide range of scenarios. And as these tools become more widely applicable, they also obfuscate the need for more context-dependent devices, as well as techniques for some of our more underused and unique musical practices and idioms. In line with this idea of 'normal science', we can think of these technologies as encouraging a sense of 'normal practice', whereby cultural development is fixated on perfecting some major aspect of its activity, as opposed to encouraging experimentalism and the accumulation of new practice driven techniques. As far as the grander narrative of computer music research is concerned, this is a distinct shift from the practices of our previous cultures of research - those whom, like Jean Claude-Risset and Miller Puckette, engaged in their research as means of furthering both their own creative practice and the landscape of potential creative techniques and ideas [15].
In terms of responsible approaches towards AI, and the development of technologies that are successful outside the dominant trends of generative music, synthesis and analysis, it is important to remain situated within and connected to exploratory and experimental arts practices. As an incentive for research, many of the fringe aesthetics involved in these cultures incite the curation of new directives and techniques for creative and technological expression. Although these practices generally fall outside of the mainstream, and similarly will not receive the same media and industrial attention as the larger AI models do, they are just as important for the development of technology as they are for the continued growth of our cultural narrative and understandings. Where the ethical quandaries towards artists and their practices are concerned, research that pertains a strong relationship with the arts and its development aligns itself with Haraway's aforementioned idea of "response-ability" [17]. In this sense, technological development and research may embody the same sense of creativity and inventiveness as the cultural practices it works alongside. And in doing so, in supporting these more situated affinities, the overarching narrative of progress is mitigated and directed away from a culture of normal practice and obfuscation, and towards one where artists, musicians and technologists can continue to prosper together, in sympathy and interdependently.
## 4 Creativity and Bias (group C)
**Question 4.1**: _AI generators like DALL-E 2, or MusicLM produce media content based on text prompts, but can we call these systems creative? What is creativity and what are the characteristics of an activity we would want to call creative? To what extent can we expect to find these characteristics in a DNN?_
Prior to assessing the creative potential of deep generative models such as DALL-E [13] or MusicLM [1], one should discuss the core concept of _creativity_ itself, which is complex and challenging to define precisely. Among the many attempts to define creativity, one can identify recurrent properties which are shared by the majority of literature on the topic, namely the ideas of _novelty_, _intention_ and _cultural relevance_[2]. Creative acts are purposeful and deliberate, involving
the author's intent and choices, which is fundamentally different from the process of imitation. As current deep generative models like DALL-E and MusicLM are trained to extract statistical patterns from the training data, they lack the required intentionality to pursue a true creative act. Furthermore, the authors in [20] argue that, even in a scenario where machines would develop a sense of volition/intention, thus being able to create something uninfluenced by human-made art, it would not be possible for us to understand their creative outcome, for we are bound to frame such outcome from a human perspective.
Additionally, we suggest that the best we can aim at, as human beings standing by an artistic creation by another entity, is to an understanding of what could have motivated another human being to create such a work.
These systems can also be intended as proposal generators, which can be refined by the end user. One interesting aspect is that many artists deliberately deviate from the original objective of such models by feeding them unexpected inputs. To this extent, they seek to break the initial system in order to fully leverage all the expressive abilities of these models [17].
In the end, we believe that generative systems should not be aimed at replacing humans but rather as offering co-creative tools [1] which could complement and extend current instruments and they should be designed to reflect the will to pursue artistic intentions.
Question 4.2 How can we reduce the bias in machine learning models? E.g. bias towards Western musical cultures?
There are many aspects to be considered when reflecting on biases in machine learning models. It is important to recognize that we cannot completely unbias the output of a generative model, since there will always be an unremovable source of bias coming from the data used to train the system on. Still, cultural biases can represent a discriminating factor towards under-represented scientific and artistic communities. When attempting to reduce such components, encouraging diversity should be the main goal. To this end, we believe that it is important to reinforce the need for datasets that target culturally different segments. The creation of special conference tracks or journal special issues encouraging the production and release of datasets that are culturally diverse (e.g. International Society for Music Information Retrieval (ISMIR)) can help stimulating the development of models that are not biased towards, e.g., Western musical cultures, bringing together researchers and practitioners from different backgrounds and cultures.
It is also important to recognize that biases can sometimes be task-oriented or useful, especially in user-oriented deep learning products where biases from a user perspective may be desirable. For example, a recommendation system for a particular genre of music may be biased towards that genre, but it may also be useful for users who are looking for music in that genre. It is essential to evaluate biases on a case-by-case basis to determine whether they are helpful or detrimental.
Finally, introducing the correct inductive biases in deep models can help ease down the learning process and reduce the number of training examples required [1]. It is thus important to carefully differentiate whether bias can potentially be harmful, e.g. coming from a strongly biased dataset, or useful, as in the case where modelling choices can enhance the learning capabilities of such artificial agents.
## 5 From Tools to the Singularity (group D)
Question 5.1 Is the singularity around the next corner or are the implicit limitations to AI systems that will keep them "dumb" irrespective of the amount of data we train them with?
The concept of singularity - a point in time when artificial intelligence surpasses human intelligence and becomes self-improving [10] - has been the subject of much speculation and debate. In our discussion, we explored the question of whether the singularity is around the next corner or if there are inherent limitations that will keep AI systems "dumb" regardless of the amount of data used to train them.
While we did not arrive at a definitive answer, we acknowledged that there have been numerous breakthroughs in AI recently that are becoming more frequent. Early in 2017, R-net [14]
has demonstrated performance beyond the level of human experts in the Question Answering task; Following the theory of scaling law [13], recent large models containing massive parameters and trained with massive data have indeed shown better performance than previous AI models on various tasks [14].
There are many challenges and limitations that AI systems face, even with large amounts of data. The first thing that should be considered is the potential ethical issues. For instance, there are concerns around data privacy, stereotypes, and the ethical implications of AI-driven decision-making [1, 2, 1]. These issues are already being manifested in a wide range of AI applications and have been embodied as never before with the development of large-scale models, from predictive policing and facial recognition technology [15], to disputes over copyright ownership [16].
In this context, the intelligence exhibited by AI systems may be criticized as "dumb" since it cannot guarantee responsible output. Given the rapidly evolving nature of these technologies, it is essential for humans to consider the ethical implications of AI systems and engage in ongoing dialogues to best manage their associated risks and benefits. However, it is challenging to address this issue fundamentally due to the neural network structure of large AI models, which often lack interpretability [13]. Despite this challenge, it is crucial to prioritize transparency, accountability, and openness in AI development and engage with a broad range of stakeholders, including government regulators, civil society organizations, and affected communities.
Overall, our discussion implies that despite the potential for large models to surpass human performance and reach the singularity, they may still exhibit "dumb" aspects that need to be addressed. Thus, it is crucial to focus on the more immediate and pressing concerns related to AI applications to ensure that these technologies are developed and deployed in a manner that aligns with our values and promotes the greater good.
**Question 5.2**: **Do AI systems escape the notion of being tools? Applied to the field of sound and music, are they qualitatively different from digital audio tools? Or is their contribution limited to that of a tool in the hands of artists or listeners, where these new AI-based techniques simply allow doing things better/differently?**
To answer the question of whether AI systems in sound and music can be seen as more than just tools, it is crucial to first distinguish between a tool and an agent and provide some definitions. Generally, from a philosophical point of view, one could ask whether AI systems have agency, autonomy, intentionality, consciousness, or moral responsibility [17]; from a practical point of view, one could ask whether AI systems can perform tasks that are beyond the capabilities or expectations of human users, or whether they can influence or interact with human users in ways that are not predetermined by their design. In the field of sound and music, we should consider how AI systems affect the roles and relationships between composers, performers, listeners, and critics.
While music AI systems still operate within the parameters set by their programmers and users, they do have the ability to introduce new levels of surprise, unpredictability, and creativity into the creative process:
1. One example of this is the creation of AI-generated music [2], where AI systems can generate original pieces of music based on existing data or specifications. This can be seen as going beyond the role of a tool and into the realm of a creative collaborator or even composer. However, while AI-generated music has been around for at least 20 to 40 years, there has yet to be a sizable community of people coalescing around it, unlike the case with the development of new musical instruments or algorithms in the past.
2. Another example is the use of AI for sound design [23], where AI systems can be used to generate or manipulate sounds in ways that would be difficult or impossible for a human to do manually. This can be seen as expanding the creative possibilities of sound design beyond what traditional tools allow.
AI is also playing a larger role in shaping how people consume music through the rise of AI-generated playlists [15] and personalized music recommendations [16]. This can be seen
as going beyond the role of a tool in the hands of listeners and into the realm of a new type of music curator or even a tastemaker.
However, there are still valid arguments for viewing AI systems in sound and music as simply tools that assist artists and listeners in achieving their goals. It is important to consider the limitations and challenges of these technologies in creating truly surprising and innovative works. Overall, the potential for AI to develop intentionality and generate new forms of art is intriguing, but it also highlights the need for ongoing discussion and exploration in this field.
## Acknowledgments
The doctoral day was supported by Sciences et Technologies de la Musique et du Son (STMS, France) and by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, supported jointly by UK Research and Innovation [grant number EP/S022694/1] and Queen Mary University of London (QMUL, UK).
\({}^{*}\)Nick Bryan-Kinn's is now at the Creative Computing Institute, University of the Arts London.
|
2309.12008 | NanoSLAM: Enabling Fully Onboard SLAM for Tiny Robots | Perceiving and mapping the surroundings are essential for enabling autonomous
navigation in any robotic platform. The algorithm class that enables accurate
mapping while correcting the odometry errors present in most robotics systems
is Simultaneous Localization and Mapping (SLAM). Today, fully onboard mapping
is only achievable on robotic platforms that can host high-wattage processors,
mainly due to the significant computational load and memory demands required
for executing SLAM algorithms. For this reason, pocket-size
hardware-constrained robots offload the execution of SLAM to external
infrastructures. To address the challenge of enabling SLAM algorithms on
resource-constrained processors, this paper proposes NanoSLAM, a lightweight
and optimized end-to-end SLAM approach specifically designed to operate on
centimeter-size robots at a power budget of only 87.9 mW. We demonstrate the
mapping capabilities in real-world scenarios and deploy NanoSLAM on a
nano-drone weighing 44 g and equipped with a novel commercial RISC-V low-power
parallel processor called GAP9. The algorithm is designed to leverage the
parallel capabilities of the RISC-V processing cores and enables mapping of a
general environment with an accuracy of 4.5 cm and an end-to-end execution time
of less than 250 ms. | Vlad Niculescu, Tommaso Polonelli, Michele Magno, Luca Benini | 2023-09-21T12:27:18Z | http://arxiv.org/abs/2309.12008v1 | # NanoSLAM: Enabling Fully Onboard SLAM
###### Abstract
Perceiving and mapping the surroundings are essential for enabling autonomous navigation in any robotic platform. The algorithm class that enables accurate mapping while correcting the odometry errors present in most robotics systems is Simultaneous Localization and Mapping (SLAM). Today, fully onboard mapping is only achievable on robotic platforms that can host high-wattage processors, mainly due to the significant computational load and memory demands required for executing SLAM algorithms. For this reason, pocket-size hardware-constrained robots offload the execution of SLAM to external infrastructures. To address the challenge of enabling SLAM algorithms on resource-constrained processors, this paper proposes NanoSLAM, a lightweight and optimized end-to-end SLAM approach specifically designed to operate on centimeter-size robots at a power budget of only \(87.9\,\mathrm{mW}\). We demonstrate the mapping capabilities in real-world scenarios and deploy NanoSLAM on a nano-drone weighing \(44\,\mathrm{g}\) and equipped with a novel commercial RISC-V low-power parallel processor called GAP9. The algorithm is designed to leverage the parallel capabilities of the RISC-V processing cores and enables mapping of a general environment with an accuracy of \(4.5\,\mathrm{cm}\) and an end-to-end execution time of less than \(250\,\mathrm{ms}\).
SLAM, Mapping, Nano-Drone, UAV, Constrained Devices.
## I Introduction
The field of autonomous pocket-size robotics systems and Unmanned Aerial Vehicles (UAVs) experienced rapid growth in the past years due to the advancement and miniaturization of capable embedded computing platforms creating new possibilities for IoT applications [1, 2, 3, 4]. Nano-robots, and especially palm-size UAVs, weigh only a few tens of grams and benefit from increased agility compared to their standard-size counterparts, enabling them to fly in narrow spaces reliably [5, 6]. Furthermore, their reduced dimensions make nano-UAVs perfect candidates for safely operating near humans, especially in ramped indoor environments [7, 8]. In most practical applications, the mission of the nano-UAV is to follow a path through the environment that is predefined or adjusted dynamically during the mission [5, 7]. For instance, finding the source of gas leaks [9] or localizing and reaching sensor nodes for data acquisition [10] are only a few examples of such applications.
The environments where nano-UAVs typically fly are filled with walls and obstacles, and thus, optimal path planning requires good knowledge of the surroundings map [5]. Furthermore, in a wide range of applications, the map can change over time, so reprogramming the map into the nano-UAVs is not an ideal solution [11]. In smart buildings, for instance, where the layout of reconfigurable walls can change [12], or simply in crowded offices where tables, chairs, and furniture are often moved. Moreover, the arrangement of pallets and shelves in warehouses can change from one day to another, and therefore, the UAVs used for inventory need a constantly updated map for reliable navigation [13, 14, 15].
In the scenarios mentioned so far, the drone needs an accurate environmental map and the ability to localize itself within the map [11]. The algorithm class that performs both tasks is called Simultaneous Localization and Mapping (SLAM). Among the existing SLAM algorithms, graph-based SLAM [16, 17] is one of the most adopted variations of the algorithm due to its high accuracy and capability to refine the complete trajectory. Moreover, graph-based SLAM models each trajectory pose (i.e., position and heading) as a graph node and the odometry measurements as graph edges. Due to the odometry errors that typically characterize any robotic platform, the uncertainty in the poses grows as the drone moves [2, 18]. Hence, upon revisiting a location (i.e., loop closure), the pose error is higher than at the initial visit. To mitigate this issue, the robot also acquires environmental observations (i.e., depth measurements) during the flight [19]. By comparing the observations associated with two different poses, an accurate rigid body transformation can be derived between the two, using an approach called scan-matching [19].
While the transformation provided by scan-matching allows correcting the current pose, graph-based SLAM propagates this information back to the previously added nodes in the graph (i.e., graph optimization) and corrects the whole trajectory [16]. In conclusion, the accuracy of the corrected trajectory depends on the accuracy of the scan-matching and, therefore, on the observations' accuracy [20]. In most common applications, the observations consist of depth measurements, typically provided by LiDARs or stereo cameras [21, 22]. Although SLAM paired with LiDARs is widely used in applications with standard-size UAVs, these solutions require large amounts of computational resources and memory, which
are not available on nano-UAVs [6, 19]. Furthermore, even the most compact LiDARs used with standard-size UAVs are about one order of magnitude heavier1 than the maximum payload of nano-UAVs [23].
Footnote 1: The Crawirley 2.1 weights \(27\,\mathrm{g}\) and supports a maximum payload of \(15\,\mathrm{g}\), while a lightweight LiDAR such as UST-1020LX from Hokuyo weighs \(130\,\mathrm{g}\).
The recent release of lightweight (i.e., \(42\,\mathrm{mg}\)), low-resolution, and energy-efficient depth sensors based on Time of Flight (ToF) technology has changed the status quo in the feasibility of SLAM for nano-UAVs [24]. With the aid of such sensors, recent works demonstrated SLAM on nano-UAVs, but only under the assumption that the complex SLAM could be offloaded to an external base station [19]. This approach reduces the flight time due to the significant power consumption introduced by the radio communication with the base station [7]. Even more serious issues are the latency associated with the wireless communication protocol and the limited radio link range, which typically constrains the operating area within a few tens of meters in indoor environments [7]. Furthermore, because of the limited measurement capabilities of the early ToF sensors (i.e., a single distance value per sensor and a narrow FoV) [5], the existing systems that enable SLAM with nano-UAVs can only map simple-geometry environments such as long flat corridors. In contrast to the existing works, we exploit the novel VL53L5CX ToF sensor, which features an 8\(\times\)8 resolution and provides a 64-pixel depth map with a Field of View (FoV) of \(45^{\circ}\). By mounting four such sensors on the nano-UAV (i.e., front, rear, left, right), we achieve a cumulative FoV of \(180^{\circ}\). Furthermore, spinning the drone by \(45^{\circ}\) results in a full angular coverage (i.e., \(360^{\circ}\)), providing superior loop-closure performance compared to the previous ToF-based solutions and achieving centimeter-precision scant matching accuracy, similar to the LiDAR-based approaches.
Despite the sparse information provided by the ToF sensors, scan-matching remains a computationally intense and memory-hungry problem. Furthermore, the computational requirements are further exacerbated by the graph optimization performed by the graph-based SLAM, which is independent of the depth observations. Standard-size UAV systems used for SLAM typically employ powerful embedded computers such as Qualcomm Snapdragon, Nvidia Jetson TX2, or Xavier [22], which have a power consumption of a few tens of watts, about two orders of magnitude higher than the power budget nano-UAVs typically have for computation. Recent trends in microcontroller design emphasize parallel processing, hardware accelerators, and energy efficiency. The GAP9 System on Chip (SoC) from GreenWaves Technologies2 exemplifies these trends, being suited for specialized applications with nano-UAVs. With advanced parallel capabilities provided by the RISC-V cores, power optimization, and sensor integration, GAP9 empowers nano-UAVs with real-time edge computing, extended flight times, and enhanced data processing. GAP9 is based on the Parallel Ultra-Low-Power (PULP) computing paradigm [25], has a small form factor, and a power consumption below \(180\,\mathrm{mW}\).
Footnote 2: [https://greenwaves-technologies.com/](https://greenwaves-technologies.com/)
This paper proposes NanoSLAM, the first fully deployable framework that enables SLAM onboard nano-UAVs, performing the whole computation and environmental perception without relying on any external infrastructure or computation offload. Furthermore, by exploiting novel and low-power depth sensors in combination with the parallel capabilities of GAP9 SoC, our system achieves accurate indoor mapping comparable with SoA results from bigger and more computationally capable drones, commonly referred to as MAV or standard-size UAV [1]. Exploiting the parallel capability and energy efficiency of GAP9, we executed scan-matching and SLAM in real-time onboard the nano-UAV, which was not performed by any previous work. The contribution of this paper can be summarized as follows:
_(i)_ an optimized parallel implementation of the graph-based SLAM algorithm that runs in the GAP9 SoC in real-time in less than \(250\,\mathrm{ms}\). We comprehensively examine the various stages of the SLAM algorithm, providing an in-depth analysis of the optimizations made to each stage. Additionally, we evaluate the algorithm's execution time and memory requirements. _(ii)_ a parallel implementation and evaluation of the Iterative Closest Point (ICP) algorithm, an SoA in scan-matching, running onboard in \(55\,\mathrm{ms}\). _(iii)_ a custom plug-in companion board for the commercial Crazyflie 2.1 nano-UAV that extends the sensing capabilities of the drone with 4 ToF matrix sensors, allowing it to perform scan-matching and autonomous navigation. _(iv)_ a communication protocol that orchestrates the integration and data exchange between the drone's stock MCU and the GAP9 SoC, dictating how to store that graph, exchange graph poses, add edges, and perform graph optimization. _(v)_ an extensive in-field experimental evaluation that proves our system's closed-loop mapping functionality, which exploits NanoSLAM to achieve a trajectory error reduction by up to 67% and a mapping accuracy of \(4.5\,\mathrm{cm}\).
## II Related Work
In the field of robotics, several essential components are indispensable for facilitating autonomous navigation on diverse unmanned vehicles. These components encompass real-time environment perception [26], onboard computational capabilities for prompt mission inference [11, 27], and, pertinent to the focus of this paper, the competence to map and explore unknown environments [28]. Mapping an environment is generally done by employing different combinations of sensors [2], such as LiDARs, stereo cameras, laser scanners, or radars. Subsequently, environmental and spatial information collected from these sensors is paired with estimation methods, including particle filters [11], Extended Kalman Filters (EKFs), covariance intersection that enables position estimation, and, finally, SLAM [29] that combines the position information with environmental observations to generate a layout of the environment. As discussed in the literature [30, 31], SLAM consists of two components: the front-end processing represented mainly by feature extraction and loop closure, which is largely dependent on the sensors used, and the sensor-agnostic pose-graph optimization, in charge of the back-end processing [32].
As the name suggests, visual SLAM (vSLAM) uses images to extract depth information [33]. It can use simple monocular
cameras (e.g., wide angle, fish-eye, and spherical cameras), compound eye cameras (e.g., stereo and multi cameras), and RGB-D cameras such as depth or ToF cameras [33]. While SLAM can be enabled at a low cost with relatively inexpensive and limited cameras, the process involves large data volumes and is often marred with limited mapping accuracy [34]. On the other side, LiDARs are significantly more precise for depth estimation and are commonly used for applications involving high-speed moving vehicles such as self-driving cars and drones [35]. LiDAR-based systems typically provide sparse samples organized into high-precision 2D or 3D point clouds. Even if they yield accurate mapping results when combined with SLAM, LiDARs are generally expensive and heavy, weighing a few hundred grams [35].
Today, SLAM is useful in many applications [31] such as navigating a fleet of mobile robots to arrange shelves in a warehouse [13], parking self-driving cars in empty spots, autonomous race competitions [36], or delivering packages by navigating drones in unknown environments [6]. Many available tools already provide plug-and-play SLAM solutions that could be paired with other tasks such as sensor fusion, object tracking, path planning, and path following [36]. Although the mapping task seems to be a solved research problem in the literature, it relies on strong assumptions, such as memory availability of several gigabytes and powerful processor, e.g., the Intel i7 family [35, 37]. Moreover, carrying heavy and power-hungry 3D scanners, such as stereo cameras and LiDARs, is not considered a limitation for conventional robotic applications [35, 36, 37]. However, these assumptions do not hold for miniaturized and low-power robotic platforms, where the hardware cost is a concern, the payload is limited to a few tens of grams, and the computation power budget is limited to hundreds of \(\mathrm{mW}\)[7, 11, 38]. Hence, enabling onboard mapping on this tiny class of devices is still an open problem.
This paper focuses on nano-UAVs as a specific application scenario to empirically validate the efficacy of our lightweight NanoSLAM approach. However, the challenges discussed in enabling mapping on nano-UAVs can be extended to the broader domain of micro-robotics and, more generally, to low-cost and resource-constrained devices [39]. Standard-size UAVs distinguish themselves from Micro-Aerial Vehicles (MAVs) and nano-UAVs in their physical dimensions, weight, total power consumption, and onboard processing capabilities [39]. For the latter two, the sensing and processing power budget represents about \(\frac{1}{10}\) of the power consumed by the motors [7]. Presently, the majority of cutting-edge advancements in robotic perception and mapping have been showcased on standard-size UAVs and MAVs, which possess a power budget ranging from \(50\,\mathrm{W}\) to \(100\,\mathrm{W}\) and a total mass of \(\geq 1\,\mathrm{kg}\)[40]. Consequently, these vehicles can be equipped with high-performance onboard computing platforms, such as GPUs featuring gigabytes of memory [40]. Conversely, nano-UAVs, typically based on low power MCUs, weigh less than \(50\,\mathrm{g}\) with a power budget in the range of \(5\,\mathrm{W}-10\,\mathrm{W}\), with only \(0.5\,\mathrm{W}-1\,\mathrm{W}\) being allocated for powering the sensors, all the electronics, and the computational units [7, 39]. Low-power MCUs usually offer limited memory capacity, typically ranging from \(100\,\mathrm{kB}\) to \(1\,\mathrm{MB}\), posing a significant constraint for visual-based perception and mapping [7, 11].
Previous studies conducted on MAVs and UAVs have commonly utilized miniature, conventional \(360^{\circ}\) LiDAR sensors [42] or depth stereo cameras [40] to perform mapping. For instance, Kumar _et al._[43] integrated single-layer LiDAR sensors with inertial measurement units for indoor mapping tasks using a DJI Phantom 3 drone. This setup required an additional desktop-class Intel i5 processor onboard. The LiDAR sensor employed measures \(62\,\mathrm{mm}\times 62\,\mathrm{mm}\times 87.5\,\mathrm{mm}\), weighs \(210\,\mathrm{g}\), and consumes approximately \(8.4\,\mathrm{W}\). Similarly, Gao _et al._[44] integrated a multi-layer LiDAR sensor with a desktop-class Intel i7 processor to enable 3D mapping of indoor environments. The LiDAR sensor they use consumes \(8\,\mathrm{W}\) and measures \(103\,\mathrm{mm}\times 103\,\mathrm{mm}\times 72\,\mathrm{mm}\) with a weight of \(509\,\mathrm{g}\). Another approach by Fang _et al._[45] uses an RGBD camera combined with a particle filter to navigate through obstructed shipboard environments. Their platform is \(58\,\mathrm{cm}\times 58\,\mathrm{cm}\times 32\,\mathrm{cm}\) in size, carries over \(500\,\mathrm{g}\) of instrumentation, and is operated by a high-performance octa-core ARM processor. Table I provides an overview of SoA mapping strategies in the UAV field, encompassing sensor types, mapping accuracy, and power consumption of the computing platforms. For example, Causa _et al._[35] proposed a scalable mapping strategy based on LiDAR and GNSS, utilizing a standard-size UAV weighing \(3.6\,\mathrm{kg}\) with off-board processing. Shen _et al._[22] focused on onboard intelligence, utilizing a power-hungry Nvidia Xavier (\(30\,\mathrm{W}\)) and a VLP-16 LiDAR. Huang _et al._[2] entrusted the mapping algorithm and onboard processing to a Jetson TX2, equipped with a multi-core CPU and a GPU. Additionally, Chang _et al._[37] proposed a robust multi-robot SLAM system designed to support swarms, but the results were validated offline using an Intel i7-8750H processor. Although these approaches demonstrated good mapping capabilities in the range of 5 to \(20\,\mathrm{cm}\), they involve large and heavy sensors that require power-intensive processing.
Implementing SLAM on nano-UAVs or any miniaturized and low-power hardware [46] is non-trivial due to the large memory and computation requirements typically associated with scan-matching or graph optimization. Moreover, alternatives such as offloading heavy computation tasks to an external computer is often an unpractical solution. In [2], authors show how the communication latency of a cloud-based multi-robot SLAM solution can reach up to \(5\,\mathrm{s}\), an unacceptable value in most nano-UAV uses cases. The severe limit imposed by continuous remote communication poses limits to the mapping speed and the overall system reliability [2], which further demonstrates the need for having a fully onboard SLAM even on resource-constrained nano-UAVs.
One approach to address the computational challenge involves parallelizing different processes on ultra-low power parallel SoCs [46, 25]. Utilizing embedded accelerators or multicore MCUs for processing, leveraging single instruction multiple data (SIMD) calculations, can enhance performance in certain scenarios [47, 25]. To this end, novel PULP SoCs have emerged in recent years, offering clusters of cores within \(100\,\mathrm{mW}\) of power consumption. Rossi _et al._[25] present the basis of the commercial SoC GAP family from Greenwaves, which has already demonstrated its capabilities in the field
of nano-UAVs for accurate localization [11] and autonomous navigation [7]. In particular, GAP9 is selected for the scope of this paper to carry the intensive computation.
To attain an optimal solution, the sensor selection needs to consider an optimal trade-off between power consumption, accuracy, and weight. In [48], the authors explore the possibility to use visual-based perception to enable obstacle avoidance and mapping on nano-UAVs. However, today, this direction does not seem to be promising due to the low performances of miniaturized RGB cameras and the large amounts of data they generate - which needs to be processed by resource-constrained processors [7, 49]. In their work [49], Tijmons _et al._ propose a stereo vision-based obstacle avoidance system for a flapping wing UAV, which demonstrates promising results with an onboard processing frequency of \(15\,\mathrm{Hz}\). This approach aligns with common methodologies employed in standard-size UAVs. However, their implementation needs an additional microcontroller (i.e., STM32F405) exclusively dedicated to image processing and the sensor board alone requires an energy consumption of \(484\,\mathrm{mW}\). It is worth noting that while the authors of [49] tested their system in real environments, they do not report any statistical analysis of the success rate. Furthermore, the authors acknowledge the limited robustness of their system in non-ideal flight conditions, such as the presence of small obstacles. Another practical example is provided by [7], where the authors introduce a grayscale camera-based navigation solution that is deployed onboard a nano-UAV to facilitate autonomous navigation and obstacle avoidance. A CNN is used for perception and exhibits reliable performance in detecting obstacles, allowing the drone to adjust its forward velocity or heading. However, in unfamiliar environments, particularly when executing \(90^{\circ}\) turns, the CNN's performance drops drastically, resulting in a high probability of collision when the drone exceeds \(0.6\,\mathrm{m}\,\mathrm{s}^{-1}\)[7]. Additionally, their solution often struggles to avoid collisions with unknown obstacles placed in narrow environments such as corridors. Thus, vision-based approaches are not optimal solutions to enable onboard depth estimation with nano-UAVs, which is why we employ sensors that directly measure the depth.
Since the commercially available LiDAR exceeds the power and weight constraints of pocket-size UAVs, alternatives have been investigated. Recent studies have shown potential in enabling autonomous navigation with depth sensors based on the ToF technology. In [24], the authors investigate the possibility of using a commercial multi-zone ToF sensor that exhibits good measurement accuracy when measuring distances smaller than \(2\,\mathrm{m}\). Moreover, [26] used a lightweight 64-pixel ToF sensor for robust obstacle avoidance in indoor and outdoor scenarios, with a maximum speed of \(1.5\,\mathrm{m/s}\). At the time of writing, two commercially available depth sensors stand out: the VL53L5CX from ST Microelectronics and the ToF IRS2381C REAL3 from Infineon. The latter boasts an impressive resolution of 38,000 pixels and a maximum range of 4 meters. However, it requires an external illuminator, consumes up to \(680\,\mathrm{mW}\) for the entire circuitry, and has a weight exceeding \(10\,\mathrm{g}\). On the other hand, the VL53L5CX offers a lower resolution of 64 pixels but is significantly lighter, weighing only \(42\,\mathrm{mg}\). Additionally, its prior utilization in the nano-UAV field [11, 24] serves as a compelling motivation for selecting it for this paper.
As depicted in Table I, the existing literature offers only a limited number of studies proposing mapping solutions that use UAVs and have been successfully evaluated in field [2]. Notably, [37, 40] achieve their objectives without relying on external infrastructure. However, within the nano-UAV domain, even fewer works tackle the mapping challenge [6, 19, 41], and they offload the computation to an external base station. Furthermore, the existing works performing mapping with nano-UAVs are not able to reach the same level of accuracy as standard-size UAVs within the literature.
To the best of our knowledge, this paper introduces the first system that enables entirely onboard SLAM execution to enable accurate mapping of general environments, providing a comprehensive methodology, implementation, and field results. Our study demonstrates the system's functionality even with low-power miniaturized sensors that weigh only \(44\,\mathrm{g}\). The achieved accuracy aligns with the SoA for MAVs and standard-size UAVs, with a mapping error down to \(4.5\,\mathrm{cm}\). The proposed system facilitates advanced autonomous capabilities in nano-UAVs, paving the way for enabling additional features such as optimal path planning and multi-agent collaboration.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Work & On-board processing & Sensor & Latency & Map & Field test & Power Consumption & System Weight \\ \hline \multicolumn{7}{c}{Nano-UAV and MAV} \\ \hline \multirow{2}{*}{**This work**} & \multirow{2}{*}{**Yes (Cortex-M4)**} & \(4\times\)**ToF 64-pixel** & **247**\(\,\mathrm{ms}\) & **4.8**\(\,\mathrm{cm}\) & **Yes** & **350**\(\,\mathrm{mW}\) & \(44\,\mathrm{g}\) \\ & & **VL53L5X** & **Post-processing & 10\(\,\mathrm{\SIUnitSymbolMicro}\)\(20\,\mathrm{cm}\) & Yes & - & \(27\,\mathrm{g}\) \\ [5] & Yes (Cortex-M4) & \(4\times\) ToF VL53L1x & \(<\)\(10\,\mathrm{ms}\) & No Map & Yes & \(240\,\mathrm{mW}\) & \(31.7\,\mathrm{g}\) \\ [19] & No (Intel i7 station) & \(4\times\) ToF VL53L1x & \(214\,\mathrm{ms}\) & 5-\(15\,\mathrm{cm}\) & No & - & \(31.7\,\mathrm{g}\) \\ [41] & No & \(4\times\) ToF VL53L1x & Post-processing & 4.7\(\,\mathrm{cm}\) & No & - & \(401\,\mathrm{g}\) \\ \hline \multicolumn{7}{c}{Standard-size UAV} \\ \hline
[35] & No & LiDAR & - & 5-\(20\,\mathrm{cm}\) & Yes & - & \(3.6\,\mathrm{kg}\) \\
[22] & Yes (Xavier) & VLP-16 LiDAR & \(49\,\mathrm{ms}\) & \(2.14\,\mathrm{m}\) & No & \(30\,\mathrm{W}\) & \(>\)\(2\,\mathrm{kg}\) \\
[2] & Yes (Jetson TX2 ) & RP-LIDAR & \(\sim\)\(1\,\mathrm{s}\) & - & Yes & \(>\)\(10\,\mathrm{W}\) & \(>\)\(2\,\mathrm{kg}\) \\
[37] & No (Intel i7 station) & LiDAR & Post-processing & 15-\(20\,\mathrm{cm}\) & Yes & - & - \\
[40] & Yes (Jetson TX2) & Intel RealSense D435 & \(\sim\)\(120\,\mathrm{ms}\) & - & Yes & \(7.5\,\mathrm{W}\) & \(1.3\,\mathrm{kg}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: System and performance comparison between this paper and the State-of-the-Art (SoA) works present in the literature. On-board processing, sensing elements, mapping accuracy, and system setups are compared.
## III Algorithms
This section presents a lightweight localization and mapping methodology that leverages the scan-matching and graph-based SLAM algorithms, targetting pocket-size robotic platforms and emerging low-power processors. Our solutions can enable any robotic platform of similar size or bigger to perform low latency SLAM in real-time, given depth measurement capabilities enabled by sensors such as \(8\times 8\) STMicroelectronics VL53L8CX described in Section II.
### _Scan Frames and Scans_
Our objective is to conduct 2D localization and mapping utilizing depth sensors. Therefore, we assume a system equipped with \(n_{s}\) depth sensors (e.g., ToF) that provide measurements in the 2D plane with a resolution of \(n_{z}\) pixels (i.e., zones) per sensor. Figure 1 shows an example of such a system with \(n_{s}=4\) and \(n_{z}=8\), illustrating the drone, the ToF depth sensors, and how the distance measurements can be projected in the world frame. The world frame, body frame, and sensor frame are represented with \(W\), \(D\), and \(S\), respectively.
Let \(\mathbf{x}_{k}=(x_{k},y_{k},\psi_{k})\) be the state of the drone (i.e., _pose_) expressed in the world coordinate frame at the discrete timestamp \(k\). Furthermore, we use \(\alpha\in\{1,2,\ldots n_{s}\}\) to index among ToF sensors and \(\beta\in\{1,2,\ldots n_{z}\}\) to index among the zones of each sensor. The distance provided by sensor \(\alpha\) for the zone \(\beta\) at instant \(k\) is marked as \(d_{k}^{\alpha\beta}\). Equation 1 shows the distance measurement projection \(d_{k}^{\alpha\beta}\) acquired at pose \(\mathbf{x}_{k}\) into the world coordinate frame \(W\). Indeed, the distance \(d_{k}^{\alpha\beta}\) provided by the sensor is not the absolute distance to the object but the projection of the absolute distance on the \(OX\) axis of the sensor frame \(S^{\alpha}\). Thus, \(tan(\theta_{\beta})\cdot d_{k}^{\alpha\beta}\) represents the \(y\)-coordinate of the obstacle in the same sensor frame, where \(\theta_{\beta}\) is the angle of each sensor zone. Translating the obstacle's coordinates to the origin of the drone's body frame \(B\) and rotating it to the world frame \(W\) leads to the second term of Equation 1. The translation is performed by adding the offset \((o_{x}^{\alpha},o_{y}^{\alpha})\) to the obstacle's position - note that the offset is expressed in \(D\), and it is different for each sensor. \(\mathbf{R}\) represents the 2D rotation matrix and the sum \(\psi_{k}+\gamma_{\alpha}\) represents the angle between \(S^{\alpha}\) and \(W\), where \(\psi_{k}\) is the heading angle between \(D\) and \(W\); \(\gamma_{\alpha}\) represents the rotation of the sensor frame w.r.t \(D\) and for the example in Figure 1, \(\gamma_{\alpha}\in\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\). Lastly, we use the coordinates of the pose \((x_{k},y_{k})\) to perform another translation and obtain the coordinates of the obstacle expressed in the world frame \(W\). At every timestamp \(k\), the ToF sensors provide at most \(n_{s}n_{z}\) distance measurements - \(n_{s}\) sensors \(\times\)\(n_{z}\) zones - as some distance measurements might be invalid and therefore not considered. Projecting the \(n_{s}n_{z}\) points using Equation 1 leads to the collection \(\{(p_{x}^{\alpha\beta},p_{y}^{\alpha\beta})\mid\alpha\leq n_{s};\beta\leq n_{z}\}\) that we call a _scan frame_.
\[\begin{pmatrix}p_{x}^{\alpha\beta}(k)\\ p_{y}^{\alpha\beta}(k)\end{pmatrix}=\begin{pmatrix}x_{k}\\ y_{k}\end{pmatrix}+\mathbf{R}_{(\psi_{k}+\gamma_{\alpha})}\begin{pmatrix}d_{k}^{ \alpha\beta}+o_{y}^{\alpha}\\ tan(\theta_{\beta})\cdot d_{k}^{\alpha\beta}+o_{y}^{\alpha}\end{pmatrix} \tag{1}\]
The 2D point collection in the scan frame could be further used as input for the scan-matching algorithm. However, the cardinality of a scan frame (i.e., the number of 2D points) is still too small to enable accurate scan-matching. We overcome this issue by stacking \(n_{sf}\) consecutive scan frames in a set that we call a _scan_ and define as \(\mathbf{S_{k}}=\{(p_{x}^{\alpha\beta}(\tilde{k}),p_{y}^{\alpha\beta}(\tilde{k}))\mid \alpha\leq n_{s};\beta\leq n_{z};k\leq\tilde{k}<k+n_{sf}\}\). When the acquisition of a new scan is triggered, the robot starts appending new scan frames until it reaches the count of \(n_{sf}\). The resulting cardinality of a scan is \(n_{sf}n_{s}n_{z}\) points (minus the invalid pixels), which we call _the scan size_. Moreover, every scan \(\mathbf{S}_{k}\) has an associated _scan pose_\(\mathbf{x}_{k}\), which is the drone's pose when the scan acquisition starts.
Let \(f\) be the Field of View (FoV) of one ToF depth sensor, which leads to a cumulative FoV of \(n_{s}f\), generally smaller than \(360^{\circ}\). To virtually increase the FoV and achieve full coverage, the drone also spins by \(\frac{360-n_{s}f}{n_{s}}\) degrees in place around the \(z\)-axis while acquiring the scan. For example, given the scenario in Figure 1 and assuming an FoV of \(45^{\circ}\) for each sensor, the drone should spin other \(45^{\circ}\) during the scan to cover the surroundings completely. With this mechanism, scan-matching can determine the transformation between two scans \(\mathbf{S}_{p}\) and \(\mathbf{S}_{q}\), which also applies to their associated scan poses \(\mathbf{x}_{p}\) and \(\mathbf{x}_{q}\). The scan size is a balance between scan-matching accuracy and memory usage, determined by the limitations of the system.
### _Scan-matching_
Scan-matching is the process of determining the optimal rigid-body transformation between two scans. This transformation consists of rotation and translation, and with an ideal noise-free scenario, it should result in perfect overlapping with the other scan. Since scans and poses are strictly correlated, the transformation resulting from scan-matching also applies to the poses. When the drone is near a previously visited position, scan-matching can derive an accurate transformation w.r.t. a
Fig. 1: Illustration of the four ToF sensors onboard the drone and the coordinate frames of the world, drone, and sensors.
previously acquired pose in that location. In this way, scanning is used to correct the accumulated odometry errors. In this work, we implement and use ICP, an SoA algorithm in scan-matching [50].
We define two scans \(\mathbf{S}_{\mathbf{p}}=\{\mathbf{p}_{1},\mathbf{p}_{2},\ldots\}\) and \(\mathbf{S}_{\mathbf{q}}=\{\mathbf{q}_{1},\mathbf{q}_{2}\ldots\}\) where \(\mathbf{p}_{i}\) and \(\mathbf{q}_{i}\) are 2D points in the scans - we changed the initial indexing to enhance readability. Determining the optimal overlap between \(\mathbf{S}_{p}\) and \(\mathbf{S}_{q}\) can be formulated as a least squares problem, as shown in Equation 2[50]. Note that Equation 2 requires to know what element \(\mathbf{q}_{i}\) in scan \(\mathbf{S}_{q}\) corresponds to the element \(\mathbf{p}_{i}\) in scan \(\mathbf{S}_{p}\). If the correspondences are known, a direct and optimal solution can be obtained by solving the optimization problem in Equation 2. This is typically done by offsetting each scan by its center of mass and then applying a rotational alignment based on the singular value decomposition method [50].
\[\mathbf{R}^{*},\mathbf{t}^{*}=\operatorname*{arg\,min}_{\mathbf{R},\mathbf{t}}\sum\|\mathbf{q}_{i }-(\mathbf{R}\mathbf{p}_{i}+\mathbf{t})\|^{2}. \tag{2}\]
However, the correspondences are unknown in our case and in most of real-world scan-matching applications. A common heuristic for determining the correspondences is to use the Euclidean distance - i.e., pairing each point \(\mathbf{p}_{i}\) in \(\mathbf{S}_{p}\) with the closest point \(\mathbf{q}_{j}\) in \(\mathbf{S}_{q}\)[50]. This implies solving the problem \(\operatorname*{arg\,min}_{j}\|\mathbf{p}_{i}-\mathbf{q}_{j}\|\) for every point \(\mathbf{p}_{i}\), using an exhaustive search over all elements in \(\mathbf{S}_{q}\). Once these approximate correspondences are established, Equation 2 determines the transformation between the two scans, which is then applied to \(\mathbf{S}_{p}\). Repeating this process until the two scans overlap represents the ICP algorithm, which we summarize in Listing 1.
```
forkinrange(\(N_{iter}^{ICP}\)): #Computecorrespondences foriinlen(\(\mathbf{S}_{p}\)): correspondence[i]\(\leftarrow\)argmin\({}_{j}\|\mathbf{p}_{i}-\mathbf{q}_{j}\|\) #Calculatethetransformation \(\mathbf{R}^{*},\mathbf{t}^{*}\)\(\leftarrow\)Equation 2 #Applytransformationtscan\(\mathbf{S}_{p}\) \(\leftarrow\)\(\mathbf{R}^{*}\mathbf{S}_{p}+\mathbf{t}^{*}\)
```
Listing 1: The stages of the ICP algorithm. \(N_{iter}^{ICP}\) represents the number of iterations and \(\mathbf{R}^{*},\mathbf{t}^{*}\) the final solution after the algorithm executes.
### _Graph-based SLAM Algorithm_
In most GPS-denied environments, such as indoor scenarios, the drone's internal state estimator computes the position and heading by integrating velocity and angular velocity measurements. However, the measurements are affected by sensor noise, and integrating noisy data over time results in drift. Equation 1 shows that projecting distance measurements in the world frame to obtain a scan or the map requires trajectory knowledge. Since the trajectory error impacts the accuracy of the map, we use SLAM to first correct the trajectory and then compute the map w.r.t. the corrected path. For this purpose, we implement the graph-based SLAM introduced in [16], which can use scan-matching information to correct the trajectory. The graph-based SLAM represents the trajectory as a pose graph, where each pose (i.e., 2D position and heading) is modeled as a graph node, and the edges are relative constraints between the nodes. We distinguish two types of graph edges: _(i) the odometry edges_ incorporating motion information between any two consecutive poses, and _(ii) the loop closure (LC) edges_ which embody relative measurements derived by ICP.
Let \(N\) be the number of poses and \(n\) the number of LC edges. Moreover, let \(\mathbf{X}=\{\mathbf{x}_{0},\ldots,\mathbf{x}_{N-1}\}\) be the graph nodes expressed in \(W\), and \(\mathbf{z}_{ij}=(z_{x},z_{y},z_{\psi})\) the graph edge measurements, the latter being expressed in the coordinate frame of pose \(\mathbf{x}_{i}\). We note as \(\mathbf{\hat{z}}_{ij}\) the prediction of an edge measurement, or in other words, the edge measurement computed given two poses \(\mathbf{x}_{j}\) and \(\mathbf{x}_{i}\). Pose graph optimization (PGO) involves the estimation of optimal pose values that ensure consistency between the edge measurements \(\mathbf{z}ij\) and the predicted measurements \(\mathbf{\hat{z}}ij\). As shown in [16], this is done by minimizing the sum of the squared differences \(\mathbf{e}_{ij}=\mathbf{z}_{ij}-\mathbf{\hat{z}}_{ij}\), where Equation 3 gives the maximum likelihood solution that requires the initial pose \(\mathbf{x}_{0}\) and the edges \(\mathbf{z}_{ij}\) to compute the optimal poses. The number of terms in the sum is equal to the number of edges in the graph, and \(\Omega\) is the diagonal information matrix, which weighs the importance of each edge. Since ICP typically provides accurate results, the LC edge measurements are more precise than the odometry edge measurements.
\[\mathbf{e}_{ij} =\mathbf{z}_{ij}-\mathbf{\hat{z}}_{ij}(\mathbf{x}_{i},\mathbf{x}_{j})\,\] \[\mathbf{X}^{*} =\operatorname*{arg\,min}_{\mathbf{X}}\sum_{i,j}\mathbf{e}_{ij}^{T}\Omega \mathbf{e}_{ij}. \tag{3}\]
Running SLAM onboard a resource-constrained device in real-time requires solving the optimization problem in Equation 3 efficiently. Since this is a non-linear problem, there is no closed-form solution, but iterative methods such as Gauss-Newton have been proven effective if a good initial guess is known. In every iteration, the error function \(\mathbf{e}_{ij}(\mathbf{x}_{i},\mathbf{x}_{j})\) is approximated with its first-order Taylor expansion, reducing the problem to a linear equation system. This paper provides an efficient implementation of the graph-based SLAM algorithm derived from [16], which is in charge of PGO. We summarize the algorithm in Listing 2 and discuss it in detail in Section V.
\[\begin{pmatrix}z_{x}\\ z_{y}\end{pmatrix} =\mathbf{R}_{-\psi_{i}}\begin{pmatrix}x_{i+1}-x_{i}\\ y_{i+1}-y_{i}\end{pmatrix}\, \tag{4}\] \[z_{\psi} =\psi_{i+1}-\psi_{i}. \tag{5}\]
The graph-based SLAM algorithm requires the initial pose \(\mathbf{x}_{0}\), the edge measurements \(\mathbf{z}_{ij}\), and an initial guess of the pose values. The initial pose \(\mathbf{x}_{0}\) is the drone's pose right after take-off, and without loss of generality, it is always considered \((0,0,0)^{T}\). Since there is no additional information about the poses, the best initial guess is computed by forward integrating the odometry measurements w.r.t. \(\mathbf{x}_{0}\). Consequently, the poses' initial guess encompasses the same information as the odometry edge measurements, and therefore, it suffices to
store only the poses. This mechanism is convenient because, in many robotics applications, the robot's state estimator directly integrates the odometry measurements and provides the pose values. In this way, the odometry edge measurements are calculated right before the optimization is performed, as shown in the first _for_ loop in Listing 2. Each measurement \(\mathbf{z}_{i,i+1}\) is expressed in a coordinate frame rotated by \(\psi_{i}\) and computed using Equations 4 - 5. The second _for_ loop calculates the LC edges, using the ICP algorithm introduced in Section III-B.
```
#1.Computetheodometryedgemeasurementsforiinrange(\(N-1\)):
#\(\mathbf{z}_{i,i+1}\leftarrow\)Equations4-5
#2.ComputetheLCedegmeasurementsforkinrange(n):
#\(\mathbf{z}_{ij}\leftarrow\)ICP(\(\mathbf{x}_{i},\mathbf{x}_{j}\))
#3.Graphoptimizationforkinrange(\(N_{\text{iter}}^{SLAM}\)):
#3a.Compute\(\mathbf{H}\)and\(\mathbf{b}\) H\(\mathbf{t}\gets\mathbf{0},\ \mathbf{b}\gets\mathbf{0},\ \mathbf{H}_{11}\gets\mathbf{I}_{3}\) foredgeinedges:
#ComputetheJacobians \(\mathbf{A}_{ij}\leftarrow\frac{\mathbf{b}_{ij}(\mathbf{x})}{\mathbf{a}_{ij}}\ \mathbf{B}_{ij} \leftarrow\frac{\mathbf{b}_{ij}(\mathbf{a})}{\mathbf{b}_{ij}}\bigg{|}_{\mathbf{x}=\mathbf{x}^{ \prime}}\)#Constructthelinearsystemmatrix \(\mathbf{H}_{ii}+=\mathbf{A}_{ij}^{T}\mathbf{\alpha}\mathbf{A}_{ij}\ \mathbf{H}_{jj}+=\mathbf{A}_{ij}^{T}\mathbf{\alpha}\mathbf{B}_{ij}\)#Constructthelinearsystemvector \(\mathbf{b}_{i}+=\mathbf{A}_{ij}^{T}\mathbf{\alpha}\mathbf{e}_{ij}\ \mathbf{b}_{j}+\mathbf{B}_{ij}^{T}\mathbf{\alpha}\mathbf{e}_{ij}\)#3b.Solvethelinearsystem\(H\Delta\mathbf{x}=-b\) Permutation\(\mathbf{H}_{P}=\mathbf{P}\mathbf{P}\mathbf{P}^{T},\ \mathbf{b}_{P}=\mathbf{P}\mathbf{b},\ \mathbf{\Delta}\mathbf{x}_{P}=\mathbf{P}\Delta\mathbf{x}\)#Solve\(\mathbf{H}_{P}\Delta\mathbf{x}_{P}=-b_{P}\) Choleskydecomposition\(\mathbf{H}_{P}=\mathbf{L}_{P}\mathbf{L}_{P}^{T}\)Forwardsubstitution\(\mathbf{L}_{P}\mathbf{y}=-\mathbf{b}_{P}\)Backwardsubstitution\(\mathbf{L}_{P}^{T}\Delta\mathbf{x}_{P}=\mathbf{y}\)#Retrievethesolution\(\Delta\mathbf{x}\)Inversepermutation\(\Delta\mathbf{x}=\mathbf{P}^{-1}\Delta\mathbf{x}_{P}\)#3c.Updatethesolution \(\mathbf{x}^{*}\leftarrow\mathbf{x}^{*}+\Delta\mathbf{x}\)
```
**Listing 2** The graph-based SLAM algorithm that performs PGO. The outer _for_ loop runs for \(N_{iter}^{SLAM}\) iterations.
Once all the edge measurements are calculated, the actual graph optimization can start, performed in the double _for_ loop. Minimizing Equation 3 when \(e_{ij}(\mathbf{x}_{i},\mathbf{x}_{j})\) is linearized around the current pose guess is equivalent to solving the linear equation system \(\mathbf{H}\Delta\mathbf{x}=-\mathbf{b}\)[16]. \(\mathbf{H}\) and \(\mathbf{b}\) are computed in the inner _for_ loop. \(\mathbf{A}_{ij}\) and \(\mathbf{B}_{ij}\) are \(3\times 3\) matrices and represent the Jacobians obtained after linearization. Similarly, the \(3\times 3\) blocks \(\mathbf{H}_{ii}\), \(\mathbf{H}_{ij}\), \(\mathbf{H}_{jj}\), and \(\mathbf{H}_{ji}\) represent the contribution on the \(\mathbf{H}\) matrix of each graph edge from node \(i\) to node \(j\). The dimension of the blocks \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) is \(3\times 1\), and they construct the system vector \(\mathbf{b}\). Given the constituent elements of matrix \(\mathbf{H}\) and vector \(\mathbf{b}\), their dimension is \(3N\times 3N\) and \(3N\times 1\), respectively. The \(3N\times 1\) vector \(\mathbf{x}^{*}\) serves as the ongoing estimate of the poses (stacked together), continuously refined during the iterative graph optimization process. Before the optimization starts, the initial guess is loaded into \(\mathbf{x}^{*}\).
The next step is to solve the linear system \(\mathbf{H}\Delta\mathbf{x}=-\mathbf{b}\). Inverting matrix \(\mathbf{H}\) would demand significant memory and computational resources, inefficient for resource-constrained devices [51]. Nonetheless, more efficient alternatives have been suggested in the literature, which leverage the Cholesky decomposition [52, 53]. Since \(\mathbf{H}\) is symmetric positive-definite, the decomposition calculates the lower triangular matrix \(\mathbf{L}\), such that \(\mathbf{H}=\mathbf{L}\mathbf{L}^{T}\). The equation system becomes \(\mathbf{L}\mathbf{L}^{T}\Delta\mathbf{x}=-\mathbf{b}\). In addition, we make the notation \(\mathbf{y}=\mathbf{L}^{T}\Delta\mathbf{x}\). Since \(\mathbf{L}\) is triangular, solving \(\mathbf{L}\mathbf{y}=-\mathbf{b}\) is trivial using the forward substitution method. Having \(\mathbf{y}\), the solution \(\Delta\mathbf{x}\) is easily calculated by solving \(\mathbf{y}=\mathbf{L}^{T}\Delta\mathbf{x}\) using backward substitutions. Lastly, the solution \(\Delta\mathbf{x}\) is added to the current estimate of poses. Typically, the outer loop iterates until \(\Delta\mathbf{x}\) reaches a sufficiently small value or becomes zero.
The ordering of the rows and columns of matrix \(\mathbf{H}\) influences the non-zero count and computation time of matrix \(\mathbf{L}\)[51]. Permuting both the rows and columns of \(\mathbf{H}\) is done by the multiplication \(\mathbf{P}\mathbf{H}\mathbf{P}^{T}\), where \(\mathbf{P}\) is the permutation matrix - i.e., an identity matrix with reordered rows [51]. Exploiting the property \(\mathbf{P}^{-1}=\mathbf{P}^{T}\), the linear system is rewritten as \(\mathbf{H}\mathbf{P}^{T}\mathbf{\rho}\Delta\mathbf{x}=-\mathbf{b}\). Multiplying both sides by \(\mathbf{P}\) on the left and making the substitutions \(\mathbf{H}_{P}=\mathbf{P}\mathbf{H}\mathbf{P}^{T}\), \(\mathbf{b}_{P}=\mathbf{P}\mathbf{b}\), and \(\Delta\mathbf{x}_{P}=\mathbf{P}\Delta\mathbf{x}\) leads to \(\mathbf{H}_{P}\Delta\mathbf{x}_{P}=-\mathbf{b}_{P}\). Lastly, \(\Delta\mathbf{x}\) is retrieved from \(\Delta\mathbf{x}_{P}\), which implies a negligible overhead. Therefore, applying the permutation leads to the same mathematical problem, requiring the additional step of retrieving \(\Delta\mathbf{x}\) from \(\Delta\mathbf{x}_{P}\), which comes with negligible overhead. The process of solving \(\mathbf{H}\Delta\mathbf{x}=-\mathbf{b}\) leveraging the Cholesky decomposition and the permutation mechanism is described in step 3b of Listing 2. In Section V, we discuss how the permutation matrix \(\mathbf{P}\) is obtained.
### _SLAM in Real-world Scenarios_
Figure 2 shows how a robot trajectory can be discretized into a pose graph. In this example, the drone flies along a square loop corridor, following the outer wall until it reaches the start point again. As the drone advances, it keeps adding new poses to the graph at fixed intervals using the information provided by the internal state estimator. The pose \(\mathbf{x}_{0}\) is the starting point and, therefore, error-free, but since the following poses are obtained based on integration w.r.t. \(\mathbf{x}_{0}\), they are affected by errors due to odometry drift. We note the poses as \(\mathbf{x}_{0},\mathbf{x}_{1},\dots\mathbf{x}_{N-1}\) and represent them with empty circles in Figure 2, while the odometry constrains \(\mathbf{z}_{01},\mathbf{z}_{12},\dots\) are the edges connecting the circles. Performing the graph optimization at this point would lead to no change in the poses because any pose \(\mathbf{x}_{i+1}\) is obtained by integrating the measurement \(\mathbf{z}_{i,i+1}\) w.r.t. \(\mathbf{x}_{i}\) and therefore the poses and edge measurements are already in agreement - i.e., the sum from Equation 3 is already zero.
In Figure 2, the filled grey circles denoted as \(\mathbf{x}_{0}^{R},\mathbf{x}_{1}^{R},\dots\), represent the actual (i.e., the ground truth) position and heading of the poses, which are not known to the drone. At the
end of the mission, even if the drone estimates that it crosses the starting point again (i.e., \(\mathbf{x}_{0}=\mathbf{x}_{N-1}\)), its actual pose is \(\mathbf{x}_{N-1}^{R}\). To mitigate the odometry errors, the drone acquires observations (i.e., a scan) in \(\mathbf{x}_{N-1}\), which it compares with the scan acquired in \(\mathbf{x}_{0}\), as shown in Figure 2. We call as _reference scan_ the scan acquired when a place is visited for the first time - e.g., the scan acquired in \(\mathbf{x}_{0}\). Furthermore, we define as _LC scan_ the scan acquired when a place is revisited - e.g., the scan acquired in \(\mathbf{x}_{N-1}\). An LC scan is always paired with a reference scan or another LC scan, and ICP is used to derive a transformation between the two. In the example from Figure 2, ICP is used to derive a transformation between \(\mathbf{x}_{0}\) and \(\mathbf{x}_{N-1}\), and therefore add a new LC edge to the graph - from node \(N-1\) to node \(0\). Once there is at least one new LC edge in the graph, graph-based SLAM can run to correct the existing poses. After the optimization completes, the LC edges are typically kept in the graph. In the context of our approach, we assume an unchanging environment. Yet, should alterations occur within the environment that lead to scans that do not overlap, we identify these situations and discard the LC edge. The procedure for quantifying the degree of overlap between two scans is elaborated upon in Section VII.
### _Optimizing Large Graphs_
The elements of graph-based SLAM were presented in a simple example in Figure 2, but they are representative of any graph and any number of poses or constraints. However, optimizing graphs larger than a few hundred poses with this method might be challenging because embedded platforms are typically constrained to a few hundred of \(\mathrm{kB}\) of RAM. To address this problem, we implement a solution based on the hierarchical optimization approach introduced in [54]. The idea is to divide the graph into multiple subgraphs and apply the graph-based SLAM algorithm from Listing 2 on each subgraph - we refer to this approach as hierarchical graph-based SLAM. For this purpose, a _sparse graph_\(\tilde{\mathbf{X}}=\{\tilde{\mathbf{x}}_{0},\dots,\tilde{\mathbf{x}}_{M-1}\}\) is created first, whose poses are a subset (but still representative) of the complete graph \(\mathbf{X}\). We mention that the poses marked with a tilde are just an alternative notation for the poses already present in \(\mathbf{X}\) to emphasize that we are referring to the sparse graph. We provide a graphical representation of such a hierarchical optimization problem in Figure 3, where the poses of the sparse graph are represented in green. Furthermore, Figure 4 shows a four-step breakdown of the hierarchical optimization. Using the scan-matching constraints (e.g., \(z_{ICP}\) in Figure 2), the sparse graph is optimized, resulting in the new set of poses \(\{\tilde{\mathbf{x}}_{1}^{opt},\dots,\tilde{\mathbf{x}}_{M}^{opt}\}\) - as shown in Figure 4-(b). The idea now is to use the optimized poses of the sparse graph as constraints to correct the entire graph \(\mathbf{X}\).
For each pair of consecutive poses in the sparse graph \((\tilde{\mathbf{x}}_{i}^{opt},\tilde{\mathbf{x}}_{i+1}^{opt})\) we build a subgraph consisted of these poses and the in-between poses of the complete graph - e.g., \(\{\mathbf{x}_{0},\mathbf{x}_{1},\dots,\mathbf{x}_{4}\}\) or \(\{\mathbf{x}_{4},\mathbf{x}_{5},\dots,\mathbf{x}_{9}\}\). To be more general, we consider the subgraph \(\{\mathbf{x}_{k},\mathbf{x}_{k+1},\dots,\mathbf{x}_{l}\}\). We recall that the \(\mathbf{x}_{k}=\mathbf{x}_{i}\) and \(\mathbf{x}_{l}=\mathbf{x}_{i+1}\). Since we have already corrected the extremes (i.e., \(\tilde{\mathbf{x}}_{i}^{opt}\) and \(\tilde{\mathbf{x}}_{i+1}^{opt}\)), this information can be further used to derive the constraint that allows optimizing the whole subgraph. In this scope, we firstly offset every pose in the subgraph as shown in Figure 4-(c), so that \(\mathbf{x}_{k}/\tilde{\mathbf{x}}_{i}\) matches \(\tilde{\mathbf{x}}_{i}^{opt}\) - necessary because PGO never corrects the first pose. Then Equations 4 - 5 are used to derive a constraint \(\tilde{\mathbf{x}}_{i+1,i}\) between poses \(\tilde{\mathbf{x}}_{i+1}^{opt}\) and \(\tilde{\mathbf{x}}_{i}^{opt}\), which is added to the subgraph as an LC edge from node \(l\) to \(k\) - this only simulates the effect of loop closure, as the LC edge is not provided by ICP directly. After these operations are performed on the \(M-1\) subgraphs as shown in Figure 4-(d), the optimization of \(\mathbf{X}\) is complete. This section, therefore, introduces two manners of performing PGO: directly applying graph-based SLAM on
Fig. 4: A breakdown of the hierarchical graph-based SLAM.
Fig. 3: Representation of the sparse graph (green) as a subset of the complete graph. The black arrows represent the odometry edges, the dashed black arrow represents the LC edge, and the dashed green arrows represent the additional constraints derived for optimizing the subgraphs.
Fig. 2: The figure shows how a robot trajectory is discretized into a pose graph when passing through a square loop corridor. The drone keeps adding poses to the graph using information from the internal state estimator. The poses are affected by errors due to odometry drift.
the existing pose graph or dividing the graph into multiple smaller subgraphs and optimizing every subgraph individually. The advantages of every approach are discussed in Section V.
Sampling the poses of the sparse graph from the complete graph \(\mathbf{X}\) is based on a threshold on the robot movement. The elements comprising the complete graph are chronologically traversed, and a new node is exclusively incorporated into the sparse graph if the Euclidean distance from the most recently added node exceeds a threshold value \(d_{min}\) or if the difference in heading surpasses a threshold value \(\Delta\psi_{min}\). The sparse graph must also include all scan poses as a mandatory requirement, in addition to the threshold-based added poses. This is because the LC edges resulting from ICP only play a role in optimizing the sparse graph and not also the subgraphs. However, the number of scan poses is usually negligible compared to the sparse graph size.
## IV Nano-UAV System Setup
Our mapping system is designed to be flexible and cover a large set of robotic platforms. The only prerequisite concerns the sensor, which has to be a depth camera. Thus the algorithm and the implementation can be adapted to support a different hardware setting, e.g., various processors or sensing elements. In this paper, we selected the Commercial-off-the-Shelf (COTS) nano-UAV Crazyflie 2.1 from Bitcraze to demonstrate the effectiveness of our solution in ultra-constrained platforms. In this way, our results can be easily replicated using commercially available hardware.
The open-source firmware of Crazyflie 2.1 provides capabilities for flight control, state estimation, radio communication, and setpoint commander. The drone's main PCB also acts as a frame, comprising the electronics such as an Inertial Measurement Unit (IMU), a radio transceiver (Nordic nRF51822), and an STM32F405 processor. The latter features a maximum clock frequency of \(168\,\mathrm{MHz}\) and \(192\,\mathrm{kB}\) of RAM, but over 70% of the resources are already used by the firmware to perform the control and estimation. Furthermore, the drone features extension headers that can be used to add additional decks (i.e., plug-in boards). We, therefore, also included the commercial Flow deck v2, which exploits a downward-facing optical flow camera and single-zone ToF ranging sensor to enable velocity and height measurements fused by the on-board Extended Kalman Filter (EKF) to perform position and heading estimation. In addition to the Flow deck, we equip the drone with two custom-designed boards: one containing four lateral depth ToF sensors to enhance the drone's capabilities to sense the surroundings, and the second deck contains the GAP9 SoC, used as a co-processor to extend the Crazyflie 2.1 computation capabilities. In this configuration, the total weight at take-off is \(44\,\mathrm{g}\), including all the hardware used for the scope of this paper. The fully integrated system featuring our custom hardware is shown in Figure 4(a).
### _Custom Quad ToF Deck_
The VL53L5CX is a lightweight multi-zone 64-pixel ToF sensor, weighing only \(42\,\mathrm{mg}\). Its suitability for nano-UAV applications was evaluated in a study by Niculescu _et al._[24]. This sensor offers a maximum ranging frequency of \(15\,\mathrm{Hz}\) for an 8\(\times\)8 pixel resolution, with a FoV of \(45\,\mathrm{\SIUnitSymbolDegree}\). Additionally, the VL53L5CX provides a pixel validity matrix alongside the 64-pixel measurement matrix, automatically identifying and flagging noisy or out-of-range measurements. To accommodate the use of multi-zone ranging sensors on the Crazyflie 2.1 platform, a custom deck was developed specifically for the VL53L5CX ToF sensors, as shown in Figure 4(b). This deck can be used in conjunction with the Flow deck v2 and incorporates four VL53L5CX sensors positioned to face the front, back, left, and right directions, enabling obstacle detection from a cumulative FoV of \(180\,\mathrm{\SIUnitSymbolDegree}\). As a result, the final design of the custom deck weighs a mere \(4.2\,\mathrm{g}\).
### _Co-processor Deck - GAP9 SoC_
The second custom deck included in the system setup weighs \(5\,\mathrm{g}\) and features the GAP9 SoC, the commercial embodiment of the PULP platform [25], produced by Greenwaves Technologies. Figure 4(c) shows the main elements of the GAP9 architecture. The GAP9 SoC features 10 RISC-V-based cores, which are grouped into two power and frequency domains. The first domain is the fabric controller (FC), which features a single core operating at up to \(400\,\mathrm{MHz}\) coupled with \(1.5\,\mathrm{MB}\) of SRAM (L2 memory). The FC acts as the supervisor of the SoC, managing the communication with the
Fig. 5: (a) Our prototype based on Crazyflie 2.1 extended with the ToF Deck and the Co-processor Deck. (b) The custom quad ToF deck featuring four ToF multi-zone sensors. (c) A simplified diagram showing the blocks of the GAP9 that are most relevant for this work.
peripherals and orchestrating the on-chip memory operations. The second domain is the cluster (CL) consisting of nine RISC-V cores that can operate up to \(400\,\mathrm{MHz}\), specifically designed to handle highly parallelizable and computationally intensive workloads. Among the nine cores of the cluster, one acts as a "master core", receiving a job from the FC and delegating it to the other eight cores in the cluster, which carry the computation. The CL is coupled with \(128\,\mathrm{kB}\) of L1 memory, and the transfers between L2 and L1 are performed via the direct memory access (DMA) peripheral, requiring no involvement from the FC or CL during the transfers. To achieve an optimal execution time of a CL task, the data associated with the task should be transferred to L1 before the task is started. When the CL task completes, the result can be transferred back to L2 and further used by the FC. The GAP9 is interfaced with the STM32 via SPI and carries all the intensive computation required by PGO and scan-matching.
## V Implementation
Our system features two computational units: the STM32 MCU, part of the commercial Crazyllie 2.1 platform, and the more powerful GAP9 SoC, which extends the computational capabilities of the former. We extend the base firmware of the STM32 with our application - implemented through the Bitcraze Application Layer - containing only lightweight functionalities such as the ToF sensor data acquisition and the flight strategy, which have a negligible impact on the MCU load. Instead, we delegate the memory and computationally demanding tasks to the GAP9, which continuously communicates with the STM32 during the mission. Thus, computationally intensive solutions such as ICP, the graph-based SLAM, scan computation, or map generation run entirely on the GAP9. In the following, we provide the implementation details of NanoSLAM, which is based on the algorithms introduced in Section III.
### _Sensor Processing_
As mentioned before, our system performs mapping in 2D. However, since each of the four ToF sensors provides an 8\(\times\)8 distance matrix, we must process this information and reduce it to one plane (i.e., one row). For this reason, we discard the first two rows from the bottom and the top, leaving only the middle four rows that better represent the drone's plane. In the following, we select the median of the four remaining pixels for each column, obtaining a row vector of size eight for each ToF sensor. In case there are no valid pixels in a particular column (e.g., no obstacle within \(4\,\mathrm{m}\)), the entire column is discarded. This approach ensures more robustness to outliers than simply selecting one of the middle rows from each matrix.
### _Scan-matching Implementation_
Before detailing the actual scan-matching implementation, we provide the values of the scan parameters introduced in Section III-A. Indeed, our setup matches the configuration shown in Figure 1, featuring four ToF sensors of eight zones each - i.e., \(n_{s}=4\) and \(n_{z}=8\). During a scan, the drone undergoes a \(45^{\circ}\) rotation while adding new scan frames to the scan with a frequency of \(7.5\,\mathrm{Hz}\). We empirically choose \(n_{sf}=20\) as a trade-off between scan-matching accuracy and memory footprint, resulting in a scan duration of about \(2.7\,\mathrm{s}\). Given these settings, the scan size is at most \(n_{scan}=n_{sf}n_{s}n_{s}n_{z}=640\) points.
We recall that the ICP algorithm introduced in section III-B has two stages: determining the correspondences and calculating the transformation given the correspondence pairs. The latter exhibits a time complexity of \(O(n_{scan})\) and is typically very fast. The correspondences calculation, represented by the inner _for_ loop in Listing 1, takes more than 95% of execution time, operating with \(O(n_{scan}^{2})\) complexity. Furthermore, since the correspondences are calculated independently of each other, we leverage the parallel capabilities of GAP9, distributing the inner _for_ loop from Listing 1 to eight cores of the CL in GAP9. In our implementation, we choose a fixed number of iterations \(N_{iter}^{ICP}\) to ensure a deterministic execution time. We empirically determined with in-field experiments that ICP always converges within \(N_{iter}^{ICP}=25\) iterations, and after that, the solution \((\mathbf{R}^{\star},\mathbf{t}^{\star})\) does not change anymore.
### _Graph-based SLAM Implementation_
In the following, we provide the implementation details of the graph-based SLAM algorithm, which is presented in Listing 2. Having introduced how ICP is implemented, we now focus on step 3 from Listing 2, the heart of graph-based SLAM. Once all the odometry constraints are computed, each iteration of the algorithm consists of two main phases: _(i)_ calculating \(\mathbf{H}\) and \(\mathbf{b}\), and _(ii)_ solving the equation system \(\mathbf{H}\Delta\mathbf{x}=-\mathbf{b}\). The main challenge is to enable the onboard execution, given the limited available amount of RAM. Storing all entries of the \(3N\times 3N\)\(\mathbf{H}\) matrix would result in about \(1.44\,\mathrm{MB}\) for a realistic pose number of 200 and a 4-byte float representation of the matrix entries. This requirement is infeasible for resource-constrained platforms - even for our capable target platform, GAP9, which would rapidly run out of memory storing such a matrix.
However, as Listing 2 shows, constructing matrix \(\mathbf{H}\) implies looping through all edges and modifying the blocks \(\mathbf{H}_{ii}\), \(\mathbf{H}_{ij}\), \(\mathbf{H}_{ji}\), and \(\mathbf{H}_{jj}\), for each graph edge from \(i\) to \(j\). Due to the highly accurate results offered by ICP, we experimentally
Fig. 6: (a) The figure shows how the odometry and LC edges differently impact the sparsity of the \(\mathbf{H}\) matrix. (b) The computation of the Cholesky decomposition and how each element’s calculation is distributed among the CL cores.
selected an information matrix \(\Omega=20\mathbf{I}\) for the LC edges and \(\Omega=\mathbf{I}\) for the odometry edges. This deliberate choice assigns greater significance to the LC edges during the optimization process. The number of odometry edges (i.e., \(N-1\)) is typically much larger than the number of LC edges, and for most of the blocks \(\mathbf{H}_{ij}\), it holds that \(j=i+1\). Thus, most non-zero elements of \(\mathbf{H}\) are concentrated around the main diagonal. Figure 5(a), provides a graphical representation of the \(\mathbf{H}\) matrix, where the contribution of blocks \(\mathbf{H}_{ii}\) and \(\mathbf{H}_{i+1,i+1}\) is represented in green, and the contribution of \(\mathbf{H}_{i,i+1}\) and \(\mathbf{H}_{i+1,i}\) in yellow. Blocks in blue correspond to the LC edges, and their placement in the matrix does not follow a pattern.
Furthermore, by calculating the individual elements of each block with the equations from Listing 2, one could notice that some elements are always zero and represented in white in Figure 5(a). This fact increases, even more, the sparsity of \(\mathbf{H}\), resulting in \(17N-13+10n\) non-zero elements according to the filling pattern of Figure 5(a). Moreover, due to the fact that \(\mathbf{H}\) is symmetric, it is sufficient only to store the elements below and including the main diagonal, implying \(10N-5+5n\) non-zero elements. As a numerical example, for the realistic values of \(N=200\) poses and \(n=10\) LC constraints, the ratio between non-zero elements and the total number of elements \(3N\times 3N\) is about 0.56%, which proves that it is extremely memory inefficient to store a matrix in a dense form.
To exploit the high sparsity level, we propose storing \(\mathbf{H}\) in a CSR sparse matrix representation [51]; we note the non-zero element count as \(nz\). This representation uses three arrays: _(i) the values_: it has size \(nz\) and stores the non-zero elements; _(ii) the column index_: it has size \(nz\) and stores the column index associated with each value in the values array; _(iii) the row pointer_: it has size \(3N+1\), and its elements mark where in the values array a new row starts. Inserting a new element in the CSR matrix requires modifying the three arrays accordingly. Our software implementation solely utilizes static memory allocation to prevent memory leaks and overflows. Consequently, inserting elements in the sparse matrix must occur row-wise, in the ascending order of column indices - in this way, the arrays of the sparse matrix are never modified, only extended. Otherwise, a random insertion order would imply memory moves within the sparse matrix, slowing the execution. Since the filling pattern of \(\mathbf{H}\) is deterministic given the graph, an ordered element insertion is possible.
The stages of graph-based SLAM exhibit a computational complexity not exceeding \(O(N)\), except for the Cholesky decomposition. To leverage the parallel capabilities of the system, we employ the Cholesky-Crout scheme [51]. This scheme efficiently computes the matrix \(\mathbf{L}\) column by column, as outlined in Listing 3. To better illustrate the distribution of computation across eight cores of the GAP9 CL, we complement Listing 3 with Figure 5(b). In each iteration for column \(j\), the algorithm initially calculates the variable \(sum0\), which represents the sum of squared elements from line \(j\), excluding the diagonal. In Figure 5(b), these elements are visually depicted by the upper yellow line, with each CL core responsible for computing the sum of \(j/8\) elements. The value of \(\mathbf{L}(j,j)\) (highlighted in green) is subsequently determined based on \(sum0\), and afterward, all column elements are computed within the inner loop. To offload the computation of the inner loop, we employ the CL, where each core performs the calculation for a predetermined number of \(\mathbf{L}(i,j)\) entries. Each element \(\mathbf{L}(i,j)\) (depicted in orange) is derived from \(sum1\), which is computed as the dot product between row \(i\) and row \(j\) - considering only the elements to the left of column \(j\).
It is imperative to note the direct dependence of \(\mathbf{L}(i,j)\) on \(\mathbf{H}(i,j)\), signifying that any non-zero element below the main diagonal in matrix \(\mathbf{H}\) will correspondingly yield a non-zero element in matrix \(\mathbf{L}\) with identical indices. Furthermore, each element \(\mathbf{L}(i,j)\) is contingent upon the elements located to its left within the same row. Consequently, the existence of a non-zero element \(\mathbf{H}(i,j)\) implies the existence of a non-zero element \(\mathbf{L}(i,j)\), which, in turn, can influence the non-zero status of all subsequent elements in the same row of \(\mathbf{L}\).
To analyze a concrete example, Figure 6-(a) illustrates the \(\mathbf{H}\) matrix of a graph with six poses and two LC edges, where the black entries are the non-zero elements. For the symmetric \(\mathbf{H}\) matrix, the figure provides \(nz_{low}\), which is the non-zero count of the elements below or on the main diagonal. Figure 6-(b) illustrates the \(\mathbf{L}\) matrix obtained from the Cholesky decomposition of \(\mathbf{H}\). The non-zero entries originating from the LC
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline & & **40** & **80** & **120** & **160** & **200** & **240** & **280** & **320** \\ \hline \multirow{4}{*}{**Crout**} & **H** & 405 & 805 & 1205 & 1605 & 2005 & 2405 & 2787 & 3185 \\ & **L** & 1149 & 2349 & 3549 & 4749 & 5949 & 7149 & 8349 & 9549 \\ \cline{1-1} & **Lp** & 896 & 1817 & 2735 & 3657 & 4576 & 5496 & 6262 & 7155 \\ \hline \hline \end{tabular}
\end{table} TABLE II: The non-zero count of \(\mathbf{H}\), \(\mathbf{L}\) and \(\mathbf{L}_{P}\). For the symmetric \(\mathbf{H}\) matrix, only the non-zero elements from the main diagonal or lower are counted.
Fig. 6: An example of the \(\mathbf{H}\) and \(\mathbf{L}\) matrices for a graph with six poses and two LC edges (left) and how these matrices change when the RCM permutation is applied to \(\mathbf{H}\) (right).
Fig. 7: An example of the \(\mathbf{H}\) and \(\mathbf{L}\) matrices for a graph with six poses and two LC edges (left) and how these matrices change when the RCM permutation is applied to \(\mathbf{H}\) (right).
edges in the corner of matrix \(\mathbf{H}\) resulted in corresponding non-zero elements in matrix \(\mathbf{L}\), subsequently leading to non-zero elements throughout the entire row to the right. Although one might argue that the LC edges have a negligible impact on the non-zero count of matrix \(\mathbf{H}\), this is obviously not the case for matrix \(\mathbf{L}\). The further the non-zero elements of \(\mathbf{H}\) are from the main diagonal, the more non-zero entries they will yield in \(\mathbf{L}\).
To mitigate this problem, we employ the permutation solution introduced in Section III-C, which brings the elements of \(\mathbf{H}\) closer to the main diagonal. The Reverse Cuthill-McKee (RCM) algorithm [55] computes the permutation vector \(\mathbf{\pi}\) that defines the rearrangement of the rows of the identity matrix to obtain the permutation matrix \(\mathbf{P}\). The \(\mathbf{P}\) obtained through RCM minimizes the bandwidth of a given matrix - i.e., how spread apart the elements are from the main diagonal. Note that accessing any element \(\mathbf{H}_{P}(i,j)\) is equivalent to accessing \(\mathbf{H}(\mathbf{\pi}(i),\mathbf{\pi}(j))\) and therefore it is not necessary to store and compute matrix \(\mathbf{H}_{P}\). Similarly, \(\mathbf{b}_{P}(i)=\mathbf{b}(\mathbf{\pi}(i))\).
Applying this algorithm to the example \(\mathbf{H}\) matrix from Figure 7-(a) leads to the permuted \(\mathbf{H}_{P}\) shown in Figure 7-(c), whose bandwidth is visibly reduced. Applying the Cholesky decomposition to \(\mathbf{H}_{P}\) leads to the matrix \(\mathbf{L}_{P}\) from Figure 7-(d) with 103 non-zero entries, about 20% less than \(\mathbf{L}\). Furthermore, Table II provides the resulting non-zero count of \(\mathbf{L}\) and \(\mathbf{L}_{P}\) for a graph with two LC edges, varying the number of poses in the range 40 - 320. Also, in this case, the reduction in the non-zero count of \(\mathbf{L}\) after permutation is at least 22%. Our system uses the RCM algorithm to determine the permutation matrix \(\mathbf{P}\) and then solve the linear system as described in Listing 2. All steps involved in calculating \(\Delta x\) constitute a graph-based SLAM iteration. We empirically determined that the entries of \(|\Delta\mathbf{x}|\) are always smaller than \(10^{-4}\) after three iterations, so we set \(N_{iter}^{SLAM}=3\).
### _Hierarchical SLAM Implementation_
In Section III-E, the hierarchical graph-based SLAM method was introduced as an alternative approach to performing PGO, allowing to optimize graphs that exceed 440 poses. This approach utilizes the parameters \(d_{min}\) and \(\Delta\psi_{min}\), which determine the inclusion of new poses in the sparse graph based on the robot's movement and rotation. Typically, in exploration scenarios, significant variations in the heading are rare, as the robot primarily rotates when encountering walls or obstacles. As a result, the parameter \(d_{min}\) has the greatest impact on the size of the sparse graph.
We investigate the influence of the \(d_{min}\) parameter on the accuracy of the optimized graph. Increasing the value of \(d_{min}\) reduces the size of the sparse graph, allowing for the mapping of larger environments. However, this also leads to a loss in capturing fine drone movements, resulting in decreased accuracy. To assess this impact, we conducted an experiment using a square loop corridor, creating an associated graph with 2000 poses, which significantly exceeds the limits of the graph-based SLAM algorithm when executed onboard. We varied the \(d_{min}\) parameter within the range of \(0.1\,\mathrm{m}\) to \(6.4\,\mathrm{m}\), as detailed in Table III, and measured the accuracy of the resulting optimized poses for each case. To evaluate the accuracy, we computed the Pearson correlation coefficient between the optimized poses obtained using the hierarchical approach with varying \(d_{min}\) values and the poses derived from directly applying graph-based SLAM (performed on an external base station). As an additional metric, we also compare the root-mean-squared-error (RMSE) w.r.t. the directly optimized graph, considering only the \(x\) and \(y\) components of each pose. Table III shows that values of \(d_{min}\leq 0.8\) provide almost the same accuracy as optimizing the graph directly with graph-based SLAM, leading to a correlation coefficient larger than 99% and an RMSE smaller than \(1\,\mathrm{cm}\). Consequently, any value of \(d_{min}\) smaller than \(0.8\,\mathrm{m}\) is appropriate for creating the sparse graph.
### _The Exploration Strategy and Corner Detection_
In the following, we explain how the drone explores the environment and decides which areas are appropriate for acquiring scans. We mention that our mapping solution is completely independent of the type of trajectory and applicable to any environment. Thus, to demonstrate the capabilities of our system, we use a simple _exploration strategy_ that drives a drone through the environment, always following the wall on the right. In case of no walls around, the drone moves forward. If a frontal wall or obstacle is detected, the drone changes direction to the left or right, depending on which direction is free - if both are free, the drone chooses left. Conversely, if a dead-end is detected, the drone lands, and the mission ends. This simple exploration strategy is configured with the target velocity of \(0.5\,\mathrm{m}/\mathrm{s}\) - correlated with the size of the rooms we explore.
Since our system should work autonomously in any environment, it must possess the ability to determine when a new scan should be acquired. Regions that exhibit rich textures, such as corners, are highly suitable for acquiring scans that facilitate precise scan-matching. To achieve this objective, we implemented a corner detector that takes a scan frame as input and utilizes the Hough transform [56] to identify all the straight lines defined by the points within the scan frame. The presence of any pair of lines that create an angle of at least \(30^{\circ}\) indicates that the scan frame represents a corner. The corner detector runs in the STM32 in less than \(1\,\mathrm{ms}\).
### _The STM32 Application_
The STM32 MCU is the manager of all the processes running onboard the nano-UAV. Even if it does not carry any heavy computation, it is responsible for off-loading it to the GAP9 via SPI communication. The application we developed on the STM32 is structured in three Free-RTOS tasks: _the mission task_, _the flight task_, and _the sensor task_. Overall, these tasks require less than \(4\,\mathrm{kB}\) and only 2% of additional
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \(\mathbf{d_{min}(m)}\) & **0.1** & **0.2** & **0.4** & **0.8** & **1.6** & **3.2** & **6.4** \\ \hline Correlation (\%) & 99.9 & 99.9 & 99.9 & 99.7 & 98.9 & 96.7 & 93.4 \\ RMSE (\(\mathrm{cm}\)) & 0.04 & 0.12 & 0.27 & 0.59 & 24.3 & 46.7 & 84.0 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Hierarchical graph-based SLAM.
CPU load in total. The sensor task communicates with the ToF sensors via I2C. It configures each sensor before the mission starts, fetches data from the ToF matrix sensors, and passes it to the other tasks. The flight task runs the wall-following exploration strategy introduced in Section V-E. However, other tasks can notify the flight process via a Free-RTOS queue to perform other maneuvers, such as stopping and spinning the drone to acquire a scan. The loops of the flight task and the sensor task run with a frequency of \(15\,\mathrm{Hz}\).
The mission task manages the scan acquisition and the communication with the GAP9 using SPI packets. The flowchart from Figure 8-left presents a detailed illustration of the mission flow. In every iteration, the task fetches the ToF data from the sensor task and the current pose from the internal state estimator and sends this information to the GAP9. In the absence of any previous scans in the current location, if the current scan frame corresponds to a corner and the drone has traveled a minimum distance of \(1.2\,\mathrm{m}\) from the last scan, the drone captures a reference scan. Then, the scan pose is stored in a structure called _scan pose list_, which stores the locations of all acquired scans. On the other hand, if the current location is actually revisited - i.e., the drone is closer than \(0.6\,\mathrm{m}\) to one entry of the scan pose list - an LC scan is acquired, given at least \(1\,\mathrm{m}\) from the last scan. The STM32 informs the GAP9 about the LC and then sends a PGO command. Lastly, the STM32 updates the scan pose list, fetching the updated scan pose values from the GAP9. The loop of a mission task runs at \(7.5\,\mathrm{Hz}\), skipping every second ToF frame.
Note that the scan acquisition is identical for the reference and LC scans, and only the subsequent steps differ. During a scan, the flight task is spinning the drone by \(45^{\circ}\), while the mission task continues sending new poses to the GAP9 - this was omitted in Figure 8 for the sake of readability. Furthermore, it is important to impose a minimum distance between scans. Conducting consecutive scans at the same location does not enhance the system's efficacy but leads to a substantial surge in memory utilization. The distance thresholds were determined experimentally.
### _The GAP9 Application_
The STM32 MCU performs several crucial functions, including managing sensor communication, controlling the drone, and determining when to acquire new scans. In contrast, the GAP9 processor assumes a subordinate role by handling computationally intensive tasks. The STM32 sends SPI packets to the GAP9, and each packet consists of a command and a corresponding data field, with the interpretation of the data contingent upon the specific command type. The GAP9 continually awaits the arrival of a new SPI packet, upon which it proceeds to decode and execute the command. Four possible SPI commands are defined in the system, distinguished by a command ID. The first is the _new pose_ command, signaling that a new pose should be added to the graph, along with its associated ToF data. The graph is stored in memory as a table (i.e., _the graph table_), where each graph table entry contains the pose ID, the timestamp, the pose values, and the ToF data from the four sensors. The structure of each graph table entry is presented in Table IV, and the total size of an entry is \(84\,\mathrm{B}\). Note that one graph table entry carries all the necessary information to compute one scan frame.
The second command is the _LC information_ command. Within this command, the STM32 communicates that an LC edge should be added to the graph, from an id \(j\) to id \(i\), communicated in the SPI packet. The GAP9 application fetches the graph table entries from \(i\) to \(i+19\) and from \(j\) to \(j+19\) and then calculates their associated scans \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\). Having the two scans, it then computes the LC edge measurement using ICP and stores it into the _LC edge list_. The third command defined within our system is the _PGO_ command, which informs the GAP9 to optimize the existing poses in the graph, given the LC edge list. Our system always used the hierarchical graph-based SLAM for PGO, as it allows for mapping larger environments. When PGO completes, the graph table is updated with the new pose values. Optionally, the map is regenerated by combining the scan frames computed from every graph pose entry. Lastly, the fourth command is the _pose request_, which enables the STM32 to obtain the value of a particular pose in the graph table by communicating its ID in the SPI packet. This is necessary because the STM32 application must update the scan pose list after every PGO. Furthermore, it must also update the drone's state estimator with the updated value of the most recent pose. Figure 8 illustrates the behavior of each command and the interaction with the STM32. NanoSLAM represents the whole logic running in the GAP9, which stores the graph, fetches scans, uses scan-matching to add new LC edges, and exploits hierarchical PGO to correct the poses.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Field** & **Representation** & **Size** \\ \hline Pose ID & _int32_ & \(4\,\mathrm{B}\) \\ Timeestamp & _int32_ & \(4\,\mathrm{B}\) \\ Pose & \(3\times\textit{float}\) & \(12\,\mathrm{B}\) \\ ToF Data & \(4\times 8\times\textit{int16}\) & \(64\,\mathrm{B}\) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Structure of a graph table entry
Fig. 8: The flow diagram of our software and the interaction between the STM32 and the GAP9.
## VI Performance Analysis
In this section, we provide a breakdown of the execution time of our algorithms. We mainly evaluate the ICP, graph-based SLAM, and its hierarchical extension, emphasizing the benefits of the eight-core parallelization. Lastly, we also provide a power breakdown for the individual stages of graph-based SLAM. Power measurements are conducted upstream of the buck converter on the GAP9 co-processor deck, which receives a voltage supply of \(4\,\mathrm{V}\) and generates an output voltage of \(1.8\,\mathrm{V}\) intended for supplying the GAP9 SoC. The GAP9 always operates at the maximum frequency - i.e., \(400\,\mathrm{MHz}\) for both FC and CL.
### _Execution Time of ICP_
Table V shows the ICP execution time as a function of the scan size. The first line of the table provides the total execution time when the algorithm is parallelized and computed with the aid of the CL. To highlight the benefit of parallelization, we also provide the execution time when ICP runs entirely in the FC - i.e., second table line. We notice a speedup, defined as the ratio between the execution time on the FC and the CL, that increases with the scan size. This is due to the memory transfer overhead; for larger scan sizes, the computation time of the correspondences is significantly higher than the time necessary to transfer the scans from L2 to L1, which the CL can access. For a scan size larger than 640, the achieved speedup is above seven with eight cores. Furthermore, for the scan size used for the scope of this paper (i.e., 640), the ICP executes in \(55\,\mathrm{ms}\).
### _Execution Time of Graph-based SLAM_
In the previous section, we have explained how every constitutive stage of graph-based SLAM is implemented. In the following, we provide the execution time and complexity of every stage. In this regard, we first analyze the Cholesky decomposition, which is the most complex and computationally intensive part of graph-based SLAM. As explained in Section V-C, the decomposition is offloaded to the CL of GAP9 to accelerate its execution through parallelization over eight cores. To highlight the advantages of parallelization, we also evaluate the execution time of the decomposition solely on the FC using a single core. Subsequently, we analyze the resulting measurements in comparison to those obtained on the CL. Similarly to the example analyzed in Section V-C, we consider a graph with two LC edges, and we vary the number of poses in the range of 20 - 440 with a step of 60. Table VI provides the results of this comparative analysis, showing the execution time as a function of the number of poses. The _Speedup_ line gives the ratio between the execution time on the FC and the CL for each pose number. The maximum speedup is achieved for 440 poses, reducing the execution time from \(232\,\mathrm{ms}\) to \(34.96\,\mathrm{ms}\) and resulting in a speedup of 6.64. Overall, the table shows an increasing trend of the speedup with the number of poses. This is because the overhead for moving the input matrix to L1 (accessible by the CL) becomes more and more negligible w.r.t. the computation time for larger graphs. Furthermore, the _Speedup SLAM_ line shows how many times the whole graph-based SLAM algorithm is accelerated when the Cholesky decomposition is offloaded to the CL. The maximum speed is 5.08, achieved with 440 poses.
Table VII presents the execution time analysis of the main stages of graph-based SLAM. The stages are paired with step 3 of Listing 2. The experiment involves a graph comprising two LC edges and a varying number of poses ranging from 20 to 440. The initial four rows of the table provide detailed information about the individual execution times for each stage within a single iteration, while the subsequent row presents the total iteration time. Notably, the Cholesky decomposition
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**N.e. of poses** & **20** & **80** & **140** & **200** & **260** & **320** & **380** & **440** \\ \hline CL (8 cores) & 0.51 & 2.52 & 5.51 & 9.39 & 14.71 & 20.18 & 26.88 & 34.96 \\ FC (1 core) & 0.86 & 8.76 & 24.88 & 49.72 & 82.74 & 124.1 & 173.9 & 232.0 \\
**Speedup** & **1.68** & **3.47** & **4.52** & **5.29** & **5.63** & **6.15** & **6.47** & **6.64** \\
**Speedup SLAM** & **1.28** & **2.24** & **2.97** & **3.55** & **4.04** & **4.43** & **4.79** & **5.08** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: The execution time in \(\mathrm{ms}\) of the Cholesky decomposition. CL is the cluster with 8+1 cores, while the FC is the single-core fabric controller of the GAP9 SoC.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**N.e. of poses** & **20** & **80** & **140** & **200** & **260** & **320** & **380** & **440** \\ \hline H and b & 0.3 & 1.1 & 2 & 2.9 & 3.9 & 4.6 & 5.4 & 6.3 \\ RCM & 0.2 & 0.9 & 1.5 & 2.2 & 2.9 & 3.5 & 4.2 & 4.9 \\ Cholesky & 0.5 & 2.5 & 5.5 & 9.4 & 14.7 & 20.2 & 26.96 & 35.0 \\ Fwd+Bwd & 0.2 & 0.8 & 1.3 & 1.8 & 2.3 & 2.8 & 3.3 & 3.8 \\
**Iter. time** & **1.3** & **5.4** & **10.4** & **16.3** & **23.7** & **31.1** & **39.9** & **50** \\
**Total (3 iter.)** & **3.7** & **15.1** & **29.5** & **46.8** & **67.3** & **90** & **116.1** & **144.9** \\ \hline \hline \end{tabular}
\end{table} TABLE VII: The execution time in \(\mathrm{ms}\) of graph-based SLAM.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**N.e. of poses** & **20** & **80** & **140** & **200** & **260** & **320** & **380** & **440** \\ \hline CL (8 cores) & 0.51 & 2.52 & 5.51 & 9.39 & 14.71 & 20.18 & 26.88 & 34.96 \\ FC (1 core) & 0.86 & 8.76 & 24.88 & 49.72 & 82.74 & 124.1 & 173.9 & 232.0 \\
**Speedup** & **1.68** & **3.47** & **4.52** & **5.29** & **5.63** & **6.15** & **6.47** & **6.64** \\
**Speedup SLAM** & **1.28** & **2.24** & **2.97** & **3.55** & **4.04** & **4.43** & **4.79** & **5.08** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: The execution time in \(\mathrm{ms}\) of the Cholesky decomposition. CL is the cluster with 8+1 cores, while the FC is the single-core fabric controller of the GAP9 SoC.
Fig. 9: The execution time in \(\mathrm{ms}\) of graph-based SLAM for multiple configurations of the number of poses and LC edges.
accounts for approximately 40% to 70% of the total iteration time. Additionally, it is observed that all stages, except for the Cholesky decomposition, exhibit linear complexity. Although the conventional decomposition has a complexity of \(O(N^{3})\), the numbers in Table VII demonstrate a purely quadratic relationship with the number of poses, showing a correlation of 99.9% with a second-order polynomial fit. This is due to our efficient implementation that exploits the sparsity properties. The last row provides the total execution time of the three iterations. This is approximately equal to the iteration time multiplied by three, but inter-iteration differences are possible due to different non-zero counts of the \(\mathbf{H}\) and \(\mathbf{L}\) matrices.
In the next experiment, we investigate the impact of varying both the number of poses and LC edges on the execution time of graph-based SLAM. The ranges considered for the number of poses and LC edges are 20 - 440 and 1 - 64, respectively, as depicted in Figure 9. The figure illustrates that increasing either parameter leads to an increase in the execution time, although the relationship is not strictly monotonic. Interestingly, for instance, the scenario with 64 LC edges demonstrates a faster execution time with 320 poses compared to 260 poses. This behavior can be attributed to the Cholesky decomposition's execution time, which is affected by the non-zero count of the matrix \(\mathbf{L}\), determined by the permutation obtained through RCM. Since RCM does not guarantee the same non-zero reduction for all matrices, some configurations could benefit from a higher non-zero reduction in \(\mathbf{L}\) after applying the permutation to \(\mathbf{H}\). Given the \(128\,\mathrm{kB}\) of L1 available to the CL, the graph-based SLAM algorithm can optimize at most 440 poses at a time, requiring \(321\,\mathrm{ms}\) with 64 LC edges and \(148.5\,\mathrm{ms}\) with one LC edge.
The execution time of the hierarchical graph-based SLAM strongly depends on the structure of the sparse graph and subgraphs. Using a large value for \(d_{min}\) would result in a small, sparse graph and few large subgraphs. On the other hand, using a small \(d_{min}\) would result in a large sparse graph and many small subgraphs. As a numerical example, for a graph of 2000 poses associated with a square loop corridor, a \(d_{min}=0.3\,\mathrm{m}\) results in a total execution time of \(406\,\mathrm{ms}\), where the size of the sparse graph is 162 poses. Assuming that the drone is flying with a constant velocity through the maze, the size of the subgraphs is about the same. Under the assumption of a uniform subgraph size, the total execution time is \(t_{sg}+(M-1)t_{subgraph}\), where \(t_{sg}\) is the time required to optimize the sparse graph and \(t_{subgraph}\) is constant.
### _Power Analysis_
Table VIII shows the power and energy consumption of the ICP as a function of the scan size. An observable rising pattern in the average power is observed, attributed to the correspondence calculation occupying a larger proportion of the overall execution time for larger scan sizes. The maximum power consumption is \(177.6\,\mathrm{mW}\) for a scan size of 1024. However, for the scan size that we use (i.e., 640), the average power and energy consumption are \(172.5\,\mathrm{mW}\) and \(10.07\,\mathrm{mJ}\), respectively. In conclusion, for every LC, the system consumes about \(10\,\mathrm{mJ}\) plus the energy consumed to optimize the graph.
In Figure 10, the power trace of the GAP9 deck during the execution of graph-based SLAM is presented. The experiment involves 440 poses and 2 LC edges. The labels positioned above the plot represent the four primary stages of the algorithm: calculation of matrices \(\mathbf{H}\) and \(\mathbf{b}\), computation of the RCM permutation, Cholesky decomposition, and solution computed through forward and backward propagation. An observation can be made that the power consumption is notably higher during the Cholesky decomposition phase, primarily due to the activity of the CL. The peak of the power curve reaches \(153\,\mathrm{mW}\), while the average power value amounts to \(119.3\,\mathrm{mW}\). When the CL is inactive, and the FC solely handles the computation, the power instant tends to remain below \(80\,\mathrm{mW}\). The total energy consumed for executing the graph-based SLAM is calculated to be \(18.2\,\mathrm{mJ}\). In Table IX, the average power and energy are tabulated for the experiment conducted with various numbers of poses ranging from 20 to 440. The average power shows a monotonic trend, decreasing with a smaller number of poses due to reduced Cholesky decomposition execution time.
## VII In-Field Experiments
In this section, we evaluate the algorithms introduced in Section III and recall that NanoSLAM is the framework that leverages hierarchical PGO to optimize the graph and correct the drone's trajectory while considering the LC edges provided by ICP. We, therefore, present three main classes of results: _(i)_ an evaluation of the rotation and translation error
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Nr. of poses** & **20** & **80** & **140** & **200** & **260** & **320** & **380** & **440** \\ \hline Avg. power (mW) & 72.3 & 86.3 & 94.7 & 88.9 & 108.7 & 111.6 & 114.9 & 119.3 \\ Energy (mJ) & 0.31 & 1.5 & 3.1 & 4.38 & 7.8 & 10.73 & 14.29 & 18.2 \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Power and energy consumption of graph SLAM.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Scan Size** & **128** & **256** & **384** & **512** & **640** & **768** & **896** & **1024** \\ \hline Avg. power (mW) & 121.5 & 149.5 & 165.0 & 169.9 & 172.5 & 175.2 & 176.3 & 177.6 \\ Energy (mJ) & 0.54 & 1.76 & 3.77 & 6.53 & 10.07 & 14.28 & 19.17 & 25.07 \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Power and energy consumption of ICP.
Fig. 10: The power curve associated with the graph-based SLAM execution for 440 poses, showing the power consumption during every main stage of the algorithm.
achieved by the scan-matching algorithm; _(ii)_ an investigation on how NanoSLAM improves the trajectory estimation; _(iii)_ coherent maps generated out of the pose graph and the ToF measurements. Our results are experimentally acquired and demonstrate the effectiveness of our closed-loop system that leverages NanoSLAM and carries the computation entirely onboard. The ground truth (GT) used in our evaluation is provided by the Vicon Vero 2.2 motion capture system (mo-cap) installed in our testing arena. To assess our system's localization and mapping capabilities, we build mazes of different complexities out of \(1\,\mathrm{m}\times 0.8\,\mathrm{m}\) chipboard panels.
### _Scan-matching Evaluation_
In the following, we analyze the scan-matching capabilities of the ICP algorithm. In this scope, we position the drone inside a \(90^{\circ}\) pipe of \(1\,\mathrm{m}\) width made out of chipboard panels. The drone is then commanded to take off and acquire a scan - i.e., _Scan 1_. Then, we manually change the position of the drone by about \(30\,\mathrm{cm}\) and \(30^{\circ}\) and repeat the same procedure to obtain _Scan 2_. The two scans are shown in _Iteration 0_ of Figure 11. The drone position is changed to simulate the odometry drift that the drone normally acquires when it revisits a location and evaluates the ICP performance in matching two non-overlapping scans. Therefore, the resulting rotation and translation values by the ICP are compared with the ground truth - obtained out of the ground truth of each individual pose. We obtain a translation error of \(e_{T}=$3.5\,\mathrm{cm}$\) and a rotation error of \(e_{R}=$2.3^{\circ}$\) - the reported translation error \(e_{T}\) represents the norm of the two components of the error - i.e., on \(x\) and \(y\). To ensure the validity of the results, we repeated the experiment multiple times, always obtaining a translation error \(e_{T}<$6\,\mathrm{cm}$\) and a rotation error \(e_{R}<$5^{\circ}$\).
Figure 11 shows the result of the ICP algorithm after every three iterations. We recall that ICP aims to determine the rotation and translation that, once applied to _Scan 2_, results in an optimal overlapping with _Scan 1_. This is represented by the green curve, which represents the scan obtained by applying the current ICP estimate to _Scan 2_. Furthermore, \(e_{ICP}\) represents the arithmetic mean of the Euclidean distances between each correspondence pair of the red and green curves. This metric evaluates the overlapping degree between _Scan 1_ and the ICP solution applied to _Scan 2_, and it is a good indicator of when the algorithm should stop. We observe that in this case, as well as in other experiments we conducted, the ICP solution that leads to \(e_{ICP}\approx$0.001\,\mathrm{m}$\) is found in about 20 iterations. We note that the rotation and translation errors (\(e_{T},e_{R}\)) provide a quantitative indication of the precision of the solution found by ICP. Conversely, \(e_{ICP}\) is only an intrinsic parameter indicating the convergence progress. For example, if the input scans are affected by large amounts of noise or biases, \(e_{ICP}\) could still indicate a small value, while the actual transformation found by ICP is inaccurate.
### _SLAM Results_
In the following, we demonstrate our system's capabilities to correct trajectories and generate coherent maps in three different mazes of increasing complexity. The first maze
Fig. 11: A breakdown of the ICP algorithm, indicating the solution found by the algorithm after every three iterations. Assuming a target overlapping error of \(\approx$0.001\,\mathrm{m}$\), the algorithm converges in about 20 iterations.
Fig. 12: (a) The tracking error on both the \(x\) and \(y\) axis and (b) the evolution of the optimized (i.e., with NanoSLAM) and unoptimized trajectories represented against the ground truth.
consists of a square loop corridor similar to the one used as an example in Section III-D, whereas the latter two mazes exhibit greater complexity and are illustrated in Figure 13.
#### V-D1 Maze 1
We start with a simple circular maze (i.e., _Maze 1_), shown in Figure 14(a). The drone's mission starts from the bottom left corner and flies three laps - i.e., cycle through the maze three times, relying on the wall following strategy, introduced in Section V-E. During the first lap, the drone identifies every corner and acquires a reference scan in each of the four. In the second and third laps, the drone acquires new scans in the revisited corners, creating LC constraints with the reference scans and adding new LC edges to the graph. To allow for a consistent comparison between the unoptimized poses (i.e., no drift correction) and the optimized poses (i.e., with NanoSLAM), we only perform graph optimization at the end of the mission. In this way, we show the benefits of using NanoSLAM on data from the same mission.
We define as _trajectory_ the set of all poses acquired during a mission and as positioning error the pose-wise Euclidean distance between the poses and their GT. Figure 11(a) shows the positioning error (in black) of the optimized and unoptimized trajectories, calculated by subtracting the GT from the optimized and unoptimized poses, respectively. The vertical grey lines indicate when the drone passes again through the starting point. Furthermore, in the same figure, we represent the \(x\) and \(y\) components of the trajectory over time (in yellow) to highlight a pattern between the positioning error and the trajectory. While the error of the unoptimized trajectory drifts unbounded, the optimized trajectory shows a rather repetitive pattern, with the error being zero every time the drone crosses the starting point. This is because the reference scan (of _pose 0_) acquired right after take-off is error-free and, therefore, any LC constraint between a _pose k_ and _pose 0_ will correct the positioning error of _pose k_ almost completely - the correction is only bounded by the accuracy of the ICP.
Despite using NanoSLAM, the positioning error takes considerable values of about \(0.4\,\mathrm{m}\) throughout one lap. A fundamental assumption when using SLAM is that the odometry errors are increasing slowly [57], and therefore the poses associated with the reference scans acquired at the beginning of the mission are accurate. However, this assumption does not hold in our case, as the poses of the second and third reference scans already have errors of up to \(0.4\,\mathrm{m}\), and these errors will represent a lower bound for a future LC correction. Throughout a lap, the positioning error increases while the drone moves toward the top left corner of the maze and decreases during the last half of every lap. In other words, forward movement on the \(x\) and \(y\) axis increases the positioning error, while a backward movement along one axis decreases the positioning error. This translates into a direction-dependent odometry bias that is positive for forward movements and negative for backward movements. Since the takeoff position is always \((0,0)\), the effect of the bias results in a scaled trajectory w.r.t. the GT.
The scaling effect can also be seen in Figure 11(b), which shows information from the same mission but represents the \(x\) and \(y\) components of the optimized and unoptimized trajectories as well as the GT. The optimized trajectory (black) is very similar to the ground truth (red), but scaled by a factor which we determined to be \(\approx 11\%\). On the other hand, we stress that the shape of the unoptimized trajectory (yellow) is often very different compared to the ground truth, which proves the effectiveness of NanoSLAM in correcting the trajectory and making it match the ground truth. So far, we have proved that the errors uncorrectable by NanoSLAM (i.e., the direction-dependent drift) are deterministic, and simply scaling the poses obtained from the drone's state estimator by \(0.9\) mitigates their effect. To demonstrate that this scaling actor generalizes for any environment, we apply the same correction for all mazes considered for our experiments and presented later in this section. The most likely cause of the direction-dependent errors is the down-pointing optical flow camera onboard the drone, which estimates the drone's velocity and enables the state estimator to determine the position by integrating the velocity. However, since the drone tilts when moving in a particular direction, this also rotates the camera frame, which is no longer parallel to the ground and leads to errors.
Next, we also analyze the capability of NanoSLAM to correct the heading estimation error (i.e., yaw). Figure 14 shows the yaw estimation error for the optimized and unoptimized poses over time. The error curves were again computed with the aid of the GT. One can notice that the heading estimation tends to drift unbounded for the unoptimized trajectory. Furthermore, the spikes in the error curves are associated with the scan acquisition when the drone rotates by \(45^{\circ}\) and subsequently returns to its initial heading. While these error spikes are also visible in the yaw corrected with NanoSLAM, the estimate's mean is stable, only exhibiting a steady state error mainly bounded by the precision of ICP.
Fig. 14: The heading error with and without NanoSLAM.
Fig. 13: Maze 2 (left) and Maze 3 (right).
In the presented experiments within this section, we conducted optimization according to the approach detailed in Section III, wherein an LC edge is incorporated into the graph upon acquisition of an LC scan. With the NanoSLAM methodology that we introduced, the LC edges integrated into the graph are not subject to removal. Consequently, the LC edges count can only increase throughout the mission, and as shown in the experiments conducted in Section VI, this can even triple the execution time of PGO. For this purpose, we also analyze the _1-LC NanoSLAM_, which always discards the LC edge after optimization. Therefore, within the 1-LC NanoSLAM, the graph will always have one LC edge. Although this lightweight approach discards the prior constraints and consequently cannot guarantee future alignment of previously matched regions, we consider it a compelling exploration due to its superior scalability when many LCs are performed. In the following, we analyze the trajectory correction and mapping performance of both optimization techniques: NanoSLAM and 1-LC NanoSLAM.
\[RMSE_{pos}=\sqrt{\frac{\sum_{i=0}^{N-1}\|\mathbf{x}_{i}-\mathbf{x}_{i}^{GT}\|^{2}}{N}} \tag{6}\]
Figure (b)b shows the unoptimized trajectory, the trajectory optimized with NanoSLAM and how they compare to the GT. Although the unoptimized trajectory exhibits a noticeable deviation from the GT, we observe a substantial alignment between the optimized trajectory and the GT, providing compelling evidence of the efficacy of NanoSLAM. While the 1-LC NanoSLAM trajectory shown in Figure (c)c also leads to satisfactory results, we notice a trajectory misalignment within the laps due to the LC constraint relaxation. We introduce a quantitative metric for evaluating how close each trajectory is to the ground truth, which we call positioning root-mean-squared-error (RMSE). Given an arbitrary trajectory represented by the poses \(\mathbf{x}_{0}\ldots\mathbf{x}_{N-1}\), and the corresponding set of ground truth poses \(\mathbf{x}_{0}^{GT}\ldots\mathbf{x}_{N-1}^{GT}\), the positioning RMSE is calculated as in Equation 6. The formula uses a reduced pose representation, considering only each pose's \(x\) and \(y\) components. Applying Equation 6 for the trajectories in Figure (b)b, we obtain a positioning RMSE of \(0.46\,\mathrm{m}\) for the unoptimized trajectory and \(0.146\,\mathrm{m}\) for the trajectory optimized with NanoSLAM, showing a reduction in the positioning RMSE of about three times. Despite the inter-lap trajectory misalignment in Figure (c)c, it leads to a positioning RMSE of \(0.18\,\mathrm{m}\), which is 23% higher than the trajectory optimized with NanoSLAM.
After proving the capability of NanoSLAM to correct trajectories, we further to explore the mapping performance. Applying Equation 1 for each trajectories from Figures (b)b - (c)c, we obtain the maps shown in Figures Figures (d)d - (f)f. The maps in Figure (d)d and Figure (e)e are associated with the unoptimized and optimized trajectories in Figure (b)b, while the map in Figure (f)f corresponds to the trajectory optimized
Fig. 15: Maze 1: (a) Illustration of the maze layout, showing the takeoff and landing locations and where an LC is performed. (b)-(c) The trajectories obtained with the two optimization approaches (in black) and the GT (in red). (d)-(f) The dense maps generated based on the unoptimized, NanoSLAM-corrected, and 1-LC NanoSLAM-corrected trajectories.
with 1-LC NanoSLAM. Visibly, Figure 15d shows the poorest accuracy, as no graph optimization is used, and therefore the position and heading drift heavily impact the alignment of the three laps. The map corrected with NanoSLAM is the most accurate w.r.t. the GT. Similarly, the map computed from the 1-LC NanoSLAM-optimized trajectory is definitely usable, but due to imposing one constraint at a time in the optimization process, not all corners corresponding to the three laps are perfectly aligned. We also propose a quantitative metric to analyze the mapping accuracy, which we call _the mapping RMSE_. This metric first calculates the length of the projection to the closest straight line for every point in a given dense map - the extensions of the maze lines are also considered. The mapping RMSE represents the RMSE of all projection lengths, as shown in Equation 7. This metric penalizes how far each map point is from a wall, and in a noise-free case, when all map points are on a maze line, the mapping RMSE is zero. Applying Equation 7 on the three maps from Figures 15d - 15f leads to \(21.5\,\mathrm{cm}\), \(5.8\,\mathrm{cm}\), and \(7.3\,\mathrm{cm}\) for the cases with no optimization, NanoSLAM and 1-LC NanoSLAM, respectively. This proves that NanoSLAM reduces the mapping RMSE by about 3.7 times.
\[RMSE_{map}=\sqrt{\frac{\sum_{i=0}^{N-1}{(\min_{\forall w\in W}dist(w,\mathbf{p}_{ i}))^{2}}}{N}} \tag{7}\]
#### Vi-B2 **Maze 2**
In the following, we present the trajectories and maps obtained by employing identical optimization methodologies on a marginally more intricate maze, as illustrated in Figure 13-(left). A top-view layout of the maze depicted in Figure 16a reveals the presence of non-straight-angle walls, distinguishing it from Maze 1. The trajectory followed by the drone is represented in Figure 16a, where the green cross marks the take-off point, and the arrow indicates the flying direction. Since only the left half of the maze is revisited, the LC is only performed four times, as indicated by the grey crosses. Table X presents the positioning and mapping RMSE for the experiments in Maze 2. We report a reduction of 67% and 64% in the positioning RMSE when applying NanoSLAM and 1-LC NanoSLAM, respectively, compared to the scenario without any optimization. Looking at the maps from Figures 16d-Figures 16f, we notice a significant heading drift that rotates the map w.r.t. the GT when no optimization is performed.
Mapping with the 1-LC NanoSLAM approach leads to a mapping RMSE of \(7.5\,\mathrm{cm}\), about 53% smaller than the case without optimization. Employing NanoSLAM reduces the mapping RMSE even further, to \(4.5\,\mathrm{cm}\), representing a reduction of 72% compared to the case without optimization.
While the positioning RMSE is somewhat similar for the NanoSLAM and 1-LC NanoSLAM approaches, the relative difference in the mapping RMSE is more significant - about 40%. This observation is also evident in Figure 15(f), where a comparison with the map depicted in Figure 15(e) reveals the presence of certain artifacts. For instance, the bottom side of the triangle-shaped wall is distorted, or the bottom maze wall appears thicker. This is again the effect of dropping previous LC edges and therefore failing to satisfy the constraints associated with all corners.
#### V-B3 Maze 3
The last maze we propose is the most complex among the three because it also contains obstacles such as pillars, boxes, or a bin, and therefore it better generalizes a real-world indoor environment. Figure 13-(right) shows an image of the maze, while the top-view layout is shown in Figure 16(a). The drone starts from the middle position marked with the green cross and performs two maze loops flying in a counterclockwise direction. Similar to the previous cases, throughout the first lap, the drone only acquires reference scans, while in the second lap, it closes the loop in every corner, as indicated by the grey crosses in Figure 16(a). Table XI shows the positioning and mapping RMSE obtained with Maze 3. The optimized trajectories depicted in Figures 16(b) - 16(c) exhibit a positioning RMSE reduced by 65% and 60%, respectively, compared to the unoptimized trajectory. Figures 16(d) - 16(f) show the three maps, and unlike the experiments in the first two mazes, the maps generated with NanoSLAM and 1-LC NanoSLAM are very similar. One can still notice a misalignment on the right side of the map in Figure 16(f), but it is not significant. This is also visible in the mapping RMSE, where the error of the NanoSLAM-based map is only 14% smaller than in the map obtained with the 1-LC NanoSLAM approach. Both approaches bring a significant improvement w.r.t. the map in Figure 16(i.e., without optimization), reducing the mapping RMSE by 63% with NanoSLAM and 57% with 1-LC NanoSLAM.
In the NanoSLAM experiment, the optimization problem results in a graph with 1355 poses and five LC edges. The subgraph has a size of 109 poses, and the total hierarchical optimization requires \(247\,\mathrm{ms}\). For the experiment employing 1-LC NanoSLAM, the graph has 1293 poses, and the final optimization requires \(223\,\mathrm{ms}\). The graphs associated with
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Metric** & **No SLAM** & **NanoSLAM** & **1-LC NanoSLAM** \\ \hline Positioning RMSE & \(44.1\,\mathrm{cm}\) & \(15.4\,\mathrm{cm}\) & \(17.3\,\mathrm{cm}\) \\ Mapping RMSE & \(20.3\,\mathrm{cm}\) & \(7.5\,\mathrm{cm}\) & \(8.7\,\mathrm{cm}\) \\ \hline \hline \end{tabular}
\end{table} TABLE XI: Positioning and mapping RMSE for Maze 3
Fig. 17: Maze 3: (a) The maze layout that contains various objects. (b)-(c) The trajectories obtained with the two optimization approaches (in black) and the GT (in red). (d)-(f) The dense maps generated based on the unoptimized, NanoSLAM-corrected, and 1-LC NanoSLAM-corrected trajectories.
Maze 3 are larger than those of Maze 1 (\(\approx\) 1200 poses) and Maze 2 (\(\approx\) 850 poses). For the experiment in Maze 3, running graph optimization results in an average power consumption of \(69.1\,\mathrm{mW}\) and a total energy consumption of \(17.06\,\mathrm{mJ}\) - note how optimizing a 1355 pose graph with the hierarchical graph-based SLAM results in the same energy consumption as optimizing a 440 pose graph with the direct graph-based SLAM. Overall, for every LC, NanoSLAM implies an average power consumption of \(87.9\,\mathrm{mW}\) and an energy consumption of \(27.13\,\mathrm{mJ}\) - accounting for both PGO and ICP.
### _Discussion_
We discuss the limitations of our system and possible future improvements. We first recall the importance of odometry calibration to maximize mapping accuracy. The calibration approach depends on the type of sensors available on the platform and, thus, the odometry calibration is a platform-specific tuning. In the following, we estimate the maximum area that can be mapped with our system. This limitation mainly comes from the maximum size (i.e., 440 poses) of the sparse graph used by the hierarchical PGO. We attempt to perform such an estimation starting from Maze 3, the most complex environment we map in our work. Note that when mapping Maze 3, we let the drone fly two laps to acquire more maze details. On the other hand, this is not mandatory, and flying just one lap and closing the loop when the drone crosses again through the start would as well lead to good results. Furthermore, in our experiments, we employed a \(d_{min}=$0.3\,\mathrm{m}$\) when adding poses to the sparse graph to demonstrate that our system can work with large graphs. However, as shown in Table III, setting \(d_{min}=$0.8\,\mathrm{m}$\) results in almost the same optimization accuracy. Mapping one lap of Maze 3 implies the drone to travel about \(14\,\mathrm{m}\), which results in 18 poses for the sparse graph when using \(d_{min}=$0.8\,\mathrm{m}$\). Therefore, it is possible to map an environment of about \(13\,\mathrm{m}^{2}\) using 18 poses. By extrapolation, 440 poses would allow our system to map an environment of about \(317\,\mathrm{m}^{2}\).
Another limitation of our system comes from the ToF sensors, which replace the conventional LiDARs. This substitution introduces several distinctions between the two technologies. Firstly, LiDARs typically possess a greater operational range, enabling robots to map distant areas of the environment. In contrast, our ToF sensor is restricted by a maximum range of \(4\,\mathrm{m}\). Additionally, LiDARs often exhibit a higher angular resolution, resulting in measurement accuracy that is less reliant on the distance magnitude. Conversely, our employed ToF sensor operates with distinct zones, assuming that any detected obstacle within a zone is located at the zone's center. Considering the angle of one zone of \(\theta_{zone}=45^{\circ}/8=5.625^{\circ}\), the maximum distance error induced by this effect is approximately \(e_{max}=d\cdot\tan(\theta_{zone}/2)\approx 0.05\cdot d\). While this error is negligible for short distances, it exceeds \(5\,\mathrm{cm}\) for distances beyond \(1\,\mathrm{m}\). Consequently, these errors can lead to map misalignments, regardless of the effectiveness of PGO.
A potential extension that we mention involves expanding our system's capabilities to accommodate a swarm of drones. Given that ICP can derive transformations between scans captured by different drones, it would be feasible to merge pose graphs from multiple drones and optimize them collectively to align their trajectories. This would enable faster mapping of an environment through parallel sensing of distinct areas by the swarm of drones, consequently reducing the overall mapping time. Another area of future work is to perform mapping in 3D. While our system assumes a flat environment, enabling absolute altitude estimation would allow to create a 3D map by acquiring depth measurements at various heights. Furthermore, implementing a mechanism that discards the depth measurements associated with moving people or obstacles would enable our system to operate even in dynamic environments. Lastly, another possible exploration entails leveraging the map generated by our approach to enable additional functionalities, such as optimal path planning. Converting the dense maps produced by our approach into binary maps - using the approach from [58] - would further reduce the memory footprint of the maps. For example, Figure 18 depicts the binary representation of the dense map from Figure 17(e) for resolutions of \(7.5\,\mathrm{cm}\) and \(5\,\mathrm{cm}\).
## VIII Conclusions
The paper presented NanoSLAM, a lightweight SLAM for autonomous drones, and the methodology to enable fully onboard mapping for small robotic platforms, which before was only possible with larger and more power-intensive computational platforms. NanoSLAM is the first system that enables SLAM for autonomous nano-UAVs and performs the computation entirely onboard exploring a novel RISC-V parallel low-power processor. We demonstrated the effectiveness of NanoSLAM by mapping three different real-world mazes, achieving a mapping error down to \(4.5\,\mathrm{cm}\) and reducing the trajectory estimation error by up to 67%. The SLAM algorithm runs onboard in less than \(250\,\mathrm{ms}\) and the whole mapping pipeline requires less than \(500\,\mathrm{kB}\) of RAM. In spite of its remarkably lightweight configuration (\(44\,\mathrm{g}\)), the system introduced in this study achieves mapping accuracy on par with SoA approaches developed for standard-size UAVs, but consumes only \(87.9\,\mathrm{mW}\). The system presented in this paper sets the foundation for increased autonomy in small form-factor robots with highly constrained hardware, thus introducing novel technology to the field of nano-UAVs. By enabling a comprehensive environmental map, this advancement opens up possibilities for advanced navigation solutions, including enhanced flight autonomy through optimal path planning.
Fig. 18: Binary occupancy maps resulted from Maze 3.
## Acknowledgments
This work is partly supported by BRAINSEE project (#8003528831) funded by armasuisse Science and Technology of the Swiss Confederation.
|
2309.03105 | The Secrets of Non-Blind Poisson Deconvolution | Non-blind image deconvolution has been studied for several decades but most
of the existing work focuses on blur instead of noise. In photon-limited
conditions, however, the excessive amount of shot noise makes traditional
deconvolution algorithms fail. In searching for reasons why these methods fail,
we present a systematic analysis of the Poisson non-blind deconvolution
algorithms reported in the literature, covering both classical and deep
learning methods. We compile a list of five "secrets" highlighting the do's and
don'ts when designing algorithms. Based on this analysis, we build a
proof-of-concept method by combining the five secrets. We find that the new
method performs on par with some of the latest methods while outperforming some
older ones. | Abhiram Gnanasambandam, Yash Sanghvi, Stanley H. Chan | 2023-09-06T15:43:00Z | http://arxiv.org/abs/2309.03105v1 | # The Secrets of Non-Blind Poisson Deconvolution
###### Abstract
Non-blind image deconvolution has been studied for several decades but most of the existing work focuses on blur instead of noise. In photon-limited conditions, however, the excessive amount of shot noise makes traditional deconvolution algorithms fail. In searching for reasons why these methods fail, we present a systematic analysis of the Poisson non-blind deconvolution algorithms reported in the literature, covering both classical and deep learning methods. We compile a list of five "secrets" highlighting the do's and don't's when designing algorithms. Based on this analysis, we build a proof-of-concept method by combining the five secrets. We find that the new method performs on par with some of the latest methods while outperforming some older ones.
photon-limited, deconvolution, inverse problems, deblurring, shot noise
## I Introduction
### _From Gaussian to Poisson deconvolution_
Image deconvolution is one of the most fundamental problems in image restoration. When the blur kernel is fixed and given, the problem is known as _non-blind deconvolution_. For spatially invariant blur and additive i.i.d. Gaussian noise, the goal of deconvolution is to recover \(\mathbf{x}\in\mathbb{R}^{N}\) from the equation
\[\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{n}, \tag{1}\]
where \(\mathbf{n}\in\mathbb{R}^{N}\) is the i.i.d. Gaussian noise, and \(\mathbf{H}\in\mathbb{R}^{N\times N}\) is the blur kernel represented as a convolution matrix [1, 2]. The inverse problem associated with Eq. (1) has been studied for a few decades, with an extensive list of methods, both classical [3, 4, 5, 6, 7, 8, 9, 10, 11] and deep-learning based [12, 13, 14, 15, 16, 17, 18].
With such a large volume of prior work, it would appear that the problem is solved. However, as we push the limit of image deconvolution to _low-light_ conditions, the problem remains wide open. Moreover, the growth of advanced photon counting image sensors and the need for extreme low light imaging applications [19, 20, 21, 22, 23, 24] makes the problem even more interesting than before. As people have shown in [25], even an ideal image sensor with zero read noise cannot escape from the photon shot noise. Thus, signal processing at this limit remains critical.
The change from a well-illuminated condition to a low-light condition is not just about switching the Gaussian model to a Poisson model 1
Footnote 1: The Poisson model we study in this paper is a simplification of the actual image formation process which should involve dark current, read noise, etc.. However, given that the Poisson problem is already difficult enough, we decided to focus on it in this paper.
\[\mathbf{y}=\text{Poisson}\{\alpha\mathbf{H}\mathbf{x}\}, \tag{2}\]
where \(\alpha\) is the average number of photons in the scene [26]. The increased difficulty is not associated with the unbounded-below and the non-differentiable-at-origin property of the Poisson negative-log-likelihood, but the magnitude of the noise exhibited in the data. In a typical low light condition, the mean photon count can be as low as one to ten photons per pixel. At this photon level, the random fluctuation of the signal would cause many algorithms to fail.
The impact of noise in Poisson deconvolution is noticeable in every step of a deconvolution algorithm. Since there is noise, it becomes much harder for an algorithm to invert the blur (usually in the Fourier space) and remove the noise. Deep learning algorithms also suffer from heavy noise because extracting features from the image becomes more difficult. In fact, Poisson deconvolution has only been discussed in a few deep-learning papers [28, 29, 30, 31].
### _Scope and contributions_
Given the success of Gaussian-noise based image deconvolution algorithms, we believe that the lessons learned in the
Fig. 1: **Overview**. The goal of this paper is to identify factors that will benefit Poisson image deblurring. Shown in this example are a simulated blurry and noisy image (where the noise is Poisson), and the corresponding image reconstruction results. The proposed method (to be discussed in Section IV) is just a combination of the five factors we identified, _without_ introducing any new architectures.
past can shed light on understanding the Poisson problem. To this end, we analyze a large collection of non-blind deconvolution algorithms reported in the literature. We look into the design details of each method and compile a list of do's and don't's we learned from these methods.
As a preview of our results, we show in Figure 1 the image reconstruction results of three methods published in the literature: PURE-LET [27] (T-IP, 2017), DWDN [14] (T-PAMI, 2022), and USRNet [18] (CVPR, 2020). All three methods are fine-tuned using Poisson data. In the same figure, we also report a proof-of-concept method by combining the "secrets" we learned in this paper. We stress that this proof-of-concept method is not meant to become a state-of-the-art but rather a confirmation of ideas described in the paper. Interestingly, the performance of this proof-of-concept is quite satisfactory.
So, what are our observations? We found the following five "secrets" of non-blind Poisson deconvolution:
1. [label=()]
2. **Wiener filter is recommended**. While some networks perform deconvolution and denoising simultaneously, we find that it is better to decouple the deconvolution part using a Wiener filter so that we can leverage the fact that the blur kernel is known. Of course, we assume that the blur is spatially invariant.
3. **Iteration is recommended**. Many networks estimate the image in a single shot. We find that iterative algorithms are more effective. For deep neural networks, the iterative algorithms can be implemented via algorithm unrolling.
4. **Feature space is recommended**. It is better to perform deconvolution in the feature space than in the spatial domain.
5. **Poisson likelihood is not needed**. When handling Poisson noise, there is no need to use customized tools such as variance stabilizing transform or the Poisson likelihood. Any architecture for Gaussian noise also works for Poisson.
6. **Learning the hyper-parameters is recommended**. Some algorithms estimate the hyperparameters using an off-the-shelf method or a heuristic rule. We find that end-to-end learning of the hyperparameters helps the performance.
This paper focuses on non-generative methods. Our analysis does not cover generative models (e.g., generative adversarial networks or denoising diffusion probabilistic models) because they belong to a different category of approaches. We do not consider _blind_ deconvolution algorithms because we do not estimate the blur kernel.
## II Analysis of Prior Methods
Given the large number of papers published for non-blind image deconvolution, it would be unrealistic to comment on every single method. The approach we take here is to focus on a _representative_ subset of existing methods. However, the selection of the representative methods would require some work. In what follows, we first list a number of Poisson deconvolution methods. We group them, and discuss their attributes. Afterward, we select the representative methods and discuss their design philosophies.
### _Prior Methods_
To help readers visualize the methods being studied in this paper, we summarize them in Table I. These methods can be categorized into two main classes:
**Classical Methods**. By classical methods, we mean methods that do not require learning. These methods are typically developed before the deep-learning era. In this paper, we select three representative methods with code publicly available:
* PURE-LET, by Li and Blu [27], is a non-iterative deblurring algorithm that uses the Poisson unbiased risk estimator (PURE) as a metric to guide the steps in linear expansion thresholding (LET). The thresholding idea used here is similar to several other paper [36, 37, 38].
* VSTP, by Azzari and Foi [32], uses the variance stabilization transform (VST) to equalize the variance of the Poisson random variable. Then, a deblurring algorithm is applied to handle the blur.
* Deconvtv, by Chan et al. [33], uses total variation for Gaussian noise removal. Its performance is not necessarily the best compared to other total variation solvers such as [39, 40, 41, 42, 43, 44, 45], but its code is readily available for experiments.
We acknowledge that there are plenty of other classical methods, such as [46, 47, 48, 49, 50, 51, 11]. These papers made great contributions in improving the prior models of the images so that the deblurring and denoising can be more effective. Some of these methods perform very well whereas some are similar to the three abovementioned methods. For concreteness of this
paper and considering the availability of their codes, we decide to focus on the ones we mentioned above.
**Deep-Learning Methods**. While deep learning based deconvolution algorithms are abundant, many of them are _blind_ algorithms. For non-blind methods, we consider nine of them.
* Deep Wiener Deconvolution Network (DWDN) [14] is a deep neural network that performs Wiener deconvolution in the feature space followed by a decoder. A follow-up method INFWIDE [52] adds a cross-residual fusion module. In this paper, we focus on DWDN for clarity and simplicity.
* KerUnc [16], CPCR [34], USRNet [18], PhDNet [29], and [15] are perform fixed iteration unrolling of alternating direction method of multipliers (ADMM), half quadratic splitting or gradient descent methods followed by end-to-end training.
* DPIR [35] uses the plug-and-play (PnP) based ADMM optimization to solve the problem.
* DWKF [17] is an iterative method that uses kernel prediction networks for imposing the image priors.
### _Attributes of the Methods_
With more than ten methods listed in Table I, it would be helpful if we could further categorize them according to their attributes. The attributes we highlight here will be used to inform the do's and don't's of designing an algorithm.
* either trained end-to-end [13] or as a pretrained block [18]. By definition, all classical methods are treated as non-neural network methods in this paper.
* **Decoupling?** Decoupling means that a method handles the deblurring step and the denoising step separately. The decoupling can be realized via variable splitting (e.g., in ADMM), or via a two-stage operation (e.g., in PURE-LET). For neural networks, we say that it employs a decoupling strategy if there are modules explicitly performing deblurring and are separated from denoising.
* **Poisson likelihood?** If a method explicitly uses the Poisson likelihood in an algorithm, then this attribute is satisfied. Some methods, usually deep neural networks, do not incorporate the Poisson likelihood in its algorithm design, for example [14, 34]
* **Iterative?** Both classical and deep learning methods can be iterative. The iteration can occur in the form of an actual iteration (as in optimization steps) or algorithm unrolling in deep learning methods.
* **Learned parameters?** All restoration methods have a set of hyperparameters. If these hyperparameters are picked manually, we say that the parameters are not learned. In contrast, if the hyper-parameters are simultaneously selected by the learning algorithm, then we say that the parameters are learned.
* **Feature space?** For some deep learning methods, the deconvolution does not take place in the spatial domain [27] but in the feature space [13, 52]. We check this box to reflect the property.
### _Design Principles_
We now discuss the design principles of the methods shown in Table I. To narrow down the discussion to a smaller set of methods, we compared the methods' performance on a testing dataset. The execution of the experiment is described in Section III when we discuss the five secrets of Poisson deconvolution. For the sake of brevity, the detailed numbers are reported in the Supplementary Material. Based on the performance of the methods, we select five leading methods that cover four categories. They are:
1. Traditional, non-iterative: PURE-LET [27]
2. Traditional, iterative: VSTP [32]
3. Neural-network, non-iterative: DWDN [14]
4. Neural-network, iterative: USRNet [18], PhDNet [28]. Two methods were chosen because of their similar performance.
#### Iii-B1 PURE-LET [27]
The core idea of PURE-LET is to construct multiple initial estimates using the Wiener filter, which is essentially a Fast-Fourier transform (FFT) based deconvolution. Given the blur matrix \(\mathbf{H}\), PURE-LET estimates a set of \(K\) initial guesses via
\[\widehat{\mathbf{x}}_{k}^{\text{Wiener}}=\texttt{Wavelet}[\left(\mathbf{H}^{T }\mathbf{H}+\lambda_{k}\mathbb{I}\right)^{-1}\mathbf{H}^{T}\mathbf{y}], \tag{3}\]
where \(k=\{1,2,\ldots K\}\) denotes the \(k\)th Wiener estimate, and \(\lambda_{k}\) is the \(k\)th hyperparameter. The operator Wavelet denotes the wavelet thresholding, which is the method PURE-LET used to clean up the estimates. The estimates are then linearly combined in such a way that they minimize the mean square error, i.e.,
\[\widehat{\mathbf{x}}=\sum_{k=1}^{K}a_{k}\cdot\widehat{\mathbf{x}}_{k}^{\text {Wiener}}, \tag{4}\]
where \(\{a_{k}\,|\,k=1,\ldots,K\}\) are the optimal combination weights determined by minimizing the Poisson unbiased risk estimate (PURE).
A conceptual diagram of PURELET is shown in Figure 2. Referring to Table I, PURELET employs a decoupling strategy by separating the deconvolution step and the denoising step. The Poisson likelihood is used to compute the risk estimate, but it was not used for the deconvolution step which is a filter bank of Wiener filters.
#### Iii-B2 DWDN [14]
DWDN has many similarities to PURE-LET. Instead of applying the Wiener filter on the images, DWDN applies it to the features:
\[\widehat{\mathbf{x}}_{k}^{\text{feature}}=\left(\mathbf{H}^{T}\mathbf{H}+ \lambda_{k}\mathbb{I}\right)^{-1}\mathbf{H}^{T}\mathcal{F}_{k}^{\text{feature}}( \mathbf{y}), \tag{5}\]
Fig. 2: PURE-LET [27] constructs a bank of Wiener filters to deblur the image, followed by image denoisers.
where \(\mathcal{F}_{k}^{\text{feature}}(\cdot)\) is a neural network trained to produce features. The estimated deblurred features \(\{\widehat{\mathbf{x}}_{1}^{\text{feature}},\widehat{\mathbf{x}}_{2}^{\text{ feature}},\dots,\widehat{\mathbf{x}}_{K}^{\text{feature}}\}\) are then fed to another neural network for refinement \(\mathcal{F}_{\text{refine}}\) to obtain the final output \(\widehat{\mathbf{x}}\):
\[\widehat{\mathbf{x}}=\mathcal{F}_{\text{refine}}\{\widehat{\mathbf{x}}_{1}^{ \text{feature}},\widehat{\mathbf{x}}_{2}^{\text{feature}},\dots,\widehat{ \mathbf{x}}_{K}^{\text{feature}}\}. \tag{6}\]
The feature networks \(\{\mathcal{F}_{k}^{\text{feature}}\,|\,k=1,\dots,K\}\) and the refinement network \(\mathcal{F}_{\text{refine}}\) are trained end-to-end. When the mean squared error (MSE) loss is used, DWDN and PURE-LET both aim to find the MMSE estimate.
A schematic diagram of DWDN is shown in Figure 3. If we compare DWDN with PURE-LET, we recognize that the overall multi-channel filter bank idea is the same. The only difference is that DWDN performs the deconvolution operations in the feature space. The denoisers are also replaced by neural networks. Moreover, since DWDN does not need to estimate the risk (as in PURE-LET), the Poisson likelihood is not considered.
#### Iii-B3 Vstp [32]
VSTP extends the idea of PURE-LET to make it iterative. VSTP starts with a single estimate of the deblurred image \(\widehat{\mathbf{x}}^{\text{Wiener}}\) instead of the multiple estimates used in PURE-LET. However, the overall concept of decoupling the deconvolution and the denoising steps remain the same.
An interesting idea of VSTP is to iteratively update the denoising step so that each denoising step can be "mild". To do so, a linear combination of \(\widehat{\mathbf{x}}^{\text{Wiener}}\) and the denoised estimate from the previous iteration \(\widehat{\mathbf{x}}_{t-1}\) is obtained via
\[\widehat{\mathbf{x}}_{t}^{\text{data}}=\lambda_{t}\widehat{\mathbf{x}}_{t}+(1 -\lambda_{t})\widehat{\mathbf{x}}^{\text{Wiener}} \tag{7}\]
A variance stabilizing transform (VST) is then used to stabilize the spatially varying noise strength of \(\widehat{\mathbf{x}}_{t}^{\text{data}}\), which is then denoised with Denoiser,
\[\widehat{\mathbf{x}}_{t}=\texttt{Denoiser}\left[\text{VST}\left(\widehat{ \mathbf{x}}_{t}^{\text{data}}\right)\right]. \tag{8}\]
The iteration continues until the stopping criteria are met.
In VSTP, the variance stabilizing transform is more of a technical need because the noise is spatially varying. The rationale of using VST is that when the photon level is not too low, VST is able to stabilize the variance so that the spatially varying variance will become invariving.
A schematic diagram of VSTP is shown in Figure 4. In the literature, people sometimes refer the denoising module as transform-denoise [19].
#### Iii-B4 PhDNet [28] and USRNet [18]
Both methods are based on maximizing the posterior probability (hence they are a maximum-a-posteriori (MAP) estimator). More specifically, the estimate is obtained by solving the optimization:
\[\widehat{\mathbf{x}}=\underset{\mathbf{x}}{\text{argmax}}\,\Big{[}\log\mathbb{ P}(\mathbf{y}|\mathbf{x})+\log\mathbb{P}(\mathbf{x})\Big{]}, \tag{9}\]
where \(\mathbb{P}(\mathbf{y}|\mathbf{x})\) is the likelihood term and \(\mathbb{P}(\mathbf{x})\) is the natural image prior. USRNet models the problem by assuming that the noise is Gaussian (without considering the fact that the true noise distribution is Poisson). Thus, in USRNet, the likelihood term is
\[\log\mathbb{P}(\mathbf{y}|\mathbf{x})=-\|\mathbf{y}-\alpha\mathbf{H}\mathbf{x }\|^{2}. \tag{10}\]
PhDNet explicitly takes into consideration of the Poisson noise, which leads to the following likelihood term
\[\log\mathbb{P}(\mathbf{y}|\mathbf{x})=-\alpha\mathbf{1}^{T}\mathbf{H} \mathbf{x}+\mathbf{y}^{T}\log(\alpha\mathbf{H}\mathbf{x}), \tag{11}\]
where \(\mathbf{1}\) is a vector with all ones.
Both methods solve the optimization using an unrolled neural network. Two steps are common for both methods:
* The inversion module is similar to a Wiener filter. For iteration \(t\), it is given by \[\widehat{\mathbf{x}}_{t}^{\text{data}}=\left(\mathbf{H}^{T}\mathbf{H}+\alpha \mathbb{I}\right)^{-1}\left(\mathbf{H}^{T}\mathbf{y}+\alpha\widehat{\mathbf{x} }_{t-1}\right).\] (12)
* The Gaussian denoising module, which can be considered as a refinement step: \[\widehat{\mathbf{x}}_{t}=\mathcal{F}^{\text{refine}}(\widehat{\mathbf{x}}_{t} ^{\text{data}})\] (13)
PhDNet has an additional step in each iteration to deal with the Poisson noise.
A schematic diagram of the methods is shown in Figure 5. On neural networks, the iterations are implemented via algorithm unrolling. That is, we unfold the optimization algorithm into a fixed number of blocks where each block is implemented via a neural network. When looping through this fixed number of blocks, effectively
Fig. 4: VSTP [32] applies variance stabilizing transform and a denoiser for the denoising step. The denoising step is also repeated in an iterative manner to improve the performance.
Fig. 5: USRNet [18], PhDNet [28] is an optimization-based algorithm where the problem is decoupled into deconvolution, Poisson data, and image denoising. The method is iterative; in deep neural networks, the iterations are realized via algorithm unrolling.
Fig. 3: DWDN [14]. While it shares similarities with PURELET, it performs Wiener deconvolution in the feature space instead of the image space.
algorithm. For additional details about algorithm unrolling, we refer the readers to [28, 29, 53].
## III The Secrets
In Section II we analyzed the structures of the prior methods, but this alone does not tell us much about the secrets of Poisson deconvolution. In this section, our goal is to dive into the details by conducting a series of experiments. From the experimental results, we then draw conclusions about the influencing factors for Poisson deconvolution. Some of the discussions are based on the main experimental result Table VI, which are presented in Section V.
### _Experimental setting_
Our approach to analyzing the performance of the prior methods is based on a series of carefully designed experiments. Since this is an empirical approach, we first state the background experimental settings.
First of all, we consider classical methods and deep learning methods separately, because deep learning methods require training. To make sure that the comparisons are fair, we retrain all the deep learning methods with the exact same training dataset, same training loss, and fine-tune the hyper-parameters to maximize their performances.
For training, we use images from the Flickr2K [54] dataset. We generate 500 random kernels based on [55]. These 500 kernels consist of five groups of sizes where each group has 100. The sizes are \(9\times 9\), \(18\times 18\), \(27\times 27\), \(36\times 36\), and \(45\times 45\). In addition, we generate 64 Gaussian kernels of varying anisotropy with the blur parameter \(\sigma\) between \(0.1\) and \(5\). Images of size \(128\times 128\) are cropped randomly from the dataset and then each image is blurred using a random kernel among these 500+64 = 564 kernels. For noise, we assume that the photons per pixel (ppp) is ranged between 1 and 80. 2
Footnote 2: The average ppp can be adjusted by varying \(\alpha\) in Eq. (2)
During training, we use the \(\ell_{1}\) loss between the reconstructed image \(\widehat{\mathbf{x}}\) and the ground truth image \(\mathbf{x}\) to train the networks. The loss function is defined by
\[\mathcal{L}(\widehat{\mathbf{x}},\mathbf{x})=\|\widehat{\mathbf{x}}-\mathbf{x }\|_{1}, \tag{14}\]
where \(\|\cdot\|_{1}\) denotes the \(\ell_{1}\) norm. We train all the networks for 500 epochs, with the Adam optimizer. The learning rate is initialized as \(10^{-4}\) which gets halved every 100 epochs. The batch size was set to \(2\) for all the methods. We do so to ensure a fair comparison because some methods consume more GPU power. The inputs to the networks include the degraded image \(\mathbf{y}\) and the blur kernel \(\mathbf{h}\). Some methods like [18, 28] take the noise level as inputs. In such cases, the photon level \(\alpha\) corresponding to each image was sent as the input.
For testing, we evaluate the methods using synthetically degraded images obtained by blurring 100 images from the BSD300 dataset [56]. We use 3 different sets of 5 motion kernels of size \(9\times 9\) (Small), \(27\times 27\) (Medium), \(45\times 45\) (Large) using [55]. Each combination of the image and motion is evaluated at three different photon levels (10, 30, and 50).
### _Secret 1: Using Wiener filters is recommended_
We observe that the five methods discussed in Section II-C all have a separate Fourier-based deconvolution module - irrespective of whether they are traditional methods or deep learning-based methods. The presence of the Fourier-based deconvolution module hints that a black-box neural network might have some limitations.
The decoupling approach makes sense in classical methods. In these methods, Poisson deconvolution is often posed as MAP-based optimization. Since it is very difficult for a simple optimization step to simultaneously handle blur and noise, it makes sense to decouple them.
How about deep neural networks? One would expect that since they have a large capacity, they wouldn't need to adopt a decoupling strategy. To examine the need for decoupling, we compare the two configurations as shown in Figure 6 - neural networks with and without a Wiener filter.
In this experiment, we use the ResUNet from [18] for the task shown in Figure 6(b). We train the network at a particular light level of 10 photons per pixel (ppp). To ensure that there is no domain gap, we train the network for _one_ single blur kernel and test it for the exact same blur kernel. For the configuration shown in Figure 6(a), we use a single-iteration USRNet. A single-iteration USRNet is nothing but a deconvolution module followed by a refinement network. We train the network with a large range of photon levels and blur kernels, as described in Section III-A. Our argument is that if a specialized network in Figure 6(b) cannot beat a generic network in Figure 6(a), then there must be some fundamental limits in the network itself.
The results are shown in Figure 7. We observe that the black-box neural network cannot handle blur and noise simultaneously. In contrast, a network with an explicit deconvolution step performs much better. Our conjecture is that since we know the blur kernel, it is better to incorporate this forward model in the solution when deblurring the image.
### _Secret 2: Iterations are recommended_
Iterative methods, regardless if they are traditional or neural-network-based, tend to perform better according to Table VI. Let us explain why this is the case.
Fig. 6: **How Wiener filters are used. We consider two neural networks, where in (a) we decouple the inversion step by a Fourier-based deconvolution module which is the Wiener filter, and in (b) we use only a neural network. The added computational complexity of the Wiener filter is minimal because it is a simple inversion in the Fourier space.**
Consider USRNet as an example: It is an iterative algorithm where the iterations are given by Eq. (12) and Eq. (13). The first step Eq. (12) is the deconvolution module which produces an estimate \(\widehat{\mathbf{x}}_{t}^{\text{data}}\), and the second step Eq. (13) is a neural network denoiser that refines the estimate to generate \(\widehat{\mathbf{x}}_{t}\). The performance of a denoiser is directly related to the input quality. The noisier the input is, the worse the reconstruction performance will be [57]. In a single-shot low-light deconvolution, we need to have a very good estimate from Eq. (12), and this needs to be computed directly from the noisy image itself. In iterative schemes, even though the initial estimate \(\widehat{\mathbf{x}}_{t}^{\text{data}}\) is not good, the mild refinement steps will gradually improve the image quality because they use both the previous estimate \(\widehat{\mathbf{x}}_{t-1}^{\text{data}}\) and the current estimate.
Figure 8 shows a typical per-iteration PSNR of an iterative scheme USRNet. Putting aside the initial estimate (which shows a downward PSNR trend), the performance generally goes up as the number of iterations increases. Specifically, we see that after each pair of \(\widehat{\mathbf{x}}_{t}\) and \(\widehat{\mathbf{x}}_{t}^{\text{data}}\), the performance improves. The exact dynamics of the PSNR is difficult to track because it is image-dependent. However, the trend confirms our hypothesis that iterations are helpful.
For unrolled algorithms, the number of iterations is realized by the number of blocks. A natural question is the number of such iterative blocks -- will more blocks improves the overall deconvolution result? Figure 9 shows the results of four USRNets trained at different number of iterative blocks. It is clear from the result that more iterations leads to a better final performance, although there is a diminishing return after several blocks.
Another question we ask is the _type_ of iterations. Among the methods reported in Table VI, there are two different kinds of iterative schemes as shown in Figure 10. The first one is the USRNet where the estimate \(\widehat{\mathbf{x}}\) is fed back to the data module (i.e., the inversion module), and is combined with the raw input \(\mathbf{y}\) and kernel \(\mathbf{h}\) to construct a new intermediate estimate. The second iterative scheme is the one used in VSTP. In this scheme, the estimate \(\widehat{\mathbf{x}}\) is used to form a linear combination with the output of the Wiener filter.
To evaluate the performance of the two schemes, we modify USRNet to incorporate the VSTP mechanism. We argue that this is a fairer comparison than directly using VSTP because VSTP uses a traditional denoiser BM3D. In our modification, we ensure that the two networks are trained with the same type and same amount of data. The results are shown in Figure 11,
Fig. 8: **Secret 2: Iteration is recommended.** We plot the PSNR of the estimates at each iteration of USRNet. We can notice that ignoring the first iteration, the plot aligns with our claim - A better deconvolution leads to a better refinement, which turn again leads to a better deconvolution.
Fig. 10: **What kind of iterations help?** By comparing USRNet and VSTP, we observe two types of iterations. (a) USRNet sends \(\widehat{\mathbf{x}}\) back to the data module iteratively to improve the estimate. (b) VSTP uses \(\widehat{\mathbf{x}}\) to form a linear combination with the Wiener filter outputs. We find that iteration in (a) is more effective.
Fig. 7: **Secret 1: Wiener filter is recommended** (a) Clean. (b) Degraded image. (c) Blur kernel. (d) A deconvolution U-Net trained on a variety of kernels. (e) A deconvolution U-Net trained on the specific blur kernel defined in (b). Note that even if we train the network _specifically for the blur kernel_, the result is still not satisfactory. (f) Deconvolution when the Wiener step is included in the U-Net.
Fig. 9: **Number of iterations.** We retrain USRNet with different numbers of iterations. We can see that we obtain a performance boost by increasing the number of iterations.
where we see that the iterative scheme by USRNet has a clear advantage over the VSTP scheme.
Based on the above experiments, our recommendation regarding the iterative scheme is that iterative schemes offer better performance than one-shoot methods. Among the different iterative schemes, we recommend feeding back the estimate \(\widehat{\mathbf{x}}\) directly to the inversion module so that the features of \(\widehat{\mathbf{x}}\) will be utilized better.
### _Secret 3: Feature space is recommended_
The next secret about Poisson deconvolution is that it is better to deconvolve the image in the _feature space_ instead of the image space. This observation is based on the difference between PURE-LET and DWDN in Table VI. Both PURE-LET and DWDN use multiple Wiener filters. PURE-LET uses different deblurring strengths (as specified by the hyper-parameter \(\lambda\)), whereas DWDN uses the same Wiener filter for different feature maps. But the biggest difference is that PURE-LET performs the deconvolution in the image space whereas DWDN performs the deconvolution in the feature space. We show in this subsection that the superior performance of DWDN is partially driven by feature space deconvolution.
To prove the usefulness of feature space deconvolution instead of image space, we consider the following four modifications of DWDN by placing the Wiener filters in different ways.
1. **Configuration I** in Figure 12(a) uses a single Wiener filter followed by a refinement network. This is the vanilla network for baseline analysis.
2. **Configuration II** in Figure 12(b) uses three Wiener filters as in PURE-LET. Each Wiener filter uses a different regularization parameter \(\lambda\). We use a deep neural network as the refinement step so that it is a fair comparison with DWDN.
3. **Configuration III** in Figure 12(c) uses a feature extraction unit to pull the features before sending them to Wiener filters. This is the same as DWDN. In our experiment, there are 16 feature maps. The regularization parameter \(\lambda\) is the same across the 16 Wiener filters.
4. **Configuration IV** in Figure 12(d) uses 16 Wiener filters where each has three sub-configurations. Each sub-configuration uses a different \(\lambda\). We regard Configuration IV as the ultimate modification we can make within the context of our analysis.
The comparisons between these configurations are shown in Table II. We see that across the different photon levels, the ones that perform deconvolution in the feature space are _significantly_ better. Our intuitive argument is that in the feature space, the signals are already _decomposed_. If the feature extraction unit is powerful, signals will be captured in a few leading feature dimensions whereas noise will be concentrated in the other dimensions. Therefore, the strong signal features will be deconvolved well by the Wiener filter with a smaller \(\lambda\), whereas the noise features will be attenuated by a large \(\lambda\). As a result, the overall deconvolution will be better. As for how much regularization \(\lambda\) is needed, Configuration III and IV tell us that the benefit is marginal.
Fig. 11: **What kind of iterations help?** This figure is a follow-up of Figure 10 where here we plot the PSNR as a function of the photon level.
Fig. 12: **Secret 3: Feature space is recommended**. The two types of deconvolutions: image space and feature space. (a)-(b) perform deconvolution in the spatial domain, whereas (c)-(d) perform deconvolution in the feature space.
Based on these findings, our recommendation here is that whenever possible, deconvolution should be performed in the feature space. Using different regularization parameter \(\lambda\) does not seem to have a significant difference.
### _Secret 4: Poisson likelihood is not needed_
By virtue of Poisson deconvolution, the likelihood function should be Poisson. However, several observations make us believe that the Poisson likelihood is not needed in a neural network based solution.
Our first observation is the comparison between USRNet and PhDNet in Table VI. USRNet uses a Gaussian likelihood whereas PhDNet uses a Poisson likelihood. Because of the Poisson likelihood, PhDNet needs to introduce a variable splitting technique to specifically handle the Poisson part, see the added Poisson module illustrated in Figure 13. However, from Table III we observe that the difference in performance between the two methods is negligible.
Readers may argue that the vanishing performance gap is due to the iterations, i.e., as the number of iterations increases, the network capacity increases and hence they are more capable of handling the Poisson statistics. To prove that this is not the case, we train four versions of USRNet and PhDNet with a fixed number of 1, 2, 4, and 8 iterative blocks in their unrolled networks. We can see from Table III that irrespective of the number of iterations used by the method, USRNet performs as well as PhDNet. Therefore, whether or not we use an explicit Poisson module does not matter.
Another "indirect" observation is about the design of PURE-LET. In PURE-LET, the Poisson statistics is used to estimate the PURE score which is an unbiased risk estimator of the mean squared error. However, the actual deconvolution step is performed by a bank of Wiener filters - which is derived from Gaussian statistics.
If Poisson modules are not needed, we expect that techniques associated with the Poisson likelihood would not have any significance to the restoration problem. This observation is supported by inspecting methods using the variance stabilizing transform (VST). Figure 14 shows a typical VST-based image denoising algorithm. In the VST case, we first apply VST to stabilize the Poisson variance. We then denoise the image, and transform back via the inverse VST. In our experiment, we use the ResUNet from [18] as the denoiser.
Table IV shows the performance between using VST or not. We observe that using VST does not offer the denoiser any advantage. The network without VST even marginally outperforms the denoiser with VST. This finding is consistent with what was reported in [20] for binomial noise.
Based on the above analysis, our conclusion is that when handling the Poisson noise in low-light, network architectures designed for Gaussian likelihood will work just as well. There is no clear advantage of using the more complicated Poisson likelihood and/or variance stabilizing transforms. As long as we can synthesize the Poisson noise for training, the explicit Poisson modules are unimportant.
### _Secret 5: Learning hyperparameters is recommended_
Among the learning-based methods, USRNet and PhDNet use networks to learn the hyperparameters that get used in the data module, Poisson module and the refinement net. However, DWDN uses a heuristic method for estimating this parameter. To understand if it is important to learn the hyperparameters, we modify DWDN and learn the hyperparameter that is being input to the Wiener filters using the same network structure
Fig. 14: **Variance Stabilizing Transform.** VSTs are often used in Poisson noise. (a) A denoising method using VST. (b) A denoising method without VST.
Fig. 13: **Secret 4: Poisson likelihood is not needed.** USRNet and PhDNet are both iterative unrolled networks. The difference is that in PhDNet, an explicit Poisson module is used to handle the Poisson noise.
used by PhDNet to learn its parameters. Figure 15 illustrates the conceptual difference between the two.
The result of this experiment can be found in Table V. We notice that when DWDN is augmented with a small network for learning the hyperparameters, it performs slightly better than using a heuristic for finding the parameters. The improvement is less than 0.1dB which is not very noticeable. However, since the computational cost of adding a hyper-parameter learning module is so small compared to the whole network, it does not hurt to include it.
Based on the above experiments, we conclude that hyper-parameter learning is helpful but it is not necessary. We still recommend it because it saves us from hand-tuning the hyper-parameters.
## IV Combining the Secrets
After presenting the five secrets, a natural question is: "what if we combine these ideas?" To this end, we create a method called the Five-in-One Network (FIO-Net). We make two remarks before we discuss this network: Firstly, we do not regard FIO-Net as a novel invention or claim it to be a state-of-the-art. We view FIO-Net a check point of the five secrets. We are more interested in checking whether its performance is consistent with the five secrets, rather than expecting it to beat other methods by a big margin. Secondly, although FIO-Net is a combination of the five secrets, it would still require some design because otherwise there is no guarantee it should work. We will present a way to integrate these five ideas.
To elaborate on the design principle of the FIO-Net, we first use Secret 4 to replace Poisson likelihood with the Gaussian likelihood. This implies that as far as the network structure is concerned, we can focus on the Gaussian forward model:
\[\mathbf{y}=\alpha\mathbf{H}\mathbf{x}+\mathbf{n}, \tag{15}\]
where \(\alpha\) defines the photon level. We remark that this is _not_ the original Poisson deconvolution problem that we want to solve. However, since Secret 4 tells us that utilizing the Poisson likelihood is not needed, we consider the Gaussian model when designing the neural network. When training the model, we take blurred images and add synthetic Poisson noise instead of Gaussian noise.
Remark: The concept using a sub-optimal forward model in exchange of better reconstruction performance is perhaps counter-intuitive. The general line of argument is known as computational image formation which we refer readers to [58] for detailed elaborations.
Our next step is to use Secret 3, which suggests us to perform the deconvolution in the feature space. To this end, we consider a set of linear filters \(\{\mathbf{F}_{i}\mid i=1,2,\ldots,M\}\) and apply them to
\[\mathbf{F}_{i}\mathbf{y}=\alpha\mathbf{F}_{i}\mathbf{H}\mathbf{x}+\mathbf{F}_ {i}\mathbf{n}. \tag{16}\]
Since \(\mathbf{F}_{i}\) and \(\mathbf{H}\) represent convolutional operations in matrix form, we can switch the order using the commutative property of convolution to obtain
\[\mathbf{F}_{i}\mathbf{y}=\alpha\mathbf{H}\mathbf{F}_{i}\mathbf{x}+\mathbf{F}_ {i}\mathbf{n}. \tag{17}\]
The question now becomes how to recover \(\mathbf{x}\).
Solving Eq. (17) would require an optimization. In FIO-Net, we consider a generic regularized least squares:
\[\widehat{\mathbf{x}}=\underset{\mathbf{x}}{\text{argmin}}\sum_{i=1}^{M}\lVert \mathbf{F}_{i}\mathbf{y}-\alpha\mathbf{H}\mathbf{F}_{i}\mathbf{x}\rVert^{2}+ \lambda g(\mathbf{x}). \tag{18}\]
where \(g(\mathbf{x})\) is the prior. Since an unconstrained optimization problem with a sum of two different functions is difficult to optimize, we split the original problem into two simpler sub-problems. We introduce a set of new variables \(\{\mathbf{z}_{i}=\mathbf{F}_{i}\mathbf{x},i=1,2,\ldots M\}\), and collectively define \(\mathbf{z}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{M}\}\). The new constrained optimization problem now becomes
\[\{\widehat{\mathbf{x}},\widehat{\mathbf{z}}\}= \underset{\mathbf{x},\mathbf{z}}{\text{argmin}}\sum_{i=1}^{M} \Big{\{}\lVert\mathbf{F}_{i}\mathbf{y}-\alpha\mathbf{H}\mathbf{z}_{i}\rVert^{2 }+\lambda g(\mathbf{x})\Big{\}}\] \[\text{subject to}\ \ \mathbf{z}_{i}=\mathbf{F}_{i}\mathbf{x},\ i=1,2, \ldots,M. \tag{19}\]
Eq. (19) is a standard optimization that can be solved using the half-quadratic splitting (HQS) [18]. HQS formulates an alternative optimization:
\[\{\widehat{\mathbf{x}},\widehat{\mathbf{z}}\}=\underset{\mathbf{ x},\mathbf{z}}{\text{argmin}}\sum_{i=1}^{M} \Big{\{}\lVert\mathbf{F}_{i}\mathbf{y}-\alpha\mathbf{H}\mathbf{z}_{i}\rVert^{2 }+\lambda g(\mathbf{x})\] \[+\mu_{i}\lVert\mathbf{F}_{i}\mathbf{x}-\mathbf{z}_{i}\rVert^{2 }\Big{\}}, \tag{20}\]
where \(\mu_{i}\) is the penalty strength.
In what follows, we briefly summarize the equations to solve Eq. (IV-A). During the discussion, we will explain how the secrets are used. The algorithm to solve Eq. (IV-A) involve two steps:
\[\mathbf{z}_{i}^{k}=\underset{\mathbf{z}_{i}}{\text{argmin}}\lVert \mathbf{F}_{i}\mathbf{y}-\mathbf{H}\mathbf{z}_{i}\rVert^{2}+\mu_{i}^{k}\lVert \mathbf{F}_{i}\mathbf{x}^{k-1}-\mathbf{z}_{i}\rVert^{2} \tag{21}\] \[\mathbf{x}^{k}=\underset{\mathbf{x}}{\text{argmin}}\sum_{i=1}^{M} \mu_{i}^{k}\lVert\mathbf{F}_{i}\mathbf{x}-\mathbf{z}_{i}^{k}\rVert^{2}+ \lambda g(\mathbf{x}), \tag{22}\]
Fig. 15: **Secret 5: Hyper-parameter learning is recommended. We can use heuristics or train a network to select the hyper-parameters.**
where we use the fact that the optimization of \(\mathbf{z}_{i}\) in Eq. (20) is separable so that we can solve for individual \(\mathbf{z}_{i}\)'s.
Next, we apply Secret 5 which says that we should learn the hyperparameters end-to-end. Thus, we replace the penalty \(\mu_{i}\) with \(\mu_{i}^{k}\) so that they change over iterations. Similar to [28], we use a small fully connected neural network for estimating the hyperparameters \(\mu_{i}^{k}\) with the kernel \(\mathbf{H}\) and the photon level \(\alpha\) used as the input.
Let's solve Eq. (21) and Eq. (22). Eq. (21) is a least squares minimization problem, and it has a closed form expression given by
\[\mathbf{z}_{i}^{k}=(\mathbb{I}+\mu_{i}^{k}\mathbf{H}^{T}\mathbf{H})^{-1}( \mathbf{F}_{i}\mathbf{x}^{k-1}+\mu_{i}^{k}\mathbf{H}^{T}\mathbf{F}_{i}\mathbf{ y}). \tag{23}\]
Assuming that the convolution operation represented by \(\mathbf{H}\) is carried out with circular boundary conditions, Eq. (23) has a FFT based solution given by
\[\mathbf{z}_{i}^{k}=\mathcal{F}^{-1}\left[\frac{\mathcal{F}(\mathbf{F}_{i} \mathbf{x}^{k-1})+\mu_{i}^{k}\overline{\mathcal{F}(\mathbf{H})}\cdot\mathcal{ F}(\mathbf{F}_{i}\mathbf{y})}{1+\mu_{i}^{k}|\mathcal{F}(\mathbf{H})|^{2}} \right], \tag{24}\]
where \(\mathcal{F}(\cdot)\) and \(\mathcal{F}^{-1}(\cdot)\) denote the FFT and inverse FFT respectively, and \((\cdot)\) denotes the complex conjugate function. Following the idea of [14], we replace the linear filters with learnable non-linear convolutional neural network \(\mathcal{D}_{\text{feat}}(\cdot)\). Similar to [14], we note that while the solution Eq. (24) was obtained for linear filters, using non-linear neural networks works well and even better than linear filters. Therefore, Eq. (24) is modified as
\[\mathbf{z}_{i}^{k}=\mathcal{F}^{-1}\left[\frac{\mathcal{F}(\mathcal{D}_{i}^{ \text{feat}}(\mathbf{x}_{k-1}))+\mu_{i}^{k}\overline{\mathcal{F}(\mathbf{H}) }\cdot\mathcal{F}(\mathcal{D}_{i}^{\text{feat}}(\mathbf{y}))}{1+\mu_{i}^{k}| \mathcal{F}(\mathbf{H})|^{2}}\right], \tag{25}\]
where \(\{\mathcal{D}_{1}^{\text{feat}}(\cdot),\dots,\mathcal{D}_{M}^{\text{feat}}( \cdot)\}=\mathcal{D}^{\text{feat}}(\cdot)\) represents the features generated by the neural network.
The other subproblem in Eq. (22), in the absence of the filters \(\mathbf{F}_{i}\), can be thought of as a image denoising problem [59]. However, the presence of the filters makes this problem not so straight-forward. However, we can still think of Eq. (22) as a restoration task where we want to recover the image \(\mathbf{x}\) from a set of features \(\mathbf{z}_{i}\). We want to minimize the residue between the input features and the features generated from \(\mathbf{x}\), while enforcing the prior \(g(\mathbf{x})\). Given the complex nature of restoring the image from a set of features, and the difficulty of defining a good prior term \(g(\mathbf{x})\), we propose to solve this problem using a convolutional neural network as
\[\mathbf{x}^{k}=\mathcal{D}_{\text{refine}}\left(\mathbf{z}_{1}^{k},\dots, \mathbf{z}_{M}^{k},\frac{\lambda}{\mu^{k}}\right), \tag{26}\]
where we have assumed that the penalties \(\mu_{i}^{k}=\mu^{k}\) do not vary over the features. The entire algorithm is summarized in algorithm 1.
```
1:Input: Degraded Image \(\mathbf{y}\), Kernel \(\mathbf{H}\), Photon level \(\alpha\)
2:\(\mathbf{x}^{0}\leftarrow\mathbf{y}\)
3:\(\{\mathcal{F}_{i}^{\mathbf{y}}\}_{i=1,\dots,M}=\mathcal{D}^{\text{feat}}( \mathbf{y})\)\(\triangleright\) Feature extraction from \(y\)
4:\(\{\mu_{k}\}_{k=1,\dots,K}=\mathcal{D}^{\text{hyp}}(\mathbf{H},\alpha)\)\(\triangleright\) Hyperparameters from \(\mathcal{D}^{\text{hyp}}(\cdot)\)
5:for\(k=1,2,\cdot\cdot,K\)do
6: Update \(\mathbf{z}_{i}^{k}\) using Equation (24)
7: Update \(\mathbf{x}^{k}\) using Equation (26)
8:endfor
9:return \(\mathbf{x}^{K}\)
```
**Algorithm 1**_FIO-Net_: Fixed Iteration Unrolling
The method is iterative, based on unrolled optimization that uses the convolutional neural networks only for image refinement and a traditional FFT based method is used for deconvolution. The iterative scheme described in Algorithm 1 is unrolled for K = 8 iterations and then trained end-to-end using the same training process as that described in Section III-A. The method incorporates the idea of deconvolving in the feature space, and does not have any specific Poisson design.
## V Experiments
After elaborating on the proposed method, we present the quantitative results on BSD300 dataset in Table VI. We use the same testing process as described in Section III-A so that the testing conditions are fair to all methods. We make three comments:
* Compared to classical methods such as PURE-LET and VSTP, FIO-Net outperforms by a big margin. This should not be a surprise, because all deep learning methods outperform these two classical methods.
Fig. 16: **Schematic diagram of FIO-Net. The Five-in-One Network (FIO-Net) utilizes all the five secrets we observed in the previous section. It is an iterative scheme performing deconvolution in the feature space. It uses Wiener filter, but no Poisson likelihood. Hyperparameters are automatically tuned.**
* Compared to a single-pass deep learning method DWDN, the performance of FIO-Net is substantially better, especially for bigger blur kernels. This stresses the importance of iterative methods.
* Compared to PhD-Net and USR-Net, the performance of FIO-Net is marginal. This is caused by the fact that some of the attributes have overlapping influences, e.g., feature space and iteration. While Secret 3 says that feature space deconvolution could help single-iteration methods, its impact may be diminished when more iterations are used.
In Figure 17 we show the visual comparisons. The visual comparisons apparently show another perspective of FIO-Net. If we compare USRNet, PhDNet, and FIO-Net, we see that all three perform similarly. However, as we zoom in to see the details, e.g., the lines on the roof in the first image, the bars on the windows in the second image, and the tail of the alphabet in the third image, we can see the visual improvement of FIO-Net. We remark that all models are trained using the exact same training dataset and tested on the same testing dataset. Therefore, the restored details are due to the network itself rather than data overfitting.
## VI Conclusion
With the growth of photon-limited imaging applications, we recognize the importance of understanding the performance limits of Poisson deconvolution algorithms. To this end, we present a systematic analysis of a large number of existing non-blind Poisson deconvolution methods. Based on this analysis, we deduce five "secrets" that are needed for an effective non-blind Poisson deconvolution algorithm design:
1. Use Wiener filter for spatially invariant blur
2. Use iterative neural networks instead of single forward-pass neural networks
3. Use feature space deblurring instead of image space deblurring
4. Do not incorporate Poisson likelihood in the network architecture design
5. Learn hyperparameters for iterative algorithms in an end-to-end manner.
By combining these five secrets, we obtain a proof-of-concept named the Five-In-One Network (FIO-Net). The results offered by FIO-Net are consistent with the five secrets we presented. Considering that FIO-Net is not a novel design but a combination of five existing ideas, the consistency and the on-par performance with the state-of-the-art result provide additional support to our findings.
|
2309.08044 | How many Neurons do we need? A refined Analysis for Shallow Networks
trained with Gradient Descent | We analyze the generalization properties of two-layer neural networks in the
neural tangent kernel (NTK) regime, trained with gradient descent (GD). For
early stopped GD we derive fast rates of convergence that are known to be
minimax optimal in the framework of non-parametric regression in reproducing
kernel Hilbert spaces. On our way, we precisely keep track of the number of
hidden neurons required for generalization and improve over existing results.
We further show that the weights during training remain in a vicinity around
initialization, the radius being dependent on structural assumptions such as
degree of smoothness of the regression function and eigenvalue decay of the
integral operator associated to the NTK. | Mike Nguyen, Nicole Mücke | 2023-09-14T22:10:28Z | http://arxiv.org/abs/2309.08044v1 | # How many Neurons do we need?
###### Abstract
We analyze the generalization properties of two-layer neural networks in the neural tangent kernel (NTK) regime, trained with gradient descent (GD). For early stopped GD we derive fast rates of convergence that are known to be minimax optimal in the framework of non-parametric regression in reproducing kernel Hilbert spaces. On our way, we precisely keep track of the number of hidden neurons required for generalization and improve over existing results. We further show that the weights during training remain in a vicinity around initialization, the radius being dependent on structural assumptions such as degree of smoothness of the regression function and eigenvalue decay of the integral operator associated to the NTK.
**Keywords:** Neural Tangent Kernel \(\bullet\) Early Stopping \(\bullet\) Gradient Descent
## 1 Introduction
The rapid advancement of artificial intelligence in recent years has been largely propelled by the remarkable capabilities of neural networks. These computational models have revolutionized numerous domains, including image recognition, natural language processing, and autonomous systems. The effectiveness of neural networks in solving complex tasks has led to their widespread adoption across academia and industry.
Understanding why standard optimization algorithms often find globally optimal solutions despite the intricate non convexity of training loss functions has become a focal point of research. Furthermore, deep neural networks, despite their vast number of parameters, tend to exhibit impressive generalization capabilities, achieving high accuracy on unseen data [11]. These optimization and generalization phenomena lie at the heart of deep learning theory, presenting fundamental challenges.
In this paper, we explore the learning properties of shallow neural networks in the NTK regime, when trained with gradient descent. It is studied in [12], how wide neural networks behave
during gradient descent training. In the limit of infinite width, these networks can be approximated as linear models via the first-order Taylor expansion around their initial parameters. Moreover, [16] established that training a neural network with a specific parameterization is equivalent to employing a kernel method as the network's width approaches infinity. It is shown that gradient flow (GF) on the parameter space becomes kernel GF on the function space. This explains why local minima of the train error become global minima in the infinite width limit, see also [14, 15].
A line of research [17, 1, 18, 19, 2], ADH\({}^{+}\)19a, ADH\({}^{+}\)19b, LCRN22, CCGZ20] explored this kernel method analogy and demonstrated that, with adequate over-parameterization, a certain initialization scale, and an appropriate learning rate schedule, gradient descent effectively learns a linear classifier on top of the initial random features.
The works [19, 1, 10, 11] investigate gradient descent convergence to global minima. They demonstrate that for i.i.d. Gaussian initialization, wide networks experience minimal parameter changes during training. This is key to the phenomenon where wide neural networks exhibit linear behavior in terms of their parameters during training. For a survey we also refer to [11].
Of particular interest is establishing optimal bounds for the generalization error with a minimal number of neurons required. Compared to the number of results for the train error, only a few investigations can be found for generalization, see e.g. [11, 1, 12] for shallow neural networks and [15, 10] for deep neural networks.
In [14], the authors prove for GF that \(O(n)\) many neurons are sufficient to obtain an optimal generalization bound of order \(O(\sqrt{n})\). However they only trained the outer layer and had restrictive assumptions on the target function. The authors in [15] prove fast rates of convergence for SGD in the case that the regression function belongs to the reproducing kernel Hilbert space (RKHS) associated to the NTK. However, exponentially many neurons are needed to obtain this result. Also closely related to our work is the work [1], analyzing the \(L^{2}\)-error of neural network regression estimates with one hidden layer with a logistic squasher, trained with gradient descent, under a smoothness assumption on the Fourier transform of the regression function. The authors show a rate of convergence of \(\sqrt{n}\), up to a polylogarithmic factor after \(n^{7/4}\) iterations (up to a polylog factor). In order to achieve this, the number of hidden neurons increases as \(\sqrt{n}\). We refer to Table 1 for a comparison.
**Our contribution.** We improve the above results in different directions:
* We derive an early stopping time \(T_{n}=\mathcal{O}\Big{(}n^{\frac{1}{2r+b}}\Big{)}\) that leads to minimax optimal rates of convergence. This depends on the smoothness \(r>0\) of the regression function and the (polynomial) decay rate \(b\in(0,1]\) of the eigenvalues of the kernel integral operator associated to the NTK. Our results hold for so called _easy learning_ problems, where \(2r+b>1\).
* We present a refined number of neurons that are needed for optimality of order \(M_{n}\geq O\Big{(}n^{\frac{2r}{2r+b}}\Big{)}=O(T_{n}^{2r})\) for \(r\geq\frac{1}{2}\), i.e. for well-specified cases where the regression function belongs to the RKHS and \(M_{n}\geq O\Big{(}n^{\frac{3-4r}{2r+b}}\Big{)}=O(T_{n}^{3-4r})\) for \(r\in(0,\frac{1}{2})\). The latter includes the case where the regression does not necessarily belongs to the RKHS associated to the NTK.
* We also overcome the saturation effect appearing in [15] by providing fast rates of convergence for smooth objectives, i.e. for \(r>1\).
* Furthermore, we prove that during GD with constant step size, in the well-specified case, the weights stay bounded if \(M\geq O(T_{n}^{2r})\) (for \(r\geq 1/2\)), up to a logarithmic factor. If \(r\leq 1/2\), the weights are in a ball of radius \(O(T_{n}^{1/2-r})\), up to a logarithmic factor. To the best of our knowledge, previous work only had been able to bound the weights with an decaying step size or with exponentially many neurons.
* Notably, the number of hidden neurons that are sufficient to establish our results is comparable to the number of random features for learning in RKHSs, see e.g. [14, 15, 16, 17].
**Organization.** In Section 2 we define the mathematical framework needed to present our main results in Section 3. We defer all proofs to the Appendices.
**Notation.** For two Hilbert spaces \(\mathcal{H}_{1},\mathcal{H}_{2}\) and a linear operator \(A:\mathcal{H}_{1}\to\mathcal{H}_{2}\), we write \(A^{*}:\mathcal{H}_{2}\to\mathcal{H}_{1}\) to denote the adjoint operator. If \(\theta\in\mathcal{H}_{1}\) we write \(\theta\otimes\theta:=\langle\cdot,\theta\rangle\theta\) to denote the tensor product. For \(n\in\mathbb{N}\), we write \([n]=\{1,...,n\}\). For two positive sequences \((a_{n})_{n}\), \((b_{n})_{n}\) we write \(a_{n}\lesssim b_{n}\) if \(a_{n}\leq cb_{n}\) for some \(c>0\) and \(a_{n}\simeq b_{n}\) if both \(a_{n}\lesssim b_{n}\) and \(b_{n}\lesssim a_{n}\). If \(\mu\) is a finite measure on some set \(\mathcal{X}\), we denote the \(L^{2}(\mathcal{X},\mu)\)-norm of a function \(f:\mathcal{X}\to\mathbb{R}\) by \(||f||_{L^{2}}:=\left(\int_{\mathcal{X}}|f(x)|^{2}d\mu\right)^{1/2}\). For a finite set \(\{x_{1},...,x_{n}\}\subset\mathcal{X}\) we denote the empirical \(L^{2}\)-norm as \(||f||_{n}:=\left(\frac{1}{n}\sum_{j=1}^{n}|f(x_{j})|^{2}\right)^{\frac{1}{2}}\).
## 2 Setup
In this section we provide the mathematical framework for our analysis.
### Two-Layer Neural Networks
We let \(\mathcal{X}\subset\mathbb{R}^{d}\) be the input space and \(\mathcal{Y}\subseteq[-C_{Y},C_{Y}]\), \(C_{Y}>0\), be the output space. The unknown data distribution on the data space \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) is denoted by \(\rho\) while the marginal distribution on \(\mathcal{X}\) is denoted as \(\rho_{X}\) and the regular conditional distribution on \(\mathcal{Y}\) given \(x\in\mathcal{X}\) is denoted by \(\rho(\cdot|x)\), see e.g. [18].
Given a measurable function \(g:\mathcal{X}\to\mathbb{R}\) we further define the expected risk as
\[\mathcal{E}(g):=\mathbb{E}[\ell(g(X),Y)]\, \tag{2.1}\]
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline References & Width \(M\) & Iterations \(T\) & Method \\ \hline \hline
[14][WEW20] & \(O(n)\) & \(O(\sqrt{n})\) & GF \\ \hline
[14][NS20] & \(O(\exp(n))\) & \(O(n)\) & SGD \\ \hline
[1][BKLW23] & \(O(\sqrt{n})\) & \(O\big{(}n^{7/4}\big{)}\) & GD \\ \hline \hline Our work & \(O(\sqrt{n})\) & \(O(\sqrt{n})\) & GD \\ \hline \end{tabular}
\end{table}
Table 1: Number of neurons and iterations needed to provide a generalization bound of order \(O(n^{-\frac{1}{2}})\).
where the expectation is taken w.r.t. the distribution \(\rho\) and \(\ell:\mathbb{R}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\) is the least-squares loss \(\ell(t,y)=\frac{1}{2}(t-y)^{2}\). It is known that the global minimizer of \(\mathcal{E}\) over the set of all measurable functions is given by the regression function \(g_{\rho}(x)=\int_{\mathcal{Y}}y\rho(dy|x)\).
The hypothesis class considered in this paper is given by the following set of two-layer neural networks: Let \(M\in\mathbb{N}\) be the network width, i.e. the number of hidden neurons. The network parameters are denoted by \(a=(a_{1},...,a_{M})^{T}\in\mathbb{R}^{M}\), the parameter of the input layer are denoted by \(B=(b_{1},...,b_{M})\in\mathbb{R}^{d\times M}\) and \(c=(c_{1},...,c_{M})^{T}\in\mathbb{R}^{M}\) is the bias. We condense all parameters in \(\theta=(a,B,c)\in\Theta\), with \(\Theta=\mathbb{R}^{M}\times\mathbb{R}^{d\times M}\times\mathbb{R}^{M}\) being the parameter space with the euclidean vector norm
\[||\theta||_{\Theta}^{2}=||a||_{2}^{2}+\sum_{m=1}^{M}||b_{m}||_{2}^{2}+||c||_{2 }^{2}=||a||_{2}^{2}+||B||_{F}^{2}+||c||_{2}^{2},\]
for any \(\theta\in\Theta\).
Given an activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), we finally consider the class
\[\mathcal{F}_{M} :=\Big{\{}g_{\theta}:\mathcal{X}\rightarrow\mathbb{R}\;:\;g_{ \theta}(x)=\frac{1}{\sqrt{M}}\sum_{m=1}^{M}a_{m}\sigma(\langle b_{m},x\rangle+ \gamma c_{m})\;,\] \[\qquad\qquad\theta=(a,B,c)\in\mathbb{R}^{M}\times\mathbb{R}^{d \times M}\times\mathbb{R}^{M}\;,\gamma\in[0,1]\big{\}}\;.\]
The activation \(\sigma\) is supposed to satisfy the following assumption:
**Assumption 2.1** (Activation Function).: _the activation \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is two times differentiable with Lipschitz continuous second derivative and there exists a constant \(C_{\sigma}<\infty\), such that \(\|\sigma^{\prime}\|_{\infty}\leq C_{\sigma}\), \(\|\sigma^{\prime\prime}\|_{\infty}\leq C_{\sigma}\)._
Our goal is to minimize the expected risk (2.1) over the set \(\mathcal{F}_{M}\), i.e.
\[\min_{g\in\mathcal{F}_{M}}\mathcal{E}(g)\;.\]
Here, the distribution \(\rho\) is known only through an i.i.d. sample \(((x_{1},y_{1}),...,(x_{n},y_{n}))\in(\mathcal{X}\times\mathcal{Y})^{n}\). Hence, we seek for a solution for the empirical risk minimization problem
\[\min_{g\in\mathcal{F}_{M}}\hat{\mathcal{E}}(g)\;,\quad\hat{\mathcal{E}}(g)= \frac{1}{n}\sum_{j=1}^{n}\ell(g(x_{j},y_{j}))\;.\]
We aim at analyzing the generalization properties of gradient descent, whose basic iterations are given by
\[\theta_{t+1} =\theta_{t}-\alpha\nabla_{\theta}\hat{\mathcal{E}}(g_{\theta_{t}})\] \[=\theta_{t}-\frac{\alpha}{n}\sum_{j=1}^{n}\ell^{\prime}(g_{ \theta_{t}}(x_{j}),y_{j})\nabla g_{\theta_{t}}(x_{j})\;,\]
with \(\alpha>0\) being the stepsize and for some initialization \(\theta_{0}\in\Theta\).
**Initialization.** Similar as in [20] we assume a symmetric initialization. The parameters for the output layer are initialized as \(a_{m}^{(0)}=\tau\) for \(m\in\{1,\ldots,M/2\}\) and \(a_{m}^{(0)}=-\tau\) for
\(\{M/2+1,\ldots,M\}\), for some \(\tau>0\). Let \(\mu_{0}\) be a uniform distribution on the sphere \(\mathbb{S}^{d-1}_{1}=\{b\in\mathbb{R}^{d}\mid\|b\|_{2}=1\}\subset\mathbb{R}^{d}\). The parameters for the input layer are initialized as \(b_{m}^{(0)}=b_{m+M/2}^{(0)}\) for \(m\in\{1,\ldots,M/2\}\), where \((b_{m}^{(0)})_{m=1}^{M/2}\) are independently drawn from the distribution \(\mu_{0}\). The bias parameters are initialized as \(c_{m}^{(0)}=0\) for \(m\in\{1,\ldots,M\}\). The aim of the symmetric initialization is to make an initial function \(g_{\theta_{0}}=0\), where \(\theta_{0}=\big{(}a^{(0)},B^{(0)},c^{(0)}\big{)}\). Note that this symmetric trick does not have an impact on the limiting NTK, see [20], and is just for theoretical simplicity. Indeed, we can relax the symmetric initialization by considering an additional error stemming from the nonzero initialization in the function space.
### Neural Tangent Kernel
The connection between kernel methods and neural networks is established via the NTK, see [1, 1]. The gradient of \(g_{\theta}\) w.r.t. \(\theta\) at initialization \(\theta_{0}\in\Theta\) defines a feature map \(\Phi_{M}:\mathcal{X}\to\Theta\) by
\[\Phi_{M}(x)=\nabla_{\theta}g_{\theta}(x)\mid_{\theta=\theta_{0}}\.\]
This defines a kernel via
\[K_{M}(x,x^{\prime}) =\big{\langle}\Phi_{M}(x),\Phi_{M}(x^{\prime})\big{\rangle}_{\Theta}\] \[=\frac{1}{M}\sum_{r=1}^{M}\sigma\Big{(}b_{r}^{(0)\top}x\Big{)} \sigma\Big{(}b_{r}^{(0)\top}x^{\prime}\Big{)}+\frac{\big{(}x^{\top}x^{\prime} +\gamma^{2}\big{)}}{M}\sum_{r=1}^{M}(a_{r}^{(0)})^{2}\sigma^{\prime}\Big{(}b_ {r}^{(0)\top}x\Big{)}\sigma^{\prime}\Big{(}b_{r}^{(0)\top}x^{\prime}\Big{)}\;,\]
for any \(x,x^{\prime}\in\mathcal{X}\). By [2, Theorem 4.21], this kernel defines a unique RKHS \(\mathcal{H}_{M}\), given by
\[\mathcal{H}_{M}=\{h:\mathcal{X}\to\mathbb{R}|\ \exists\,\theta\in\Theta\ \text{ s.t. }\ h(x)=\langle\nabla g_{\theta_{0}}(x),\theta\rangle_{\Theta}\}\;,\]
Note that by the assumptions 2.1 and 3.1, we see \(K_{M}(x,x^{\prime})\leq 4+2c_{\sigma}^{2}\tau^{2}\eqqcolon\kappa^{2}\) for any \(x,x^{\prime}\in\operatorname{supp}(\rho_{X})\).
We can consider \(K_{M}\) as a random approximation of a kernel \(K_{\infty}\), the neural tangent kernel (NTK). This is defined as a proper limit as \(M\to\infty\):
\[K_{\infty}\big{(}x,x^{\prime}\big{)}\coloneqq\mathbb{E}_{b^{(0)}}\Big{[} \sigma\Big{(}b^{(0)\top}x\Big{)}\sigma\Big{(}b^{(0)\top}x^{\prime}\Big{)} \Big{]}+\tau^{2}\Big{(}x^{\top}x^{\prime}+\gamma^{2}\Big{)}\mathbb{E}_{b^{(0)} }\Big{[}\sigma^{\prime}\Big{(}b^{(0)\top}x\Big{)}\sigma^{\prime}\Big{(}b^{(0) \top}x^{\prime}\Big{)}\Big{]},\]
see [1] for more information. Again, this kernel defines a unique RKHS \(\mathcal{H}_{\infty}\).
## 3 Main Results
### Assumptions and Main Results
In this section we formulate our assumptions and state our main results.
**Assumption 3.1** (Data Distribution).: _We assume that \(|Y|\leq C_{Y}\) almost surely, for some \(C_{Y}<\infty\)._
We let \(\mathcal{L}_{\infty}:L^{2}(\mathcal{X},\rho_{X})\to L^{2}(\mathcal{X},\rho_{X})\) denote the kernel integral operator associated to the NTK \(K_{\infty}\). Note that \(\mathcal{L}_{\infty}\) is bounded and self-adjoint. Moreover, it is compact and thus has discrete spectrum \(\{\mu_{j}\}_{j}\), with \(\mu_{j}\to 0\) as \(j\to\infty\). By our assumptions, \(\mathcal{L}_{\infty}\) is of trace-class, i.e., has summable eigenvalues.
**Assumption 3.2** (Source Condition).: _Let \(R>0\), \(r\geq 0\). We assume_
\[g_{\rho}=\mathcal{L}_{\infty}^{r}h_{\rho}\;, \tag{3.1}\]
_for some \(h\in L^{2}(\mathcal{X},\rho_{X})\), satisfying \(||h_{\rho}||_{L^{2}}\leq R\)._
This assumption characterizes the hypothesis space and relates to the regularity of the regression function \(g_{\rho}\). The bigger \(r\) is, the smaller the hypothesis space is, the stronger the assumption is, and the easier the learning problem is, as \(\mathcal{L}_{\infty}^{r_{1}}\big{(}L^{2}\big{)}\subseteq\mathcal{L}_{\infty} ^{r_{2}}\big{(}L^{2}\big{)}\) if \(r_{1}\geq r_{2}\). Note that \(g_{\rho}\in\mathcal{H}_{\infty}\) holds for all \(r\geq\frac{1}{2}\).
The next assumption relates to the capacity of the hypothesis space.
**Assumption 3.3** (Effective Dimension).: _For any \(\lambda>0\) we assume_
\[\mathcal{N}_{\mathcal{L}_{\infty}}(\lambda):=\operatorname{tr}\bigl{(} \mathcal{L}_{\infty}(\mathcal{L}_{\infty}+\lambda I)^{-1}\bigr{)}\leq c_{b} \lambda^{-b}, \tag{3.2}\]
_for some \(b\in[0,1]\) and \(c_{b}>0\)._
The number \(\mathcal{N}_{\mathcal{L}_{\infty}}(\lambda)\) is called _effective dimension_ or _degrees of freedom_[10]. It is related to covering/entropy number conditions, see [10]. The condition (3.2) is naturally satisfied with \(b=1\), since \(\mathcal{L}_{\infty}\) is a trace class operator which implies that its eigenvalues \(\{\mu_{i}\}_{i}\) satisfy \(\mu_{i}\lesssim i^{-1}\). Moreover, if the eigenvalues of \(\mathcal{L}_{\infty}\) satisfy a polynomial decaying condition \(\mu_{i}\sim i^{-c}\) for some \(c>1\), or if \(\mathcal{L}_{\infty}\) is of finite rank, then the condition (3.2) holds with \(b=1/c\), or with \(b=0\). The case \(b=1\) is referred to as the capacity independent case. A smaller \(b\) allows deriving faster convergence rates for the studied algorithms.
**Analysis of NTK Spectrum.** It is known that a certain eigenvalue decay of the kernel integral operator \(\mathcal{L}_{\infty}\) implies a bound on the effective dimension. Thus, assumptions on the decay of the effective dimension directly relate to the approximation ability of the underlying RKHS, induced by the NTK.
So far, only a few results are known that shed light on the RKHSs that are induced by specific NTKs and activation functions. The authors in [1] analyze the inductive bias of learning in the NTK regime by analyzing the NTK and the associated function space. They characterize the RKHS of the NTK for two-layer ReLU networks by providing a spectral decomposition of the kernel. This decomposition is based on a Mercer decomposition in the basis of spherical harmonics. Their analysis reveals a polynomial decay of eigenvalues, which leads to improved approximation properties compared to other function classes based on the ReLU activation.
The authors in [11], [12] show that the NTK for fully connected networks with ReLU activation is closely related to the standard Laplace kernel. For normalized data on the hypersphere both kernels have the same eigenfunctions and their eigenvalues decay polynomially at the same rate, implying that their RKHSs include the same sets of functions. Finally, [1] show that for ReLU activations, the kernels derived from deep fully-connected networks have essentially the same approximation properties as their shallow two-layer counterpart, namely
the same eigenvalue decay for the corresponding integral operator.
Little is known for other activations beyond ReLU. A notable exception is [13] that studies the eigenvalue distributions of the finite-width Conjugate Kernel and of the finite-width NTK associated to multi-layer feedforward neural networks for twice differentiable activations. In an asymptotic regime, where network width is increasing linearly in sample size, under random initialization of the weights, and for input samples satisfying a notion of approximate pairwise orthogonality, they show that the eigenvalue distributions of the CK and NTK converge to deterministic limits.
**Rates of convergence.** Our first general result establishes an upper bound for the excess risk in terms of the stopping time \(T\) and the number of neurons \(M\), under the assumption that the weights remain in a vicinity of the initialization. The proof is outlined in Section 3.2 and further detailed in Section B.
**Theorem 3.4**.: _Suppose Assumptions 2.1, 3.1, 3.2 and 3.3 are satisfied. Assume further that \(\alpha\in(0,\kappa^{-2})\). Let \((\varepsilon_{T})_{T\geq 2}\) be a decreasing sequence of positive real numbers. Assume that for all \(M\geq\widetilde{M}_{0}(\delta,T)\), with probability at least \(1-\delta\)_
\[\forall\;\;t\in[T]\;:\;\;\|\theta_{t}-\theta_{0}\|_{\Theta}\leq B_{\tau}( \delta,T)\;. \tag{3.3}\]
_There exist an \(M_{0}:=M_{0}(\delta,\varepsilon_{T},d)>0\) and \(n_{0}(\delta)\in\mathbb{N}\), such that for all \(n\geq n_{0}\) and \(M\geq M_{0}\), with probability at least \(1-\delta\) we have_
\[\|g_{\theta_{T}}-g_{\rho}\|_{L_{2}(\rho_{x})}\leq\frac{C_{\sigma}B_{\tau}^{3} (\delta,T)}{\sqrt{M}}+\varepsilon_{T}+C\cdot\log^{3}(6/\delta)\;T^{-r}\;, \tag{3.4}\]
_with \(C<\infty\), \(C_{\sigma}<\infty\) independent of \(n,M,T\)._
We immediately can derive the rates of convergence by balancing the terms on the right hand side in (3.4).
**Corollary 3.5** (Rate of Convergence).: _Let the assumptions of Theorem 3.4 be satisfied and choose \(\varepsilon_{T}=T^{-r}\), \(T_{n}=n^{\frac{1}{2r+b}}\) and \(2r+b>1\). There exist an \(n_{0}\in\mathbb{N}\), depending on \(\delta,r,b\), such that for all \(n\geq n_{0}\), with probability at least \(1-\delta\) we have_
\[\|g_{\theta_{T_{n}}}-g_{\rho}\|_{L_{2}(\rho_{x})}\leq C\cdot\log^{3}(6/\delta )\;n^{-\frac{r}{2r+b}}\;,\]
_provided that_
\[M\geq d^{\frac{5}{2}}\tilde{C}\cdot\log^{6}(T_{n})\cdot\begin{cases}\log^{10} (96/\delta)\;T_{n}^{3-4r}&:r\in(0,\tfrac{1}{2})\\ T_{n}^{2r}&:r\in[\tfrac{1}{2},\infty)\;.\end{cases}\]
_Here, the constants \(C<\infty\), \(\tilde{C}<0\) depend on \(\kappa,\alpha,r,b\), but not on \(n\)._
Up to a logarithmic factor, the rate of convergence in Corollary 3.5 is known to be minimax optimal in the RKHS framework [12, 13]. Compared to [25], who establish rates of convergence in the same setting for SGD, we are able to circumvent the saturation observed
there, i.e. our result holds for any \(r>0\), satisfying the constraint \(2r+b>1\) (the _easy learning regime_). In contrast, the rates in [20] are optimal only in the case where \(r\in[1/2,1]\).
Notably, the number of hidden neurons that are sufficient to establish this rate is comparable to the number of random features for learning in RKHSs, see e.g. [14, 15, 16].
**The weights barely move.** Our next result shows that the Assumption (3.3) is indeed satisfied and the weights remain in a vicinity of the initialization \(\theta_{0}\). The proof is provided in Appendix C.
**Theorem 3.6** (Bound for the Weights).: _Let \(\delta\in(0,1]\) and \(T\geq 3\). There exists an \(\widetilde{M}_{0}(\delta,T)\in\mathbb{N}\), defined in (C.16), such that for all \(M\geq\widetilde{M}_{0}(\delta,T)\), with \(\rho^{\otimes n}\)-probability at least \(1-\delta\) it holds_
\[\forall\;\;t\in[T]\;:\;\;\;\|\theta_{t}-\theta_{0}\|_{\Theta}\leq B_{\tau}\;,\]
_where_
\[B_{\tau}:=B_{\tau}(\delta,T):=80\cdot\log(T)\cdot\mathcal{B}_{\delta}(1/T)\;,\]
_with_
\[\mathcal{B}_{\delta}(\lambda):=\frac{3}{2}+14\kappa\log\biggl{(}\frac{60}{ \delta}\biggr{)}\sqrt{\frac{\mathcal{N}_{\mathcal{L}_{\infty}}(\lambda)\log( 60/\delta)}{\lambda n}}\]
_and for any \(n\geq\tilde{n}_{0}\), given in (C.17)._
**Corollary 3.7** (Refined Bounds).: _Suppose the assumptions of Theorem 3.6 are satisfied. Let \(\lambda_{n}=T_{n}^{-1}\), with \(T_{n}=n^{\frac{1}{2r+\delta}}\), \(2r+b>1\) and set \(\varepsilon_{T}=T^{-r}\)._
1. _Let_ \(r\geq\frac{1}{2}\) _and_ \(n\geq n_{0}\)_, for some_ \(n_{0}\in\mathbb{N}\) _depending on_ \(\delta,r,b\)_. With probability at least_ \(1-\delta\)__ \[\sup_{t\in[T]}||\theta_{t}-\theta_{0}||_{\Theta}\leq 160\cdot\log(T_{n})=160 \cdot\log(n^{\frac{1}{2r+\delta}})\;.\] (3.5) _The number of neurons required_1 _is_ Footnote 1: We can improve the factor of \(d^{5}\) at the expense of increasing the sample complexity. \[M\geq C_{\kappa,\sigma,\alpha}\;d^{5}\log^{4}(T_{n})T_{n}^{2r}\;.\]
2. _Let_ \(r\leq\frac{1}{2}\)_. With probability at least_ \(1-\delta\)__ \[B_{\tau}(\delta,T_{n}) \leq 1200\cdot\kappa\log^{3/2}(60/\delta)\;\log(T_{n})\;T_{n}^{1/2 -r}\] \[=1200\cdot\kappa\log^{3/2}(60/\delta)\;\log\Bigl{(}n^{\frac{1}{2r+ \delta}}\Bigr{)}\,n^{\frac{1-2r}{2(2r+b)}}\;.\] _This holds if we choose_ \[M\geq d^{5}\log^{4}(T_{n})T_{n}^{3-4r}\;.\]
### Outline of Proof
Our proof is based on a suitable error decomposition. To this end, we further introduce additional linearized iterates in \(\mathcal{H}_{M}\):
\[f_{t+1}^{M} =f_{t}^{M}-\frac{\alpha}{n}\sum_{j=1}^{n}\ell^{\prime}(f_{t}^{M}(x_ {j}),y_{j})K_{M}(x_{j},\cdot)\;, \tag{3.6}\] \[h_{t} =\left\langle\nabla g_{\theta_{0}}(x),\theta_{t}-\theta_{0} \right\rangle_{\Theta}, \tag{3.7}\]
with initialization \(f_{0}^{M}=h_{0}=0\).
We may split
\[\|g_{\theta_{T}}-g_{\rho}\|_{L^{2}}\leq\|g_{\theta_{T}}-\mathcal{S}_{M}h_{T} \|_{L^{2}}+\|\mathcal{S}_{M}(h_{T}-f_{T}^{M})\|_{L^{2}}+\|\mathcal{S}_{M}f_{T }^{M}-g_{\rho}\|_{L^{2}}\;, \tag{3.8}\]
where \(\mathcal{S}_{M}:\mathcal{H}_{M}\hookrightarrow L^{2}(\mathcal{X},\rho_{X})\) is the inclusion of \(\mathcal{H}_{M}\) into \(L^{2}(\mathcal{X},\rho_{X})\).
For the first error term in (3.8) we use a Taylor expansion in \(\theta_{t}\) around the initialization \(\theta_{0}\). For any \(x\in\mathcal{X}\) and \(t\in[T]\), we have
\[g_{\theta_{t}}(x) =g_{\theta_{0}}(x)+\mathcal{S}_{M}\langle\nabla g_{\theta_{0}}(x ),\theta_{t}-\theta_{0}\rangle_{\Theta}+r_{(\theta_{t},\theta_{0})}(x)\] \[=\mathcal{S}_{M}h_{t}(x)+r_{(\theta_{t},\theta_{0})}(x)\;. \tag{3.9}\]
Here, \(r_{(\theta_{t},\theta_{0})}(x)\) denotes the Taylor remainder and can be uniformly bounded by
\[\|r_{(\theta_{t},\theta_{0})}\|_{\infty}\lesssim B_{\tau}\;\frac{\|\theta_{t} -\theta_{0}\|_{\Theta}^{2}}{\sqrt{M}}\;,\]
as Proposition D.2 shows. This requires the iterates \(\{\theta_{t}\}_{t\in[T]}\) to stay close to the initialization \(\theta_{0}\), i.e.
\[\sup_{t\in[T]}||\theta_{t}-\theta_{0}||_{\Theta}\leq B_{\tau}\;,\]
with high probability, for some \(B_{\tau}<\infty\). We show in Theorem 3.6 that this is satisfied for sufficiently many neurons.
The second error term in (3.8) can be made arbitrarily small, see Theorem B.4. More precisely, there exists a decreasing sequence \(\{\varepsilon_{T}\}_{T}\) of positive real numbers such that
\[\|\mathcal{S}_{M}(h_{T}-f_{T}^{M})\|_{L^{2}}\lesssim\varepsilon_{T}\;,\]
with high probability and for sufficiently many neurons, depending on \(\varepsilon_{T}\).
For the last error term in (3.8) we apply the results in [11] and find that with high probability,
\[\|\mathcal{S}_{M}f_{T}^{M}-g_{\rho}\|_{L^{2}}\lesssim T^{-r}\;,\]
for sufficiently many neurons, see Proposition B.9.
As a result, we arrive at an overall bound of Theorem 3.4
\[\|g_{\theta_{T}}-g_{\rho}\|_{L_{2}(\rho_{x})}\lesssim\frac{B_{\tau}^{3}(\delta,T )}{\sqrt{M}}+\varepsilon_{T}+T^{-r}\;.\]
|
2309.11515 | Towards Differential Privacy in Sequential Recommendation: A Noisy Graph
Neural Network Approach | With increasing frequency of high-profile privacy breaches in various online
platforms, users are becoming more concerned about their privacy. And
recommender system is the core component of online platforms for providing
personalized service, consequently, its privacy preservation has attracted
great attention. As the gold standard of privacy protection, differential
privacy has been widely adopted to preserve privacy in recommender systems.
However, existing differentially private recommender systems only consider
static and independent interactions, so they cannot apply to sequential
recommendation where behaviors are dynamic and dependent. Meanwhile, little
attention has been paid on the privacy risk of sensitive user features, most of
them only protect user feedbacks. In this work, we propose a novel
DIfferentially Private Sequential recommendation framework with a noisy Graph
Neural Network approach (denoted as DIPSGNN) to address these limitations. To
the best of our knowledge, we are the first to achieve differential privacy in
sequential recommendation with dependent interactions. Specifically, in
DIPSGNN, we first leverage piecewise mechanism to protect sensitive user
features. Then, we innovatively add calibrated noise into aggregation step of
graph neural network based on aggregation perturbation mechanism. And this
noisy graph neural network can protect sequentially dependent interactions and
capture user preferences simultaneously. Extensive experiments demonstrate the
superiority of our method over state-of-the-art differentially private
recommender systems in terms of better balance between privacy and accuracy. | Wentao Hu, Hui Fang | 2023-09-17T03:12:33Z | http://arxiv.org/abs/2309.11515v2 | # Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach
###### Abstract
With increasing frequency of high-profile privacy breaches in various online platforms, users are becoming more concerned about their privacy. And recommender system is the core component of online platforms for providing personalized service, consequently, its privacy preservation has attracted great attention. As the gold standard of privacy protection, differential privacy has been widely adopted to preserve privacy in recommender systems. However, existing differentially private recommender systems only consider static and independent interactions, so they cannot apply to sequential recommendation where behaviors are dynamic and dependent. Meanwhile, little attention has been paid on the privacy risk of sensitive user features, most of them only protect user feedbacks. In this work, we propose a novel DIfferentially Private Sequential recommendation framework with a noisy Graph Neural Network approach (denoted as DIPSGNN) to address these limitations. To the best of our knowledge, we are the first to achieve differential privacy in sequential recommendation with dependent interactions. Specifically, in DIPSGNN, we first leverage piecewise mechanism to protect sensitive user features. Then, we innovatively add calibrated noise into aggregation step of graph neural network based on aggregation perturbation mechanism. And this noisy graph neural network can protect sequentially dependent interactions and capture user preferences simultaneously. Extensive experiments demonstrate the superiority of our method over state-of-the-art differentially private recommender systems in terms of better balance between privacy and accuracy.
## 1 Introduction
Recent years have witnessed the tremendous development of various online platforms such as Facebook, Amazon and eBay, they are playing an increasingly important role in users' daily life. And recommender system is the core component of online platforms for providing personalized services, it takes advantage of abundant personal information to recommend items or services that match user preference [1]. The direct access to sensitive personal information makes recommender system a common target of privacy attacks and thus aggravates users privacy concern. [2] find that the actions of arXiv users would be potentially "visible" under targeted attack and they propose to change the privacy settings of the recommender algorithm to mitigate such privacy risk. [3] points out that mobile health applications pose new risks to privacy as they need a large volume of health data to be collected, stored and analyzed. And [4, 5] show that users' sensitive attributes such as racial information, sexual orientation and political inclinations can be precisely predicted from their interaction history in online platforms by attribute inference attack [6, 7]. Even the outputs of recommender systems are likely to reveal users' sensitive attributes and actual interactions to malicious attackers [8, 9]. In a nutshell, despite the ubiquity of recommender system in various online platforms, it is vulnerable to privacy attacks and may cause leakage of sensitive personal information. Besides, the enactment of General Data Protection Regulation (GDPR) [10] raises users' awareness of privacy and makes it more urgent to devise privacy-preserving recommender systems.
Existing recommender systems (RSs) can be mainly classified into two categories: one category is traditional RSs which includes content-based and collaborative filtering RSs; the other category is sequential RSs [11, 12]. Traditional
RSs model users' historical interactions in a static and independent way, so they can only capture the static long-term preference but ignore to consider short-term interest and the sequential dependencies among user interactions. In contrast, sequential RSs treat user interactions as a dynamic sequence and take sequential dependencies into account to capture both long-term and short-term interest [13]. Figure 1 is an illustration of sequential RS, where each user's interactions are indexed chronologically to form a historical interaction sequence. And sequential RSs need to predict users' next interacted items based on their historical behavior sequences [14; 15; 16; 17]. Due to the capability of capturing users' dynamic and evolving preferences, sequential RSs are quite important and popular in modern online platforms and have attracted much attention in academia. Therefore, we focus on sequential RSs in this paper and derive to build a privacy-preserving sequential RS that can simultaneously resist to privacy attacks and retain considerable recommendation performance.
Previous studies on privacy-preserving RSs mainly adopt anonymisation [18], encryption [19; 20] and differential privacy [21; 22] to protect sensitive user information. The drawback of anonymisation-based RSs is that they cannot provide a provable privacy guarantee or resist to reconstruction attacks [23; 24]. Meanwhile, encryption-based RSs bring heavy computation overhead and fail to prevent attackers from inferring sensitive user information based on the exposed output of RSs [4]. So we resort to differential privacy [25] to build a privacy-preserving sequential RS on account of its provable privacy guarantee and lightweight computation overhead. On the other hand, differentially private sequential RSs are quite under-explored because of the challenge to consider sequential dependencies in differential privacy. Existing differentially private RSs are all based on traditional RSs, which can further be divided into two categories. The first category focuses on neighbor-based collaborative filtering [4; 26; 27], where noise is added into the calculation of item/user similarity matrix to protect users' sensitive interactions. The second category is based on matrix factorization [22; 21; 28; 29]. They add noise into the objective function or gradients in order to protect users' ratings or the fact that a user has rated an item. Despite their effectiveness being partially validated, we argue that these solutions suffer from the following three major limitations.
First, they model users' preferences based on static rating matrix, thus cannot capture dynamic and evolving preferences in users' interaction sequences. Second, interactions are considered to be independent and equally important in existing differentially private RSs. Nevertheless, users' behavior sequences are characterized by complicated sequential dependencies [15; 30] and the most recent interactions may have greater influence on user preferences [11], thus these solutions are not applicable in sequential recommendation. Third, they only protect users' explicit ratings or implicit feedbacks while neglect protection on users' side information such as user demographics. [31] show that there are privacy risks on users' side information, for example, user gender can be accurately inferred from users' ratings. Although [6] design a dual-stage perturbation strategy to protect sensitive user features, they ignore to protect on users' interactions let alone dependent behavior sequences. In short, none of these differentially private RSs focus on protecting users' sensitive features and interactions simultaneously in order to achieve a better balance between privacy and utility.
To bridge the above research gaps, we propose a differentially private sequential recommendation framework called DIPSGNN to protect user features and sequentially dependent interactions at the same time. Specifically, we first take advantage of piecewise mechanism [32] to protect users' sensitive features at input stage and use the perturbed features to initialize user embedding. Then, a gated graph neural network [33] is employed to capture sequential dependencies and dynamic preferences in users' behavior sequences. In this gated graph neural network, we innovatively add calibrated noise into the aggregation step based on aggregation perturbation mechanism [34] to prevent attackers from inferring users' private interactions based on the exposed recommendation results. To summarize, the main contributions of our work are as follows:
Figure 1: A toy example of sequential RS. Each user’ interactions are indexed chronologically to form a interaction sequence. And sequential RSs need to predict the next items that users will interact based on historical interaction sequences.
1. To the best of knowledge, we are the first to achieve differential privacy for dependent interactions in sequential recommendation.
2. We propose a novel aggregation scheme that can protect time-dependent interactions and capture user preferences without considerably impairing performance.
3. Both users' features and interactions are well-protected in DIPSGNN which offers a better balance between privacy and accuracy.
4. Theoretical analysis and extensive experiments on three real-world datasets demonstrate the effectiveness and superiority of DIPSGNN over state-of-the-art differentially private recommender systems.
The rest of this article is structured as follows. Section 2 reviews the related work. Section 3 introduces the preliminaries and problem setup. Section 4 elaborates on the technical details of our proposed DIPSGNN. Section 5 discusses the experimental results and analyses. Finally, we conclude this article and propose several future directions in Section 6.
## 2 Related Work
In this section, we review three lines of studies related to our work including: sequential recommendation, differentially private recommender systems and privacy-preserving graph neural network.
### Sequential Recommendation
Sequential recommendation recommends the next item based on the chronological sequence of users' historical interactions. The earliest work [35] on sequential recommendation leverages Markov decision process to model item transition. Later, FPMC [36] fuses the idea of Markov chains with matrix factorization, it learns the first order transition matrix by assuming next item is only relevant with the previous one. Nevertheless, these conventional methods combine past components independently and neglect long range dependency. For stronger ability to model long-term sequential dependency, deep learning based methods represented by recurrent neural networks (RNN) [37; 38; 39] and attention mechanism [15; 40; 41] have been in blossom in sequential recommendation. For example, [39] combines the architecture of RNN and Convolutional Neural Network to capture complex dependencies in user behavior sequences. And attention-based neural networks such as Transformer [42; 43] and BERT [16] use attention scores to explore item-item relationships and achieve remarkable performance in sequential recommendation. Recently, graph neural networks (GNN) [44; 45; 17] have attracted much interest in sequential recommendation, as the input data can be represented by graphs. SRGNN [46] converts user behavior sequences into directed graphs and learns item embeddings on these graphs with gated graph neural network [33]. APGNN [47] promotes SRGNN by fusing personalized user characteristics with item transition patterns in user behavior graphs in order to better model user preferences. SUGRE [30] reconstructs loose item sequences into tight item-item interest graphs based on metric learning, which further improves the performance of GNN in sequential recommendation. By elaborately modeling user interest based on graphs constructed from interaction sequences, GNN-based methods have demonstrated great effectiveness in sequential recommendation.
### Differentially private recommender systems
Differential privacy [25] has been introduced into recommender systems since [4]. It adds noise into calculation of item-similarity matrix in order to protect users' explicit ratings. After that, [27] protect users' implicit feedbacks by applying binary randomized response [48] on them and then send the perturbed feedbacks to the centralized server to calculate a private item-similarity matrix. Aside from neighborhood-based collaborative filtering, there is another line of works which are based on matrix factorization (MF) [49]. [22] integrate objective perturbation into MF to make sure the final item embedding learned by MF satisfy differential privacy. Besides, they decompose the noise component into small pieces so that it can fit with the decentralized system. [21] build a differentially private MF framework by using a novel connection between differential privacy and bayesian posterior sampling via stochastic gradient langevin dynamics. And [29] further divide user ratings into sensitive and non-sensitive ones then add different amount of noise on these two kinds of ratings when calculating gradients. Finally, they achieve a uniform privacy guarantee for sensitive ratings. However, these differentially private recommender systems can only protect static rating matrix and assume that interactions are independent and equally important. They largely ignore the sequential dependencies and users' dynamic preference, which makes them inadequate for sequential recommendation. Meanwhile, the protection on user features is overlooked in these works. Though [6] shed light on the protection towards user demographics, it lacks the ability to protect interactions let alone dependent behavior sequences.
### Privacy-preserving graph neural network
Graph neural networks (GNNs) have been broadly employed in sequential recommendation as users' interaction sequences can be easily transformed into graph data. However, rich node features and edge information in GNNs make them vulnerable to privacy attacks. [50; 51; 52] show that private edges can be recovered from GNNs via the influence of particular edge on model output. To mitigate such privacy risk, various privacy-preserving techniques for GNNs are emerging. [53] propose a privacy-preserving representation learning framework on graphs from mutual information perspective. [54] perturb graphs based on combinatorial optimization to protect private node labels. Nevertheless, these methods cannot provide a formal privacy guarantee. To address this limitation, differential privacy (DP) has been applied to protect privacy in GNNs. [50] propose LapGraph to provide DP protection for sensitive edges by adding laplace noise into adjacency matrix. It can be regarded as a data mask method with formal differential privacy guarantee. But [55; 56] argue that adding noise into adjacency matrix destroys original graph structure and ruins the neighborhood aggregation inside GNNs. To remedy this defect, [34] propose an aggregation perturbation mechanism to safeguard the privacy of edges in undirected and unweighted graphs, which differs from the directed and weighted graphs in our work. More specifically, it forces the sensitivity of embedding update process equal to one by normalizing each row of embedding matrix. However, we find that this normalization step makes embedding matrix deviate too much from its true value and brings excessive noise. To resolve this problem, we normalize the rows of embedding matrix with tunable threshold for different datasets and achieves better utility.
## 3 Preliminaries
### Differential Privacy
Differential privacy [25] is a powerful tool to provide formal privacy guarantee when processing sensitive data. It ensures the output of a randomized algorithm is insensitive to the deletion/addition of one individual record in a database by adding calibrated noise to the algorithm. The formal definition of differential privacy is as follows.
**Definition 1** (Differential Privacy).: _A randomized algorithm \(\mathcal{A}\): \(\mathcal{X}^{n}\rightarrow\mathcal{Y}\) is \((\epsilon,\delta)\)-differentially private, if for all neighboring datasets \(X,X^{\prime}\in\mathcal{X}^{n}\) and all \(S\subseteq\mathcal{Y}\),_
\[\Pr[\mathcal{A}(X)\in S]\leq e^{\epsilon}\cdot\Pr\left[\mathcal{A}\left(X^{ \prime}\right)\in S\right]+\delta. \tag{1}\]
where \(\Pr[\cdot]\) represents probability, \(\epsilon>0\) is privacy budget and \(\delta>0\) is failure probability. A smaller \(\epsilon\) or \(\delta\) brings a stronger privacy guarantee but forces us to add more noise in the randomized algorithm. Besides, neighboring datasets denote a pair of datasets differing at most one record. In our work, user interaction sequences are transformed into graphs and we focus on edge-level differential privacy, so neighboring datasets represent two graphs differ at only one edge. Besides graph topology data, our work also involves multidimensional numerical and categorical user feature data. [57] propose a hybrid differential privacy notion to properly perturb heterogeneous data types in social networks. They utilize edge-level differential privacy to protect graph topology data and local differential privacy [58] to protect user attributes. Inspired by hybrid differential privacy notion, we also integrate local differential privacy into our model to protect user features, as user feature data and graph topology data have different characteristics. The formal definition of local differential privacy is as follows:
**Definition 2** (Local Differential Privacy).: _A randomized function \(f(\cdot)\) satisfies \(\epsilon-\mathrm{LDP}\) if and only if for any two respective inputs \(x\) and \(x^{\prime}\) of one user and all output \(y\),_
\[\Pr\left[f(x)=y\right]\leq e^{\epsilon}\cdot\Pr\left[f\left(x^{\prime}\right) =y\right], \tag{2}\]
where \(\epsilon\) is also called privacy budget. A lower \(\epsilon\) provides stronger privacy guarantee but forces us to add heavier noise on each user's data. In local differential privacy, the perturbation of each user's data guarantees that an external attacker cannot easily infer which of any two possible inputs \(x\) and \(x^{\prime}\) from one user is used to produce the output \(y\). Thus, the true input value of this user is protected with high confidence.
### Problem Statement
Let \(U=\{u_{i}\}_{i=1}^{|U|}\) and \(V=\{v_{j}\}_{j=1}^{|V|}\) be the set of users and items in the system, respectively. Each user \(u\) has a behavior sequence in the chronological order \(S^{u}=\{v_{s}^{u}\}_{s=1}^{n_{u}}\) (\(v_{s}^{u}\in V\), and \(n_{u}\) is the length of user \(u\)'s behavior sequence) and a sensitive feature vector \(\mathbf{x}_{u}\). We convert each \(S^{u}\) into a directed weighted graph \(\mathcal{G}^{u}=(\mathcal{V}^{u},\mathcal{E}^{u})\), where \(\mathcal{V}^{u}\) and \(\mathcal{E}^{u}\) represent the set of item nodes, and the set of edges, respectively. All numerical features in \(\mathbf{x}_{u}\) are normalized into \([-1,1]\) with min-max normalization and all categorical features are encoded into one-hot vectors. Based on \(\{\mathcal{G}^{u}|u\in U\}\) and \(\{\mathbf{x}_{u}|u\in U\}\), the goal of our work is to _build a differentially private sequential recommendation framework to
generate accurate top-\(K\) recommendation list for each user, meanwhile prevent outside malicious attackers from inferring users' sensitive features and sequentially dependent interactions._
## 4 Methodology
Our proposed DIPSGNN seeks to protect sensitive user features and interactions without sacrificing considerable performance in sequential recommendation. Figure 2 depicts the overview of DIPSGNN: we first protect users' features by perturbing them at input stage. Then, we convert each user's behavior sequence \(S^{u}\) into a user behavior graph \(\mathcal{G}^{u}\) and feed it into DIPSGNN to update user embedding and item embedding. And we add calibrated noise in DIPSGNN layer to prevent the leakage of user interactions from recommendation results. Finally, the updated user embedding and item embedding are concatenated to make next-item recommendation. We will elaborate on details of these components subsequently.
### User Feature Protection
To protect sensitive user features, we adopt the strategy to perturb them at input stage with local differential privacy. Concretely, we add noise to raw user features based on piecewise mechanism (PM) [32], as it can handle multi-dimensional numerical and categorical features. With the post-processing property of differential privacy, user features will keep private during recommendation. Suppose user \(u\)'s feature vector consists of \(n\) different features, where each numerical feature is represented by a single number and each categorical feature is represented by a single one-hot vector. Thus, user \(u\)'s feature vector \(\mathbf{x_{u}}=\mathbf{x_{1}}\oplus\mathbf{x_{2}}\oplus\cdots\oplus\mathbf{x _{n}}\in\mathbb{R}^{d_{0}}\) (\(d_{0}\geq n\)), where \(\oplus\) is the concatenation operation. In this part, we aim to perturb user features with privacy budget \(\epsilon_{1}\). If we perturb each feature equally, then the privacy budget for each feature shrinks to \(\epsilon_{1}/n\). This will harm the utility of the perturbed data as the incurred noise variance is not minimized in this case. To achieve the lowest incurred noise variance, we randomly select \(k\) (\(k<n\)) features from \(\mathbf{x}_{u}\) and perturb each of them with privacy budget \(\epsilon_{1}/k\), while the non-selected features are dropped by masking them with \(0\) to avoid privacy leakage. We further follow [32] to set \(k\) as:
\[k=\max\{1,\min\{n,\lfloor\frac{\epsilon_{1}}{2.5}\rfloor\}\}. \tag{3}\]
For each \(\mathbf{x}_{i}\) in the \(k\) selected features, if it is a numerical feature, we first normalize it into \([-1,1]\) with min-max normalization and then perturb it by executing Algorithm 1 with privacy budget \(\epsilon=\frac{\epsilon_{1}}{k}\). On the contrary, if the selected \(\mathbf{x}_{i}\) is a categorical feature, as it is represented by an one-hot vector, we perturb it with optimized unary encoding (OUE) method [59]. Because OUE method will minimize the variance when perturbed one-hot vector has a higher dimension. The details of OUE method are shown in Algorithm 2. By integrating the perturbation for numerical features and categorical features, the whole process of PM are depicted in Algorithm 3. Theorem 1 guarantees that it satisfies \(\epsilon_{1}\)-local differential privacy.
Figure 2: The framework of DIPSGNN. First, user features are perturbed and protected at input stage. Next, we construct user behavior graph based on user interaction sequence. Then, user behavior graph is protected with our newly designed DIPSGNN at embedding update stage. Finally, we utilize updated user embedding and item embedding to make next-item recommendation without leakage of user features and interactions.
**Theorem 1**.: _Algorithm 3 satisfies \(\epsilon_{1}\)-local differential privacy._
Proof.: Please see appendix.
### Behavior Graph Construction
To capture the complex sequential dependencies and transition patterns, we convert each user's behavior sequence \(S^{u}\) into a user behavior graph \(\mathcal{G}^{u}=(\mathcal{V}^{u},\mathcal{E}^{u})\). Inspired by [47, 46], \(\mathcal{G}^{u}\) is a directed and weighted graph whose topological structure can be represented by two adjacency matrices, \(\mathbf{A}_{u}^{out}\) and \(\mathbf{A}_{u}^{in}\). The weights in \(\mathbf{A}_{u}^{out},\mathbf{A}_{u}^{in}\) are the
occurrence number of consecutive interactions between two items. For instance, the weight in position \([i,j]\) of \(\mathbf{A}_{u}^{out}\) is \(\text{Count}(v_{i},v_{j})\), which means the number of times that user \(u\) interacts with \(v_{i}\) first, and then immediately with \(v_{j}\). It should be noted that we drop the normalization step [47; 46] to divide \(\text{Count}(v_{i},v_{j})\) by the outdegree of \(v_{i}\), otherwise the deletion of one interaction in \(S^{u}\) will affect one row rather than one element in \(\mathbf{A}_{u}^{out}\) or \(\mathbf{A}_{u}^{in}\), which impedes the subsequent differential privacy analysis.
\[\mathbf{A}_{u}^{out}[i,j] =\text{Count}(v_{i},v_{j}), \tag{4}\] \[\mathbf{A}_{u}^{in}[i,j] =\text{Count}(v_{j},v_{i}).\]
### User Behavior Protection: DIPSGNN
As we mentioned before that malicious outside attackers can infer user interactions from recommendation results, we need to add noise into the recommendation algorithm in order to protect interactions. Instead of perturbing user behavior graph at input stage, we choose to add calibrated noise into GNN propagation step to protect user interactions. The reason lies in that perturbation on user behavior graph will destroy original graph structure and distort aggregation process inside GNNs [55; 56], we will show the superiority of aggregation perturbation over graph structure perturbation by empirical experiments. In this section, we will dive into the details of aggregation perturbation inside DIPSGNN. As user characteristics impact their preferences, we consider user features when initializing user embedding. User \(u\)'s embedding \(\mathbf{e}_{u}^{(0)}\) and item \(v_{i}\)'s embedding \(\mathbf{e}_{i}^{(0)}\) are initialized as:
\[\mathbf{e}_{u}^{(0)}=\widehat{\mathbf{x}}_{u}\mathbf{E}_{U},\;\mathbf{e}_{i}^ {(0)}=\mathbf{x}_{i}\mathbf{E}_{V}, \tag{5}\]
where \(\widehat{\mathbf{x}}_{u}\in\mathbb{R}^{1\times d_{0}}\) is perturbed feature of user \(u\) in Algorithm 2 and \(\mathbf{E}_{U}\in\mathbb{R}^{d_{0}\times d^{\prime}}\) is user embedding matrix. Similarly, \(\mathbf{x}_{i}\in\mathbb{R}^{1\times|V|}\) and \(\mathbf{E}_{V}\in\mathbb{R}^{|V|\times d}\) are respectively item one-hot encoding and item embedding matrix. Then, we feed the initialized user embedding and item embedding into DIPSGNN and update them iteratively.
At each time step \(t\) of node update, we fuse user embedding \(\mathbf{e}_{u}^{(t-1)}\) with item embedding \(\mathbf{e}_{i}^{(t-1)}\) to update them together. \(\mathbf{h}_{i}^{(t-1)}=\mathbf{e}_{i}^{(t-1)}\oplus\mathbf{e}_{u}^{(t-1)}\in \mathbb{R}^{1\times(d+d^{\prime})}\) denotes the joint embedding at \(t-1\) time for item \(v_{i}\), where \(\oplus\) is the concatenation operation. Combining the joint embedding of all items, we can get a joint embedding matrix \(\mathbf{H}^{(t-1)}\in\mathbb{R}^{|V|\times(d+d^{\prime})}\). To bound the sensitivity of joint embedding matrix and facilitate the privacy analysis, we clip each row of \(\mathbf{\bar{H}}^{(t-1)}\) to make its norm equal to a constant \(C\). And \(C\) can be properly tuned on different datasets for better utility.
\[\mathbf{\bar{H}}_{i}^{(t-1)}=\mathbf{h}_{i}^{(t-1)}*\frac{C}{||\mathbf{h}_{i }^{(t-1)}||_{2}},\;i=1,\cdots,|V|, \tag{6}\]
where \(\mathbf{\bar{H}}_{i}^{(t-1)}\) means \(i\)-th row of \(\mathbf{\bar{H}}^{(t-1)}\). Then, we use sum aggregation to aggregate information from incoming and outgoing neighbors. This step directly accesses the adjacency matrices, \(\mathbf{A}_{u}^{in}\) and \(\mathbf{A}_{u}^{out}\), which contain sensitive structure information of interactions in user behavior sequences. Therefore, we need to add calibrated noise in this step to protect sensitive interactions:
\[\mathbf{\widehat{H}}_{out}^{(t)} =\mathbf{A}_{u}^{out}\cdot\mathbf{\bar{H}}^{(t-1)}+\mathcal{N}( \sigma^{2}\mathbb{I}), \tag{7}\] \[\mathbf{\widehat{H}}_{in}^{(t)} =\mathbf{A}_{u}^{in}\cdot\mathbf{\bar{H}}^{(t-1)}+\mathcal{N}( \sigma^{2}\mathbb{I}),\]
where \(\mathcal{N}(\sigma^{2}\mathbb{I})\in\mathbb{R}^{|V|\times(d+d^{\prime})}\) denotes a noise matrix with each element drawn from Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\) independently and \(\mathbf{\widehat{H}}_{out}^{(t)},\mathbf{\widehat{H}}_{in}^{(t)}\) are privately aggregated embedding matrices. Theorem 2 will prove that this step satisfies edge-level differential privacy. Besides, the post-processing property of differential privacy [60] guarantees that any operation afterwards will keep private with respect to adjacency matrices, thus we can protect sensitive interactions during recommendation. After neighborhood aggregation, we conduct a linear transformation on the aggregated embedding matrices and get intermediate representation of node \(v_{i}\) as follows:
\[\mathbf{a}_{out_{i}}^{(t)} =(\mathbf{\widehat{H}}_{out}^{(t)}\cdot\mathbf{W}_{out})_{i}+ \mathbf{b}_{out}, \tag{8}\] \[\mathbf{a}_{in_{i}}^{(t)} =(\mathbf{\widehat{H}}_{in}^{(t)}\cdot\mathbf{W}_{in})_{i}+ \mathbf{b}_{in},\] \[\mathbf{a}_{i}^{(t)} =\mathbf{a}_{out_{i}}^{(t)}\oplus\mathbf{a}_{in_{i}}^{(t)},\]
where \(i\) in \((\mathbf{\widehat{H}}^{(t)}\cdot\mathbf{W})_{i}\) denotes the \(i\)-th row, \(\mathbf{b}_{out},\mathbf{b}_{in}\in\mathbb{R}^{1\times d}\) are bias terms, and \(\mathbf{W}_{out},\mathbf{W}_{in}\in\mathbb{R}^{(d+d^{\prime})\times d}\) are learnable parameter matrices. \(\mathbf{b}_{out},\mathbf{b}_{in},\mathbf{W}_{out},\mathbf{W}_{in}\) are shared by all users. Then, we leverage gated recurrent unit
(GRU) to combine intermediate representation of node \(v_{i}\) with its hidden state of previous time, and then update the hidden state of current time:
\[\mathbf{e}_{i}^{(t)}=\text{GRU}(\mathbf{a}_{i}^{(t)},\mathbf{e}_{i}^{(t-1)}). \tag{9}\]
It worths noting that \(\mathbf{e}_{i}^{(t-1)}\) has been normalized in Equation (6) as a part of \(\mathbf{h}_{i}^{(t-1)}\) and user embedding \(\mathbf{e}_{u}\) will also be updated implicitly in this process. The whole aggregation step in DIPSGNN are shown in Algorithm 4, and Theorem 2 proves its privacy guarantee.
**Theorem 2**.: _For any \(\delta\in(0,1)\), propagation steps \(T\geq 1\), and noise standard deviation \(\sigma>0\), Algorithm 4 satisfies edge-level \((\epsilon_{2},\delta)\)-differential privacy with \(\epsilon_{2}=\frac{TC^{2}}{2\sigma^{2}}+\frac{C\sqrt{2T\log(1/\delta)}}{\sigma}\)._
Proof.: Please see appendix.
### Prediction and Training
After finishing the update of all DIPSGNN layers, we get the final representation of all items. Then, we need to obtain a unified representation for each user to conduct next-item prediction. First, we apply a readout function to extract each user's local preference vector \(\mathbf{z}_{l}\) and global preference vector \(\mathbf{z}_{g}\) from item representations. \(\mathbf{z}_{l}\) is defined as \(\mathbf{e}_{u_{u}}^{(T)}\), which is the final representation of the last item in user \(u\)'s behavior sequence \(S_{u}\), and \(\mathbf{z}_{g}\) is defined as:
\[\alpha_{s} =\mathbf{q}^{\top}\sigma(\mathbf{W}_{1}\mathbf{e}_{n_{u}}^{(T)}+ \mathbf{W}_{2}\mathbf{e}_{s}^{(T)}+\mathbf{c}), \tag{10}\] \[\mathbf{z}_{\text{g}} =\sum_{s=1}^{|n_{u}|}\alpha_{s}\mathbf{e}_{s}^{(T)},\]
where \(e_{s}^{(T)}\) refers to the final representation of the \(s\)-th item in \(S_{u}\), \(\mathbf{q}\in\mathbb{R}^{d}\) and \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{d\times d}\) are learnable weight matrices. Following [47], we concatenate updated user embedding \(\mathbf{e}_{u}^{(T)}\) with the local and global preference vectors, then user \(u\)'s unified representation \(\mathbf{z}_{u}\) can be expressed as:
\[\mathbf{z}_{u}=\mathbf{W}_{3}(\mathbf{z}_{g}\oplus\mathbf{z}_{l}\oplus \mathbf{e}_{u}^{(T)}), \tag{11}\]
where \(\mathbf{W}_{3}\in\mathbb{R}^{d\times(2d+d^{\prime})}\) is a learnable matrix and \(d,d^{\prime}\) are the dimension of item and user embedding respectively. With user \(u\)'s unified representation and final representation of all items, we can compute the predicted probability of the next item being \(v_{i}\) by:
\[\hat{\mathbf{y}}_{i}=\frac{\exp(\mathbf{z}_{u}^{\top}\cdot\mathbf{e}_{i}^{(T )})}{\sum_{j=1}^{|V|}\exp(\mathbf{z}_{u}^{\top}\cdot\mathbf{e}_{j}^{(T)})}. \tag{12}\]
And, the loss function is cross-entropy of the prediction \(\hat{\mathbf{y}}\) and the ground truth \(\mathbf{y}\):
\[\mathcal{L}(\hat{\mathbf{y}})=-\sum_{i=1}^{|V|}\mathbf{y}_{i}\log \left(\hat{\mathbf{y}}_{i}\right)+\left(1-\mathbf{y}_{i}\right)\log\left(1- \hat{\mathbf{y}}_{i}\right). \tag{13}\]
Finally, we use the back-propagation through time (BPTT) algorithm to train the proposed DIPSGNN.
## 5 Experiments
### Experimental Settings
We evaluate the performance of all methods on three real-world datasets: ML-1M2, Yelp3 and Tmall4. In ML-1M and Tmall, we have 3 categorical user features like age range, gender and occupation, while in Yelp, we have 6 numerical user features like number of ratings, average rating and etc. For Tmall dataset, we use the click data from November 1 to November 7. After obtaining the datasets, we adopt 10-core setting to filter out inactive users and items following [30]. Table 1 shows their statistics after preprocessing. For each user, we use the first \(80\%\) of its behavior sequence as the training set and the remaining \(20\%\) constitutes the test set. And hyperparameters are tuned on the validation set which is a random \(10\%\) subset of the training set. The codes for our experiments will be released upon acceptance.
Footnote 2: grouplens.org/datasets/movielens/.
Footnote 3: [https://www.yelp.com/dataset](https://www.yelp.com/dataset).
Footnote 4: tianchi.aliyun.com/dataset/dataDetail?dataId=42.
To evaluate the performance of DIPSGNN, we compare it with six non-private baselines (BPRMF, SASRec, HRNN, LightGCN, SRGNN, and APGNN) and two private ones (DPMF and EdgeRand):
* **BPRMF**[61] is a widely used learning to rank model with a pairwise ranking objective function;
* **SASRec**[15] is a sequential prediction model based on attention mechanism;
* **HRNN**[37] uses a Hierarchical RNN model to provide personalized prediction in session-based recommendation;
* **LightGCN**[62] is a simplified graph convolution network for recommendation;
* **SRGNN**[46] utilizes the gated graph neural networks to capture item transitions in user behavior sequences;
* **APGNN**[47] considers user profiles based on SRGNN and captures item transitions in user-specific fashion;
* **DPMF**[22] is a differentially private matrix factorization method. The original one are based on explicit feedback, we denote it as DPMF_exp. There is only implicit feedback in Tmall, so we modify it with the same negative sampling strategy as BPRMF to fit in implicit feedback and denote it as DPMF_imp. We also evaluate DPMF_imp on other three datasets where ratings larger than \(1\) are regarded as positive interactions;
* **EdgeRand** is a graph structure perturbation method to protect user behaviors and the protection on user features is the same as DIPSGNN. Specifically, we add gaussian noise to the adjacency matrices of user behavior graphs to achieve the same level \((\epsilon,\delta)\)-differential privacy as DIPSGNN for fair comparison. It can be seen as a data desensitization method for graphs with formal differential privacy guarantee. The only difference between EdgeRand and existing LapGraph [50] is that one uses gaussian mechanism and the other uses laplace mechanism. As DIPSGNN uses the notation of \((\epsilon,\delta)\)-differential privacy, so we take EdgeRand as our baseline and regard it as the state-of-the-art method to protect edges in GNN following [34].
Motivated by [46; 47], we adopt Recall@\(K\) and MRR@\(K\) with \(K=5,10,20\) as our evaluation metrics. We run all the evaluations \(5\) times with different random seeds and report mean value for each method. The maximum length for
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & ML-1M & Yelp & Tmall \\ \hline Users & 5,945 & 99,011 & 132,724 \\ Items & 2,810 & 56,428 & 98,226 \\ Interactions & 365,535 & 1,842,418 & 3,338,788 \\ Avg. Length & 96.07 & 27.90 & 36.18 \\ User features & 3 & 6 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of datasets after preprocessing.
user behavior sequences are \(100,30,50\) for ML-1M, Yelp and Tmall respectively, which are slightly larger than the average length of user behavior sequences. We set the dimension of item embedding \(d=100\) and user embedding dimension \(d^{\prime}=50\) following [47], other hyperparameters are tuned to their optimal values on validation set. The model is trained with Adam optimizer and we train DIPSGNN 10 epochs as we observe that our model can reach convergence at that time. For all baseline methods, we use the optimal hyperparameters provided in the original papers.
As for the privacy specification setting, the privacy budget \(\epsilon_{1}\) for protecting user features in EdgeRand and DIPSGNN is set to \(20\) by default following [6]. And the privacy budget \(\epsilon_{2}\) for protecting user behaviors is set to \(5\) by default in all private methods, we numerically calibrate the noise standard deviation \(\sigma\) according to this privacy budget following [34]. \(\delta\) is set to be smaller than the inverse number of edges. Meanwhile, the different privacy budgets for user features and user behaviors capture the variation of privacy expectations of heterogeneous data types, if all the data types are treated as equally sensitive, it will add too much unneeded noise and sacrifice utility [57]. The effect of different privacy budgets will be further discussed by empirical experiments.
### Performance Comparisons
Table 2 reports the performance comparisons between DIPSGNN and other baselines in terms of Recall@20, MRR@20, Recall@10 and [email protected] We have the following observations: (1) the non-private graph based method SRGNN and APGNN achieve the best performance among non-private methods, demonstrating the strong power of adopting graph neural networks to capture complex item transitions and dynamic user preferences in sequential recommendation. (2) As for private baselines, DPMF with explicit feedback outperforms that with implicit feedback, because explicit feedback carries more accurate information about user preferences. But these two DPMF methods both perform much worse than EdgeRand and DIPSGNN. We attribute this phenomenon to that DPMF assumes interactions as independent while EdgeRand and DIPSGNN consider complex dependencies between items by building model on user behavior graphs. This highlights the importance to take into account the dependencies between interactions rather than treating them as independent when protecting user interactions. (3) Our DIPSGNN consistently yields better performance than the state-of-the-art EdgeRand method for protecting interactions on all three datasets. Their relative gap in terms of Recall@20 is \(19.99\%,6.16\%,2.95\%\) on ML-1M, Yelp and Tmall respectively. And their difference is also significant in terms of other metrics, which verifies the effectiveness of DIPSGNN. Moreover, DIPSGNN even outperforms the best non-private baseline APGNN in terms of MRR@20, Recall@10 and MRR@10 on ML-1M. A possible explanation is that controlled amount of noise during training may improve the generalization performance on test set [63]. (4) The performance of DIPSGNN is more competitive than commonly used deep learning recommendation methods such as LightGCN, HRNN and SASRec. And it also beats SRGNN on ML-1M and Yelp. This demonstrates the feasibility of applying DIPSGNN in real-world applications to provide accurate recommendation and protect sensitive user information simultaneously.
Footnote 5: We can get similar results regarding \(K=5\). Due to space limitation, we do not report.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{ML-1M} & \multicolumn{4}{c}{Yelp} & \multicolumn{4}{c}{Tmall} \\ \cline{2-13} & & R@20 & M@20 & R@10 & M@10 & R@20 & M@20 & R@10 & M@10 & R@20 & M@20 & R@10 & M@10 \\ \hline \multirow{8}{*}{Nonp} & BPRMF & 4.56 & 0.81 & 2.20 & 0.65 & 4.38 & 1.01 & 2.58 & 0.89 & 15.19 & 6.77 & 12.28 & 6.57 \\ & SASRec & 6.69 & 1.20 & 3.44 & 0.98 & 4.70 & 1.03 & 2.69 & 0.89 & 14.46 & 5.08 & 10.46 & 4.81 \\ & HRNN & 18.43 & 4.70 & 11.70 & 4.24 & 1.49 & 0.33 & 0.83 & 0.29 & 13.34 & 5.82 & 10.45 & 5.62 \\ & LightGCN & 8.53 & 1.67 & 4.59 & 1.40 & 6.18 & 1.41 & 3.65 & 1.24 & OOM & OOM & OOM & OOM \\ & SRGNN & 21.01 & 5.15 & 13.09 & 4.61 & 7.24 & 1.77 & 4.41 & 1.58 & 24.85 & 11.55 & 20.42 & 11.24 \\ & APGNN & 21.26 & 5.29 & 13.23 & 4.75 & 7.26 & 1.81 & 4.46 & 1.62 & 24.86 & 11.57 & 20.38 & 11.26 \\ \hline \multirow{4}{*}{Priv} & DPMF \_imp & 3.15 & 0.66 & 1.77 & 0.57 & 0.41 & 0.10 & 0.25 & 0.09 & 1.95 & 0.36 & 1.18 & 0.30 \\ & DPMF\_exp & 3.40 & 0.71 & 1.89 & 0.61 & 1.30 & 0.32 & 0.83 & 0.28 & - & - & - & - \\ & EdgeRand & 17.61 & 4.28 & 10.88 & 3.82 & 6.82 & 1.66 & 4.17 & 1.49 & 20.35 & 9.80 & 16.61 & 9.54 \\ \hline \multirow{2}{*}{Ours} & DIPSGNN & **21.13** & **6.11** & **14.04** & **5.63** & **7.24** & **1.78** & **4.45** & **1.59** & **20.95** & **9.98** & **17.15** & **9.71** \\ & Improve & 19.99\%* & 42.76\%* & 29.04\%* & 47.38\%* & 6.16\%* & 7.23\%* & 6.71\%* & 6.71\%* & 2.95\%* & 1.84\%* & 3.25\%* & 1.78\%* \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparative results of different approaches on all datasets. Bold number means DIPSGNN outperforms EdgeRand. Statistical significance of pairwise differences between DIPSGNN vs. EdgeRand is determined by a paired t-test (\({}^{*}\) for \(p<0.01\)). OOM denotes out-of-memory and R denotes Recall, M denotes MRR for succinctness. Nonp means non-private baselines and Priv means private baselines.
### Effect of Privacy Budget
To analyze the trade-off between privacy and accuracy with different privacy budgets, we first fix the budget for protecting user features \(\epsilon_{1}\) at its default value \(20\) and change the budget for protecting user interactions \(\epsilon_{2}\) in \(\{3,4,5\}\). The experimental results in terms of Recall@20 are presented in Figure 3. We can observe that both EdgeRand and DIPSGNN generally perform better with a larger \(\epsilon_{2}\), as the noise added on the adjacency matrices or item embedding matrix in GNN will both decrease when \(\epsilon_{2}\) rises thus brings more accurate recommendation. Meanwhile, DIPSGNN always outperforms EdgeRand except when \(\epsilon_{2}=3\) on Tmall. And the performance gap between them tends to enlarge with a larger \(\epsilon_{2}\), this confirms again the effectiveness of our proposed DIPSGNN for protecting user interactions. Similarly, we also conduct experiments by changing the budget for user feature protection \(\epsilon_{1}\) in \(\{10,20,30\}\) with the fixed privacy budget \(\epsilon_{2}=5\) for user interaction protection. Figure 4 shows experimental results in terms of Recall@20. The performance of EdgeRand and DIPSGNN consistently rises with a larger \(\epsilon_{1}\). As we add less noise to user features when \(\epsilon_{1}\) increases, it will facilitate more accurate modeling of user interests, leading to more satisfying recommendation results. And DIPSGNN outperforms EdgeRand all the time which again shows the superiority.
### Hyperparameter Study
#### 5.4.1 Effect of GNN Propagation Steps
In this section, we first study the influence of GNN propagation steps \(T\) on the performance of EdgeRand and DIPSGNN. We fix privacy guarantees at their default values and change GNN propagation steps \(T\) in \(\{1,2,3\}\). Figure 5 shows the experimental results obtained on three datasets. It can be observed that the performance of DIPSGNN and EdgeRand both obviously decrease with larger aggregation steps \(T\). Besides, the performance decline of EdgeRand is apparently more pronounced than DIPSGNN. This is because EdgeRand adds noise to the original adjacency matrix which distorts the neighborhood aggregation inside GNNs [55, 56] and the distortion effect will become larger with more aggregation steps. Contrarily, DIPSGNN aggregates information from neighbors based on unperturbed adjacency matrix. The slight performance decrease of DIPSGNN comes from the fact that we need to add more noise on node embedding matrix to maintain the same privacy guarantee, this can be derived from Theorem 2.
#### 5.4.2 Effect of Embedding Norm
We then investigate how the norm of each row in node embedding matrix affects the performance of DIPSGNN. As we can see from Theorem 2 that a smaller \(C\) forces us to add less noise on the embedding matrix at the same privacy
Figure 4: Recall@20 of DIPSGNN and EdgeRand with different \(\epsilon_{1}\) (privacy budget for protect user features).
Figure 3: Recall@20 of DIPSGNN and EdgeRand with different \(\epsilon_{2}\) (privacy budget for protect user interactions).
guarantee, but if \(C\) is too small, the elements of embedding matrix may diverge too much from their true value. To find the proper \(C\) for each dataset, we fix privacy guarantees at their default values, GNN propagation steps \(T=1\), then select \(C\) from \(\{0.2,0.4,0.6,0.8,1\}\) for ML-1M, \(\{0.1,0.3,0.5,0.7,0.9\}\) for Yelp and Tmall. The experimental results in terms of Recall@20 are shown in Figure 6. In general, Recall@20 continues to fall with a larger \(C\) on ML-1M and Yelp, it reaches the highest value at \(C=0.2\) and \(C=0.1\) on ML-1M and Yelp respectively. On Tmall, Recall@20 first increases and reaches the highest value at \(C=0.3\), then decreases when \(C\) becomes larger. We can make a general conclusion that a large \(C\) may bring excessive noise and low utility on these three datasets.
### Importance of User Features
To verify the necessity of protecting user features to get a better balance between privacy and accuracy, we compare the performance of DIPSGNN with other three variants of DIPSGNN. **Nonp** means no noise is added on user features or the training of GNN by setting \(\epsilon_{1}=\infty\) and \(\epsilon_{2}=\infty\). **Nonp-U** means no user features are exploited and user embedding is initialized by user id. Similarly, **Priv** means the normal DIPSGNN with perturbed user features and noise added on node embedding matrix during the training of GNN. And in **Priv-U**, perturbed user features are replaced by user id for initializing user embedding, but the same amount of noise is added on the node embedding matrix during the training of GNN. We show the experimental results in Figure 7 and have the following findings. In both non-private and private settings, adding user features can help to capture more accurate user preference and improve recommendation performance. It illustrates the importance of taking user side information, besides user-item interactions, into consideration for a better balance between privacy and accuracy.
## 6 Conclusions and future work
With the enactment of General Data Protection Regulation (GDPR), there is an urgent need to protect sensitive user information on various online platforms. And recommender system is the core component of online platforms, it takes advantage of rich personal information to provide personalized service. Therefore, its privacy preservation is of great concern to users and regulators. A privacy-preserving recommender system will greatly alleviate privacy concern and increase user engagement on the platform, thus promote the commercial profit and sustainable development of the platform.
Figure 5: Recall@20 of DIPSGNN and EdgeRand with different GNN aggregation steps \(T\).
Figure 6: Recall@20 of DIPSGNN with different embedding norm \(C\).
In this paper, we address how to protect sensitive user features and interactions concurrently without great sacrifice of accuracy in sequential recommender system. We propose a differentially private sequential recommendation framework named DIPSGNN. DIPSGNN protects sensitive user features by adding noise to raw features at input stage. The noise scale is determined by piecewise mechanism which can process numerical and categorical features to make them satisfy local differential privacy. And the post-processing property of differential privacy will guarantee that user features are always well protected in the recommendation algorithm. As for the protection on user interactions, we first transform interaction sequences into directed and weighted user behavior graphs. Then, user behavior graphs are fed into gated graph neural network to model sequential dependencies and user preference. In this graph neural network, we design a novel aggregation step to protect adjacency matrices of user behavior graphs, thus to protect user behaviors. Concretely, calibrated noise is added into the aggregation step to make it satisfy differential privacy with respect to adjacency matrices of user behavior graphs. And we empirically demonstrate the superiority of this aggregation perturbation method than conventional graph structure perturbation method for protecting user interactions. Besides, extensive experimental results on three datasets (ML-1M, Yelp and Tmall) evidence that our proposed DIPSGNN can achieve significant gains over state-of-the-art differentially private recommender systems.
For future work, we will extend our framework to other popular graph neural networks such as graph attention networks (GATs) [45] and GraphSAGE [64]. Besides, we are also interested in incorporating personalized privacy preferences into our framework, as users vary substantially in privacy attitudes in real life [65].
## Appendix A Proof of Theorem 1
Proof.: First, we prove Algorithm 1 satisfies \(\epsilon\)-local differential privacy.
In Algorithm 1, \(C=\frac{\exp(\epsilon/2)+1}{\exp(\epsilon/2)-1}\), \(l(x)=\frac{C+1}{2}\cdot x-\frac{C-1}{2},r(x)=l(x)+C-1\). If \(c\in[l(x),r(x)]\), then
\[\Pr(x^{\prime}=c|x) =\frac{\exp(\epsilon/2)}{\exp(\epsilon/2)+1}\cdot\frac{1}{r(x)-l( x)}\] \[=\frac{\exp(\epsilon)-\exp(\epsilon/2)}{2\exp(\epsilon/2)+2}=p.\]
Similarly, if \(c\in[-C,l(x))\cup(r(x),C]\), then
\[\Pr(x^{\prime}=c|x) =(1-\frac{\exp(\epsilon/2)}{\exp(\epsilon/2)+1})\cdot\frac{1}{2C- r(x)+l(x)}\] \[=\frac{\exp(\epsilon/2)-1}{2\exp(\epsilon)+2\exp(\epsilon/2)}= \frac{p}{\exp{(\epsilon)}}.\]
Then if \(x_{1},x_{2}\in[-1,1]\) are any two input values and \(x^{\prime}\in[-C,C]\) is the output of Algorithm 1, we have:
\[\frac{\Pr(x^{\prime}\mid x_{1})}{\Pr(x^{\prime}\mid x_{2})}\leq\frac{p}{p/\exp (\epsilon)}=\exp(\epsilon).\]
Figure 7: Performance of DIPSGNN compared with three ablation models.
Thus, Algorithm 1 satisfies \(\epsilon\)-LDP. In Algorithm 2, we let \(\epsilon=\frac{\epsilon_{1}}{k}\), so perturbation of numerical feature satisfies \(\frac{\epsilon_{1}}{k}\)-LDP. Analogously, we prove the perturbation of categorical features in Algorithm 2 satisfies \(\frac{\epsilon_{1}}{k}\)-LDP. Suppose \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are any two \(m\)-dimensional one-hot vector for perturbation and the output is \(\mathbf{x}^{\prime}\). In \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), \(\mathbf{x}_{1}[v_{1}]=1,\mathbf{x}_{2}[v_{2}]=1\)\((v_{1}\neq v_{2})\) and all other elements are \(0\). Let \(p=0.5\) and \(q=\frac{1}{\exp(\epsilon/k)+1}\), \(p>q\), we have:
\[\frac{\Pr(\mathbf{x}^{\prime}\mid\mathbf{x}_{1})}{\Pr(\mathbf{x} ^{\prime}\mid\mathbf{x}_{2})}=\frac{\prod_{i\in[m]}\Pr(\mathbf{x}^{\prime}[i ]\mid\mathbf{x}_{1}[i])}{\prod_{i\in[m]}\Pr(\mathbf{x}^{\prime}[i]\mid \mathbf{x}_{2}[i])}\] \[=\frac{\Pr(\mathbf{x}^{\prime}[v_{1}]\mid\mathbf{x}_{1}[v_{1}]) \Pr(\mathbf{x}^{\prime}[v_{2}]\mid\mathbf{x}_{1}[v_{2}])}{\Pr(\mathbf{x}^{ \prime}[v_{1}]\mid\mathbf{x}_{2}[v_{1}])\Pr(\mathbf{x}^{\prime}[v_{2}]\mid \mathbf{x}_{2}[v_{2}])}\] \[\leq\frac{\Pr(\mathbf{x}^{\prime}[v_{1}]=1\mid\mathbf{x}_{1}[v_{1 }]=1)\Pr(\mathbf{x}^{\prime}[v_{2}]=0\mid\mathbf{x}_{1}[v_{2}]=0)}{\Pr( \mathbf{x}^{\prime}[v_{1}]=1\mid\mathbf{x}_{2}[v_{1}]=0)\Pr(\mathbf{x}^{ \prime}[v_{2}]=0\mid\mathbf{x}_{2}[v_{2}]=1)}\] \[=\frac{p}{q}\cdot\frac{1-q}{1-p}=\exp(\frac{\epsilon_{1}}{k}).\]
Thus, the perturbation of categorical feature also satisfies \(\frac{\epsilon_{1}}{k}\)-LDP. As Algorithm 3 is composed of \(k\) times perturbation of numerical or categorical feature, and all of them satisfy \(\frac{\epsilon_{1}}{k}\)-LDP, so Algorithm 3 satisfies \(\epsilon_{1}\)-LDP based on the composition theorem of differential privacy.
## Appendix B Proof of Theorem 2
Proof.: The proof of Theorem 2 uses an alternative definition of differential privacy (DP), called Renyi Differential Privacy (RDP) [66] which is defined as follows,
**Definition 3** (Renyi Differential Privacy).: _A randomized algorithm \(\mathcal{A}\) is \((\alpha,\epsilon)\)-RDP for \(\alpha>1,\epsilon>0\) if for every adjacent datasets \(X\sim X^{\prime}\), we have:_
\[D_{\alpha}\left(\mathcal{A}(X)\|\mathcal{A}\left(X^{\prime}\right)\right)\leq\epsilon\]
_, where \(D_{\alpha}(P\|Q)\) is the Renyi divergence of order \(\alpha\) between probability distributions \(P\) and \(Q\), defined as:_
\[D_{\alpha}(P\|Q)=\frac{1}{\alpha-1}\log\mathbb{E}_{x\sim Q}\left[\frac{P(x)}{Q (x)}\right]^{\alpha}\]
A basic mechanism to achieve RDP is the Gaussian mechanism. Let \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\) and we first define the sensitivity of \(f\) as,
\[\Delta_{f}=\max_{X\sim X^{\prime}}\left\|f(X)-f\left(X^{\prime}\right)\right\| _{2}\]
Then, adding Gaussian noise with variance \(\sigma^{2}\) to \(f\) as:
\[\mathcal{A}(X)=f(X)+\mathcal{N}\left(\sigma^{2}\mathbf{e}_{d}\right),\]
where \(\mathcal{N}\left(\sigma^{2}\mathbf{e}_{d}\right)\in\mathbb{R}^{d}\) is a vector with each element drawn from \(\mathcal{N}(0,\sigma^{2})\) independently, yields an \((\alpha,\frac{\Delta_{f}^{2}\alpha}{2\sigma^{2}})\)-RDP algorithm for all \(\alpha>1\)[66].
As the analysis on \(\mathbf{A}_{u}^{out}\) and \(\mathbf{A}_{u}^{in}\) is the same, so we use \(\mathbf{A}\) to denote \(\mathbf{A}_{u}^{out}\) or \(\mathbf{A}_{u}^{in}\). If we delete an interaction from user \(u\)'s behavior sequence, \(\mathbf{A}_{u}^{out}\) and \(\mathbf{A}_{u}^{in}\) will both only change by 1 at a single position. Let \(\mathbf{A}\) and \(\mathbf{A}^{{}^{\prime}}\) be two neighboring adjacency matrix that only differ by 1 at a single position. Specifically, there exist two nodes \(p\) and \(q\) such that:
\[\begin{cases}|\mathbf{A}_{i,j}-\mathbf{A}_{i,j}^{{}^{\prime}}|=1&\text{if $i=p$ and $j=q$},\\ \mathbf{A}_{i,j}=\mathbf{A}_{i,j}&\text{otherwise}.\end{cases}\]
The sensitivity of sum aggregation step is:
\[||\mathbf{A}\cdot\bar{\mathbf{H}}^{(t-1)}-\mathbf{A}^{{}^{\prime}} \cdot\bar{\mathbf{H}}^{(t-1)}||_{F} \tag{14}\] \[=(\sum_{i=1}^{|V|}\|\sum_{j=1}^{|V|}(\mathbf{A}_{i,j}\bar{\mathbf{ H}}_{j}^{(t-1)}-\mathbf{A}_{i,j}^{{}^{\prime}}\bar{\mathbf{H}}_{j}^{(t-1)})\|_{2}^{2})^{1/2}\] \[=(\|\mathbf{A}_{p,q}\bar{\mathbf{H}}_{p}^{(t-1)}-\mathbf{A}_{p,q}^ {{}^{\prime}}\bar{\mathbf{H}}_{p}^{(t-1)}\|_{2}^{2})^{1/2}\] \[=\|(\mathbf{A}_{p,q}-\mathbf{A}_{p,q}^{{}^{\prime}})\bar{\mathbf{ H}}_{p}^{(t-1)}\|_{2}\] \[=\|\bar{\mathbf{H}}_{p}^{(t-1)}\|_{2}\] \[=C\]
where \(\tilde{\mathbf{H}}_{j}^{(t-1)}\) is the \(j\)-th row of row-normalized feature matrix \(\tilde{\mathbf{H}}^{(t-1)}\). The sensitivity of each aggregation step is \(C\) so it satisfies \((\alpha,C^{2}\alpha/2\sigma^{2})\)-RDP based on Gaussian mechanism. And, aggregation in DIPSGNN can be seen as an adaptive composition of \(T\) such mechanisms, based on composition property of RDP [66], the total privacy cost is \((\alpha,TC^{2}\alpha/2\sigma^{2})\)-RDP.
As RDP is a generalization of DP, it can be converted back to standard \((\epsilon,\delta)\)-DP using the following lemma.
**Lemma 3**.: _If \(\mathcal{A}\) is an \((\alpha,\epsilon)\)-RDP algorithm, then it also satisfies \((\epsilon+\log(1/\delta)/\alpha-1,\delta)-DP\) for any \(\delta\in(0,1)\)._
Therefore, \((\alpha,TC^{2}\alpha/2\sigma^{2})\)-RDP in DIPSGNN is equivalent to edge-level \((\epsilon_{2},\delta)\)-DP with \(\epsilon_{2}=\frac{TC^{2}\alpha}{2\sigma^{2}}+\frac{\log(1/\delta)}{\alpha-1}\). Minimizing this expression over \(\alpha>1\) gives \(\epsilon_{2}=\frac{TC^{2}}{2\sigma^{2}}+C\sqrt{2T\log(1/\delta)}/\sigma\). So we conclude that the aggregation in DIPSGNN satisfies edge-level \((\epsilon_{2},\delta)\)-DP with \(\epsilon_{2}=\frac{TC^{2}}{2\sigma^{2}}+C\sqrt{2T\log(1/\delta)}/\sigma\).
## Acknowledgment
This work is supported by Shanghai Rising-Star Program (Grant No. 23QA1403100), Natural Science Foundation of Shanghai (Grant No. 21ZR1421900), National Natural Science Foundation of China (Grant No. 72192832), Graduate Innovation Fund of Shanghai University of Finance and Economics (Grant No.CXJJ-2022-366) and the Program for Innovative Research Team of Shanghai University of Finance and Economics.
|
2308.16635 | MFR-Net: Multi-faceted Responsive Listening Head Generation via
Denoising Diffusion Model | Face-to-face communication is a common scenario including roles of speakers
and listeners. Most existing research methods focus on producing speaker
videos, while the generation of listener heads remains largely overlooked.
Responsive listening head generation is an important task that aims to model
face-to-face communication scenarios by generating a listener head video given
a speaker video and a listener head image. An ideal generated responsive
listening video should respond to the speaker with attitude or viewpoint
expressing while maintaining diversity in interaction patterns and accuracy in
listener identity information. To achieve this goal, we propose the
\textbf{M}ulti-\textbf{F}aceted \textbf{R}esponsive Listening Head Generation
Network (MFR-Net). Specifically, MFR-Net employs the probabilistic denoising
diffusion model to predict diverse head pose and expression features. In order
to perform multi-faceted response to the speaker video, while maintaining
accurate listener identity preservation, we design the Feature Aggregation
Module to boost listener identity features and fuse them with other
speaker-related features. Finally, a renderer finetuned with identity
consistency loss produces the final listening head videos. Our extensive
experiments demonstrate that MFR-Net not only achieves multi-faceted responses
in diversity and speaker identity information but also in attitude and
viewpoint expression. | Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han | 2023-08-31T11:10:28Z | http://arxiv.org/abs/2308.16635v1 | # MFR-Net: Multi-faceted Responsive Listening Head Generation via Denoising Diffusion Model
###### Abstract.
Face-to-face communication is a common scenario including roles of speakers and listeners. Most existing research methods focus on producing speaker videos, while the generation of listener heads remains largely overlooked. Responsive listening head generation is an important task that aims to model face-to-face communication scenarios by generating a listener head video given a speaker video and a listener head image. An ideal generated responsive listening video should respond to the speaker with attitude or viewpoint expressing while maintaining diversity in interaction patterns and accuracy in listener identity information. To achieve this goal, we propose the **M**ulti-**F**aceted **R**esponsive Listening Head Generation Network (MFR-Net). Specifically, MFR-Net employs the probabilistic denoising diffusion model to predict diverse head pose and expression features. In order to perform multi-faceted response to the speaker video, while maintaining accurate listener identity preservation, we design the Feature Aggregation Module to boost listener identity features and fuse them with other speaker-related features. Finally, a renderer finetuned with identity consistency loss produces the final listening head videos. Our extensive experiments demonstrate that MFR-Net not only achieves multi-faceted responses in diversity and speaker identity information but also in attitude and viewpoint expression.
listening head generation, image synthesis, denoising diffusion model +
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
+
Footnote †: ccs: AAM/12/03, October 29-November 3, 2023, Ottawa, ON, ON, Canada
face-to-face communication scenarios, the listener's role is also significant as that of the speaker. However, research on listener modeling remains less explored. Actually, the modeling of a listener is distinct from that of a speaker, as the former focuses on response to others while the latter primarily emphasizes lip synchronization. The responsive listening head generation task aims to produce new listener head videos given the speaker talking head video and listener identity head image. Apart from modeling the daily scenario, the technique can also be used in digital avatar generation in Metaverse, customer representatives, robot communication, virtual audience modeling, wherever involves responsive listeners.
Listening is a conditioned action to reflect human behavior according to the principle proposed in social psychology and anthropology [3]. The listener head videos should also contain the listener identity and opinion information at the same time. In light of the aforementioned background, there are multi-faceted requirements for the generation of responsive listener heads. 1) **Viewpoint Expression**: The generated responsive listener heads should convey certain viewpoints as a corresponding response to the speaker's head videos.Non-verbal behaviors such as nodding, smiling, frowning, head shaking, or neutral heads are generally used to convey those viewpoints. 2) **Speaker Interaction**: To achieve dynamic interaction between a speaker and listener in virtual communication, it is important that the pattern of listener motions exhibit a high correlation with the speaker's head video signal, as well as the speaker's attitude. The rhythm of the speaker's audio and the flow movement of their head can influence the action of the listener. Moreover, the attitude of the speaker, whether expressing agreement or disagreement, can prompt corresponding response actions in the listener, such as nodding, shaking, frowning, or other movements. 3) **Responsive Diversity**: For a given speaker video, listener identity, and attitude, there exist various natural response listener head videos. The expressive method and range of head movement may differ in every response scenario. In a virtual online meeting, it can be jarring if the avatars of each participant exhibit the same response head reactions, as this could detract from the sense of authenticity and spontaneity. 4) **Generation Naturalness**: The generated responsive listener heads should be of high image quality and free of any visual artifacts. Additionally, the identity information of the generated videos should match the given listener head image in order to ensure consistency and accuracy.
Recently, Zhou _et al._[52] explore this task and collect the audio-visual ViCo dataset containing pairs of speaker and listener videos. To guide the generation of listening heads, they utilize an LSTM-based model to process the input signal and output listener head pose features. Later, PCHG [16] post-process the generated frames with face segmentation model [32] to improve the stability of the background. However, the deterministic nature of their models fails to model the generation diversity, and the direct concatenation fusion between identity and speaker-related feature causes inaccurate identity preserving, leading to facial contour artifacts.
To tackle the aforementioned problems and meet the above multi-faceted requirements, we design the **M**ulti-**F**aceted **R**esponsive listening head generation network (MFR-Net). Based on the denoising diffusion model [15], MFR-Net is designed to predict the speaker's head pose and expression features. By leveraging the probabilistic nature of the denoising diffusion model, we manage to generate diverse listening head results. Except for the generation diversity, the
Figure 1. Example results generated by MFR-Net. Given speaker video, audio, listener identity image and specific attitude label, our method generates natural multi-faceted responsive listening head videos. Results #1 and #2 display diverse results indicating positive attitude (smiling and nodding) while Results #3 and #4 show neutral (calm) and negative (serious) results.
generated listening head videos should also interact with speakers with viewpoint or attitude expression and keep accurate listener identity information. To achieve the multi-faceted response, we propose the Feature Aggregation Module to embed the constraint conditions including speaker-related features, listener's attitude and listener identity information. The proposed module is applied to each denoising and diffusion process to predict the noise term. In this way, multi-faceted constraints can be integrated into the diverse generated results. Finally, the renderer trained with identity consistency loss is adopted to generate photo-realistic listener head images. As shown in Fig. 1, MFR-Net manages to achieve natural and diverse multi-faceted listening head generation with different listener attitudes and accurate listener identity preservation.
Our contributions are summarized as follows: 1) We propose MFR-Net, the first diffusion-based model for solving the task of generating responsive listener heads, and produces diverse and high-quality listener head videos. 2) The Feature Aggregation Module is designed to integrate speaker-related features, listener attitude and head image, leading to natural interaction with viewpoint expression and accurate listener identity information. 3) The state-of-the-art performance is achieved on the ViCo dataset in terms of visual quality, identity-preserving and generation diversity.
## 2. Related Work
### Responsive Listening Head Generation
Responsive Listening Head Generation aims to produce a head video of the listener, given a corresponding talking-head video of the speaker and face image of the listener. Zhou _et al._(Zhou et al., 2017) first propose this task and construct the ViCo dataset for evaluation. The proposed baseline utilizes LSTM to process the streaming input of visual and audio information of the speaker and produces facial 3DMM coefficients of the listener. Later, Huang _et al._(Huang et al., 2018) utilizes pre-trained foreground-background segmentation model U2Net (Wang et al., 2019) to fuse and improve the background of generated results. However, the above method could merely generate solitary listening head videos given certain speaker talking videos, while MFR-Net manages to produce diverse listening head videos.
### Audio-Driven Head Synthesis
Audio-driven head synthesis produces lip-sync talking head videos given source faces and driving speech signals. Some previous methods (Huang et al., 2018; Liu et al., 2019; Wang et al., 2019; Wang et al., 2019) generated talking head videos of specific identity used in the training process. Guo _et al._(Guo et al., 2019) utilize NeRF-based (Wang et al., 2019) network to model the head and torso separately and combine two generated parts. Zhang _et al._(Zhang et al., 2019) perform 3D face reconstruction over each frame in the video and generate new expression coefficients to control mouth shape. Though the aforementioned techniques preserve accurate source identity, they necessitate requiring high-quality videos of each subject for minutes to hours and only produce a limited range of identities, resulting in significant limitations on their applicability and generalization.
Therefore, some recent methods (Huang et al., 2018; Liu et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) try to relieve the identity restriction and explore to generate any source subject. Prabjwal _et al._(Prabjwal et al., 2019) adopt a pre-trained lip-sync discriminator to improve the lip-sync quality of generated talking head videos. Alghamdi _et al._(Alghamdi et al., 2019) model each frame into the latent space of StyleGAN, map audio signals into displacements in the space and generate final talking images. Though the above methods have no identity mismatch problem since they merely edit the mouth areas, they generate unnatural talking heads given the still facial parts.
Later, to improve the diversity and naturalness of talking heads, current methods (Huang et al., 2018; Liu et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) explore to generate videos with head pose changes. Some methods (Huang et al., 2018; Liu et al., 2019; Wang et al., 2019) rely on auxiliary pose video to provide explicit guidance of pose sequences, while others (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) explore to infer pose sequence from the audio signal. Among them, OPT (Huang et al., 2018) try to solve the identity mismatch problem by disentangling identity and content feature from the audio signal to eliminate the effect of audio identity. To improve the diversity of generated results, some works (Huang et al., 2018; Wang et al., 2019) utilize the VAE structure to map the audio signal into diverse pose sequences. However, MFR-Net shows diversity not only in head poses, but also in facial expression parts as well.
### Diffusion Generative Models
Denoising diffusion probabilistic model (Huang et al., 2018) is first proposed for unconditional image generation. Due to the stochastic property of the initialized noise in the reversion process, it manages to generate images of great diversity and soon becomes popular in a variety of creativity-oriented generation tasks. To name a few, GLIDE (Huang et al., 2018) introduces text conditions and proves the effectiveness of classifier-free guidance. DALLE-2 (Wang et al., 2019) further generate semantics-consistent images conditioned on CLIP (Wang et al., 2019) guidance. Later, Latent Diffusion Models (Wang et al., 2019) perform the diffusion process in latent space rather than pixel space to improve the efficiency. The above works could generate diverse and natural images. However, creative as they are, diffusion models adopted in such creativity-oriented tasks are not stable enough for responsible listening head generation, which aims to generate a natural video of a fixed listener while keeping identity information unchanged. Our proposed MFR-Net turns to generate facial coefficient features and impose explicit identity restrictions over the generation process to solve the above problems.
## 3. Method
Given speaker video clip \(V_{s}\) containing visual and audio information, listener head image \(I_{l}\) and attitude label, MFR-Net aims to produce multi-faceted responsive listening head videos. An overview of MFR-Net is shown in Fig. 2, which contains four major components. The diffusion process and denoising process act as training and inference modules, respectively. The Feature Aggregation Module receives the inputs containing speaker video \(V_{s}\), listener head image \(I_{l}\), listener attitude and last intermediate state in the denoising or diffusion process, then adopts attention-based blocks to predict noise of current step \(t\). The output of the denoising process is listener head pose and expression features, which will be fed into the generator to generate new listener head videos.
### Denoising Diffusion Model for Feature Generation
Previous responsive listener head generation methods adopt deterministic LSTM-based modules to predict listener head features.
They lack response diversity, which is the key factor in natural face-to-face interaction. To tackle the problem, we build our generation pipeline based on probabilistic denoising diffusion models.
The denoising diffusion model [15] is adopted to denoise a Gaussian noise step-by-step and finally generate the listener head pose and expression features. The form is as follows: \(p_{\theta}\left(\mathbf{x}_{0}\right):=\int p_{\theta}\left(\mathbf{x}_{0:T} \right)d\mathbf{x}_{1:T}\), where \(x_{0}\) is the real data of listener head pose and expression features and \(x_{1},...,x_{T}\) are the latent data of the intermediate state. Specifically, the joint distribution \(p_{\theta}\left(\mathbf{x}_{0:T}\right)\) is called the denoising process, which is defined as a Markov chain with learned Gaussian transitions starting at \(p\left(\mathbf{x}_{T}\right)=\mathcal{N}\left(\mathbf{x}_{T};\mathbf{0}, \mathbf{I}\right)\):
\[p_{\theta}\left(\mathbf{x}_{0:T}\right):=p\left(\mathbf{x}_{T}\right)\prod_{t =1}^{T}p_{\theta}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right), \tag{1}\]
\[p_{\theta}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right):=\mathcal{N}\left( \mathbf{x}_{t-1};\mu_{\theta}\left(\mathbf{x}_{t},t\right),\Sigma_{\theta} \left(\mathbf{x}_{t},t\right)\right).\]
Correspondingly, the diffusion process is the approximate posterior \(q(x_{1:T}|\mathbf{x}_{0})\), which is also designed to fix a Markov chain that gradually adds Gaussian noise to the data according to a variance schedule \(\beta_{1},...,\beta_{T}\):
\[q\left(\mathbf{x}_{1:T}\mid\mathbf{x}_{0}\right):=\prod_{t=1}^{T}q\left( \mathbf{x}_{t}\mid\mathbf{x}_{t-1}\right), \tag{2}\]
As shown in the diffusion part in Fig. 2, during training, noise \(\epsilon\) and uniformly sampled time step \(t\) are utilized to generate latent features. Given that the diffusion process admits sampling \(x_{t}\) at an arbitrary time step \(t\), instead of repeatedly adding noises to each intermediate state, the diffusion process can be further denoted as \(q\left(\mathbf{x}_{t}\mid\mathbf{x}_{0}\right)=\mathcal{N}\left(\mathbf{x}_{t };\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},\left(1-\bar{\alpha}_{t}\right) \mathbf{I}\right)\), where \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{t=1}^{T}\alpha_{t}\). To improve the performance, following iDDPM [27], we choose to predict noise term \(e\) instead of predicting latent feature. To integrate each input feature and achieve multi-faceted generation, we design the Feature Aggregation Module (Sec. 3.2) instead of using traditional U-Net to predict the noise term. Hence, the loss term used to optimize the model parameters is as follows:
\[\mathcal{L}_{\text{noise}}=E_{t,\mathbf{x}_{0},\epsilon}\left[\|e-e_{\theta} \left(x_{t},t,input\right)\|^{2}\right]. \tag{3}\]
Furthermore, as shown in Equation 1, to generate new samples through the denoising process, \(\mu_{\theta}\) and \(\Sigma_{\theta}\) are demanded. Following Ho _et al._[15], \(\Sigma_{\theta}\) is set to a constant number and \(\mu_{\theta}\) is:
\[\mu_{\theta}\left(\mathbf{x}_{t},t,input\right)=\frac{1}{\sqrt{\alpha_{t}}} \left(\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{ \theta}\left(\mathbf{x}_{t},t,input\right)\right). \tag{4}\]
In this way, we manage to denoise the sampled noise step-by-step and finally generate new features, which are conditioned on the given input, as shown in the pink part in Fig. 2.
Figure 2. _Overview of MFR-Net._ MFR-Net adopts the denoising diffusion model to predict new listener features, which are fed into the generator along with listener image \(I_{l}\) to produce responsive listening heads. The Feature Aggregation Module is designed to predict the noise used in the denoising diffusion model, given speaker clip \(V_{s}\), listener image \(I_{l}\), listener attitude \(F_{l}^{att}\), intermediate feature \(x_{t}\) and time step \(t\). The brown and black arrows indicate the training and inference process, respectively.
### Feature Aggregation Module
In Sec. 3.1, we show the denoising diffusion model as the listener feature generator, then we will illustrate the neural network for predicting the noise term \(\epsilon_{\theta}(x_{t},t,input)\). Specifically, the input includes identity features extracted from the face recognition model, pose and expression features reconstructed from 3DMM, audio features, listener attitudes and latent state features. Unlike traditional denoising diffusion models (Han et al., 2015; Liu et al., 2016; Wang et al., 2017; Wang et al., 2018) using the U-Net (Wang et al., 2017) as basic module, to deal with input audio and pose features of variable length, we design our Feature Aggregation Module based on Transformer (Wang et al., 2017) technique.
Furthermore, previous responsive listener head generation methods directly concatenate listener head features and other driving features, giving the same focus on features of different importance. In this way, responsive listener heads with inaccurate facial contour and the identity mismatch problem are generated. To alleviate the problem, we hope to enhance the speaker information into driving features by aggregating its most compatible identity features.
Firstly, our objective is to enhance the input features and obtain a comprehensive understanding of the existing identity feature, which guides the generation of the noise term. Taking inspiration from the traditional face swapping work FaceShifter (Srivastava et al., 2015), we design our approach by incorporating the identity information into the latent feature map using the SPADE-like (Srivastava et al., 2015) module. This integration ensures that the explicit identity information is embedded throughout the entire generation process, thereby effectively preserving the relevant information. Then we draw inspiration from the implementation of Transformer (Wang et al., 2017) and utilize the multi-head self-attention module. Specifically, details of this process can be formulated as follows:
\[\text{Attention}(Q,K,V)=\text{Softmax}\left(\frac{QW_{q}\left(KW_{k}\right)^{T }}{\sqrt{C}}\right)VW_{0}, \tag{5}\]
where \(W_{q},W_{k},W_{o}\) are projection parameters and \(Q,K,V\) are the query, key, and value respectively. During feature enhancing, we employ different projection weights on the combination of the identity feature and latent feature.
Secondly, to fusion the listener identity information into speaker driving features of another identity, we adopt the feature fusion module to build the correlation between input processed identity features and driving speaker features. The detailed formulation is the same as Equation 5 except that the key and value are from features from speaker clip \(V_{s}\), listener attitude \(F_{l}^{att}\) and time step \(t\).
Finally, the feed-forward module is adopted to predict the noise term. Here we omit the residual connection and LayerNorm (Chen et al., 2015) operation in each formula for simplification. The loss term in Equation 3 is utilized to train the module. Thanks to the Feature Aggregation Module, the listener identity information is enhanced and injected into each driving feature, thus helping to generate accurate listener features in the denoising process.
### Generator
To improve the inter-frame coherence, the listener head pose features are obtained clip by clip with a fixed window and stride length through the above module. After obtaining the generated listener features, we adopt the state-of-the-art face reenactment model PIRenderer (Wang et al., 2017) to produce new listener heads. To further alleviate the identity mismatch problem, we re-train the model and add another identity restriction loss apart from the original perceptual loss, style loss and GAN loss.
\[\mathcal{L}_{identity}=\left\|V(\tilde{I}_{t})-V(I_{t})\right\|_{1}, \tag{6}\]
where \(V\) denotes the VGGFace (Chen et al., 2015) model to extract identity features. In this way, given generated listener pose and expression feature \(\{\tilde{F}_{l}^{P},\tilde{F}_{l}^{e}\}\) and listener head image \(I_{l}\), new responsive listener head image \(\tilde{I}_{l}\) is generated.
## 4. Experiment
### Experimental Settings
#### 4.1.1. Dataset
Our method is evaluated both quantitatively and qualitatively on the ViCo dataset (Wang et al., 2017), which is uniquely suited to our task. This dataset features 483 video clips capturing face-to-face interactions between two realistic subjects in a natural environment, with a total of over 0.1 million frames. Specifically, it includes the identities of 76 listeners and 67 speakers, and each response is manually annotated as positive, neutral, or negative attitude. As the only audio-visual dataset of its kind, ViCo provides an ideal benchmark for evaluating our approach.
#### 4.1.2. Comparison Methods
Two responsive listener head generation methods are adopted as comparing methods: **ViCo**(Wang et al., 2017) adopts the LSTM-based Sequential Decoder to predict the pose and expression features of the listener subject and renders them into new responsive heads. **PCHG**(Liu et al., 2016) post-process the generated videos with a segmentation model to improve the stability of the background. Each method is trained on the training set \(\mathcal{D}_{train}\) of ViCo, and further evaluated on the test set \(\mathcal{D}_{test}\) and out-of-domain set \(\mathcal{D}_{ood}\). Specifically, all identities in \(\mathcal{D}_{test}\) have appeared in \(\mathcal{D}_{train}\) while identities in \(\mathcal{D}_{ood}\) have no overlap with ones in \(\mathcal{D}_{train}\).
#### 4.1.3. Implementation Details
The face video frames are cropped to \(256\times 256\) size at 30 FPS and the audio signals are extracted into 45-dimensional acoustic features, including 14-dim mel-frequency cepstral coefficients (MFCC), 28-dim MFCC-Delta, energy, Zero Crossing Rate and loudness. The listener attitude information is denoted as the one-hot label. The window length of the speaker clip is set to be 40 frames with a sliding window length of 20 frames. As for model details, we utilize multi-head attention with 8 heads and 4 layers. The identity feature is extracted from the face recognition model (Chen et al., 2015), while the pose and expression features are from the angles and translation coefficients of face 3DMM reconstruction operation (Chen et al., 2015) on the frames in each video. All experiments use an NVIDIA V100 GPU with 32 GB memory.
### Quantitative Evaluation
#### 4.2.1. Evaluation Metrics
In order to evaluate the precision of the generated speaker pose and expression features, we adopt the evaluation methodology utilized by ViCo (Wang et al., 2017), which involves measuring the \(L_{1}\) distance between the generated features and their corresponding ground-truth features (FD). These features are extracted from 3D facial reconstruction data, where the angle and
translation (trans) feature track changes in head pose, while the expression (exp) feature captures variations in facial movements.
To perform a comprehensive evaluation of the video-level performance, we adopt Fr'echet Inception Distance (FID) (Liu et al., 2019), Structural Similarity (SSIM) (Wang et al., 2019), Peak Signal-to-Noise Ratio (PSNR), and Cumulative Probability of Blur Detection (CPBD) (Chen et al., 2019). Additionally, to assess the quality of identity preservation, we utilize cosine similarity (CSIM) between identity features extracted from Vggface2 (Chen et al., 2019) on generated and ground truth images. Furthermore, in order to validate the diversity of the generated head motions, we compute the standard deviation of head motion feature embeddings extracted from the generated frames using Hopenet (Wang et al., 2019), which follows the methodology of SadTalker (SadTalker, 2019).
#### 4.2.2. Evaluation Results
Table 1 presents the quantitative comparison results for both \(\mathcal{D}_{test}\) and \(\mathcal{D}_{ood}\) subsets of the ViCo dataset, with evaluations performed on head pose features including angle, expression, and translation. It is worth noting that \(\mathcal{D}_{test}\) shares listener identity overlap with the training set (\(\mathcal{D}_{train}\)), while \(\mathcal{D}_{ood}\) does not. Results are presented for three different attitudes, as well as their average values.
on \(\mathcal{D}_{test}\). In contrast, MFR-Net maintains relatively competitive performance, demonstrating its exceptional ability to generalize on unseen identities. These outstanding results are attributable to the proposed Feature Aggregation Module, which enhances and fuses identity information with speaker-related features.
Table 2 presents image-level metrics for both \(\mathcal{D}_{ood}\) and \(\mathcal{D}_{test}\). The results demonstrate that MFR-Net outperforms other methods in terms of CSIM and Diversity scores, indicating superior preservation of identity information and generation of diverse images. Moreover, our method achieves competitive results with respect to image quality. Although PCHG achieves a slightly higher PSNR score due to its complicated post-processing on background pixels, MFR-Net generates images with accurate head pose features and natural-looking diverse interaction patterns.
### Qualitative Evaluation
In this section, we present the qualitative results from the generated responsive listening head frames using each method. The results are depicted in Fig. 3. Our findings reveal that MFR-Net offers a reasonable response that may differ from the ground-truth, yet remains coherent. In comparison, other methods such as ViCo fail to maintain accurate facial contours and generate visible artifacts, while PCHG produces stiff response head videos that lack interactive features with the speaker video. Conversely, our approach ensures the preservation of accurate identity information with no visible artifacts. Additionally, the generated videos appear more visually plausible with natural and synchronized head motions and corresponding attitudes. For a detailed video-format comparison, please refer to the supplementary material.
### User Study
We conduct user studies to evaluate the performance of all methods. We randomly choose 10 speaker videos and 10 listeners. For each cross-combination, three responsive videos of each attitude are generated. This process results in 300 generated videos for each method. To assess the quality, we asked 10 participants to watch the videos and choose the best method based on overall naturalness, motion diversity, identity preservation, and attitude matching quality. The results of the study are presented in Table 3. MFR-Net outperformed all other methods in all aspects, especially with regard to motion diversity and identity preservation. These findings indicate the superiority of our proposed Feature Aggregation Module and the effectiveness of our model.
### Further Analysis
#### 4.5.1. Ablation Study
To assess the effectiveness of the designed components in MFR-Net, we conduct the ablation study on the _denoising diffusion model_ (DIFF), the _Feature Aggregation Module_ (_FAM_) and whether to utilize the _identity loss_\(\mathcal{L}_{identity}\) in the renderer. Specifically, we compare our method without DIFF, which employs a simple LSTM-based model as the backbone for predicting noise terms, and our method without FAM, which concatenates all features directly and inputs them into the diffusion model. The results of both the qualitative and quantitative analyses are presented in Fig. 4 and Table 4, respectively.
The experimental results demonstrate that incorporating the denoising diffusion model into MFR-Net leads to a significant improvement in generation diversity, owing to its probabilistic nature, while also ensuring the accurate synthesis of head features. Moreover, the Feature Aggregation Module and the identity consistency loss effectively preserve the precise listener identity information. The image-level results also attest to the effectiveness of our approach. Notably, when generating previously unseen identities in the out-of-distribution dataset \(\mathcal{D}_{ood}\), the FAM enhances the identity information and fuses it with other speaker-related features, resulting in natural-looking listening head videos. It is worth highlighting that this increased diversity in generated responses does not come at the cost of reduced accuracy in head feature synthesis.
#### 4.5.2. Interaction Patterns
As highlighted in Section 1, generating responsive listening heads that can facilitate interaction is critical. To this end, we present several typical patterns generated by MFR-Net, as illustrated in Fig. 5. In the upper pairs, we observe that when listeners intend to show agreement, they tend to nod their heads or smile. Similarly, the lower pairs demonstrate how MFR-Net can model disagreement by shaking heads or expressing
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Method & angle \(\downarrow\) & exp \(\downarrow\) & trans \(\downarrow\) & CSIM \(\downarrow\) & Diversity \(\uparrow\) \\ \hline w/o DIFF & 7.48 & 14.89 & 6.39 & 0.21 & 0.15 \\ w/o FAM & 7.52 & 18.27 & 7.49 & 0.27 & 0.26 \\ w/o \(\mathcal{L}_{identity}\) & - & - & - & 0.20 & 0.28 \\ Ours & 7.47 & 14.04 & 6.20 & 0.18 & 0.28 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation study for each proposed component in MFR-Net tested on both \(\mathcal{D}_{ood}\) and \(\mathcal{D}_{test}\) of ViCo dataset.
Figure 4. Qualitative results of ablation study on denoising diffusion model (DIFF), Feature Aggregation Module (FDM) and the identity consistency loss \(\mathcal{L}_{identity}\).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & Overall & Motion & Identity & Attitude \\ & Naturalness & Diversity & Preserving & Matching \\ \hline ViCO [(52)] & 18.3\% & 11.7\% & 13.3\% & 29.7\% \\ PCHG [(16)] & 36.6\% & 25.7\% & 25.0\% & 33.7\% \\ Ours & **49.0\%** & **62.7\%** & **61.7\%** & **36.7\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3. User study results on ViCo dataset.
neutrality through stationary head movements. These results indicate that MFR-Net is capable of generating diverse responses that can effectively facilitate interaction.
#### 4.5.3. Generation Diversity
The denoising diffusion model within MFR-Net exhibits a probabilistic property that enables the generation of a diverse range of responsive listening heads, as illustrated in Fig. 6. Instead of producing entirely random output, MFR-Net generates varied results based on the listener identity image, speaker video, and attitude label. Depicted in Fig. 6, MFR-Net produces a frowning expression or a serious demeanor to convey negative attitudes. By leveraging this probabilistic mechanism, MFR-Net achieves greater flexibility and effectiveness in generating responses that accurately reflect the given conditions.
#### 4.5.4. Attitude Analysis
To validate the response on attitude label, we show results generated by MFR-Net conditioned by the same speaker video and listener but different attitude labels, as shown in Fig. 7, where two distinct categories of output are displayed. Our analysis shows that the facial expressions and head motions generated by MFR-Net are highly expressive and distinguishable across various attitude labels, indicating the model's ability to produce multi-faceted responses. These findings contribute to a growing body of evidence supporting the efficacy of utilizing MFR-Net for generating natural responses with respect to human attitudes.
### Limitation
While MFR-Net shows promising results in generating realistic and diverse listening head videos given a speaker video and listener head image, there are still some limitations that need to be addressed. One of the limitations is that 3DMMs do not model the variation of eyes and teeth, which may cause difficulties in synthesizing the finer details of teeth. Additionally, we only consider the speaker frames and audio signal without taking into account the semantic information contained in speech, such as speaker emotions and viewpoints. In future works, we plan to incorporate these factors to build a more realistic and highly interpretable system.
## 5. Ethical Considerations
MFR-Net is designed for modeling face-to-face communication scenarios and can be potentially utilized in world-positive use cases and applications, like digital avatar conversation and virtual online meetings. In case of misuse of the proposed method, we strongly support all relevant safeguarding measures against such malicious applications. We believe the proper usage of this technique will enhance the development of artificial intelligence research and relevant multimedia applications.
## 6. Conclusion
This paper proposes a novel method for generating responsive listening head videos. Our proposed MFR-Net utilizes the probabilistic denoising diffusion model to predict the listener head pose and expression features, thereby generating diverse and natural results. To achieve high-quality outputs with accurate response to speaker video while expressing certain attitude and preserving the listener identity, the Feature Aggregation Module is introduced, which enhances and fuses the multi-faceted responsive features. The proposed method is evaluated both quantitatively and qualitatively, and the experimental results demonstrate its superiority in generating precise and diverse listener head responses.
## Acknowledgement
This research is supported in part by the National Key Research and Development Program of China (2020AAA0140000), and the National Natural Science Foundation of China (No. 61702502).
Figure 5. Visual interaction patterns in MFR-Net. The upper pairs show nodding and smiling for agreement, while the lower pairs display shaking for disagreement and a neutral attitude.
Figure 6. Visual results of diverse generation. To convey a negative attitude, frowning or serious emotion can be produced by MFR-Net.
Figure 7. Visual results generated by MFR-Net conditioned by same speaker video and listener but different attitudes. |
2309.08722 | Constituency Parsing as an Instance of the M-monoid Parsing Problem | We consider the constituent parsing problem which states: given a final state
normalized constituent tree automaton (CTA) and a string, compute the set of
all constituent trees that are inductively recognized by the CTA and yield the
string. We show that this problem is an instance of the M-monoid parsing
problem. Moreover, we show that we can employ the generic M-monoid parsing
algorithm to solve the constituency parsing problem for a meaningful class of
CTA. | Richard Mörbitz | 2023-09-15T19:14:28Z | http://arxiv.org/abs/2309.08722v1 | # Constituency Parsing as an Instance of
###### Abstract
We consider the constituent parsing problem which states: given a final state normalized constituent tree automaton (CTA) and a string, compute the set of all constituent trees that are inductively recognized by the CTA and yield the string. We show that this problem is an instance of the M-monoid parsing problem. Moreover, we show that we can employ the generic M-monoid parsing algorithm to solve the constituency parsing problem for a meaningful class of CTA.
## 1 Introduction
Constituency, sometimes also referred to as phrase structure, is an important aspect of natural language processing (NLP). Given a phrase of natural language, the task of constituency parsing consists in computing a tree-like structure which describes the syntactic composition of the phrase. These structures are usually visualized as trees where the words of the phrase occur as leaves. Figure 1 (left) shows such a constituent tree for the German phrase "hat schnell gearbeitet". The ordering of the phrase is indicated below the tree where dashed lines link the leaves to their corresponding positions in the phrase. A special phenomenon that may occur in the scope of constituency parsing are discontinuous constituents. These span non-contiguous parts of a phrase; for instance, cf. the constituent labeled V which spans the sub-phrases "hat" and "gearbeitet" in our example. In the usual illustration, discontinuity manifests itself by crossing lines between the leaves of the tree and the ordering of the phrase.
Usual formal models employed in NLP, such as context-free grammars (CFG) and finite-state tree automata (FTA), are not adequate for modeling discontinuous constituents. This problem has been solved on the grammar side by exploring more powerful grammar formalisms such as tree adjoining grammars (TAG; [8]) and linear context-free rewriting systems (LCFRS; [16, 9]). On the automaton side, hybrid tree automata [3] have recently been introduced. In this context, hybrid trees are usual trees where labels can be extended by a positive number, called _index_, which indicates their position in the phrase. (Each index may only occur once per hybrid tree.) Thus, constituent trees are a particular type of hybrid trees where a label has an index if and only if it occurs at a leaf position. Cf. the tree \(\xi\) in Fig. 1 (center) which corresponds to the constituent tree from above. The previously mentioned discontinuity in its first subtree is resembled by the fact that the set of indices occurring in this subtree is not contiguous. Given a constituent tree, we can obtain its phrase by reading off the labels at the leaves in the order of their indices; we call this operation _yield_. Non-contiguous indices lead to phrases with gaps that are formalized using a comma. For instance, the first subtree of \(\xi\) (whose root is labeled by V) yields the string tuple \((\mathrm{hat},\mathrm{gearbeitet})\). In contrast to this formalization of constituent trees, the usual representation of constituent trees in NLP does not feature indices and is thus more abstract.
We briefly recall the automaton model of [3]. In essence, a hybrid tree automaton (HTA) is an FTA where each transition additionally has an _index constraint_ which describes the acceptable combinations
of indices. Such a constraint may refer to both the indices occurring in the subtrees of the position where the transition is applied and the index occurring at that position itself. If unrestricted, these general constraints lead to an overly expressive automaton model. This is why [3] also introduced constituent tree automata (CTA) as a restricted form of HTA to recognize languages of constituent trees. Here, the index constraints are given by _word tuples_ as they occur in LCFRS. For instance, the word tuple \((x_{1}^{1}x_{2}^{1}x_{1}^{2})\) states that the indices of the first subtree form two separate intervals, i.e., sets of contiguous numbers, referred to by \(x_{1}^{1}\) and \(x_{1}^{2}\), and the indices of the second subtree (\(x_{2}^{1}\)) lie in between. Thus, discontinuous constituents can also be modeled. In essence, a CTA is final state normalized if the constituent trees it recognizes may only yield contiguous phrases (of course, there may be discontinuity in the subtrees). Drewes et al. (2022)[3] showed that the yields of the languages inductively recognized by final state normalized CTA are equal to the languages generated by LCFRS. Thus, CTA provide a meaningful framework for specifying constituency analyses. They did, however, not tackle the following problem which we call the _constituency parsing problem_: given a final state normalized CTA \(\mathcal{A}\) and a string \(u\), compute the set of all constituent trees that are inductively recognized by \(\mathcal{A}\) and yield \(u\). In this paper, we will solve this problem by showing that it is an instance of the M-monoid parsing problem to which the generic M-monoid parsing algorithm can be applied, provided that \(\mathcal{A}\) fulfils a certain condition.
M-monoid parsing [13, 14] is an algebraic framework for weighted parsing. Its kernel is a _weighted RTG-based language model_ (wRTG-LM) \(\bar{G}\); each wRTG-LM consists of a regular tree grammar (RTG) \(\mathcal{G}\), a \(\Gamma\)-algebra \((\mathcal{L},\phi)\) called _language algebra_, a complete M-monoid \(\mathbb{K}\) called _weight algebra_, and a weight mapping \(wt\) from the set of rules of \(\mathcal{G}\) to the signature of \(\mathbb{K}\). Moreover, the terminal alphabet of \(\mathcal{G}\) is required to be a subset of \(\Gamma\). The algebraic computations are based on the abstract syntax trees (ASTs) of \(\mathcal{G}\); these are trees over rules which represent valid derivations. In the language algebra, each AST can be evaluated to an element of \(\mathcal{L}\) by first projecting it to a tree over \(\Gamma\) and then applying the unique homomorphism from the \(\Gamma\)-term algebra to \(\mathcal{L}\). In the weight algebra, each AST can be evaluated to an element of \(\mathbb{K}\) by first applying \(wt\) to every rule and then applying the unique homomorphism from the \(\Omega\)-term algebra to \(\mathbb{K}\).
The _M-monoid parsing problem_ states the following: given a wRTG-LM \(\bar{G}\) and an element \(u\in\mathcal{L}\) of the language algebra, compute the sum of the weights (in \(\mathbb{K}\)) of all ASTs of \(\mathcal{G}\) which have the initial nonterminal as the left-hand side of the rule in their root and evaluate to \(u\) in the language algebra.
Our first contribution is the instantiation of the M-monoid parsing problem to constituency parsing. In attempting this instantiation, the constituent trees by [3] turn out to be not suitable for such an algebraic
Figure 1: Left: constituent tree for the German phrase “hat schnell gearbeitet”. It is discontinuous as the phrase of the left subtree is interleaved with a word from the right subtree. Center: formalization of this constituent tree in the framework of [3]. Right: AST of an RTG and its evaluation in the algebras of our M-monoid parsing problem.
framework. Instead, we introduce _partitioned constituent trees_ which are inspired by Nederhof and Vogler (2014) [15] (also cf. [6]). They are tuples consisting of a (usual) tree, a strict total order on its leaves, and a partitioning of its leaves. Compared to constituent trees, partitioned constituent trees abstract from particular indices and only preserve information about the order of the leaves and their groupings (where leaves with consecutive indices fall into the same component of the partitioning). This is also closer to the usual notion of constituent trees in NLP. The yield of partitioned constituent trees is defined analogously to constituent trees: now, the order of the labels is determined by the total order on the leaves and commas are placed between labels whose positions belong to different subsets of the partitioning.
The main part of the instantiation is the following construction. Given a final state normalized CTA \(\mathcal{A}\), we construct a wRTG-LM \(\bar{G}\) such that the M-monoid parsing problem for \(\bar{G}\) is equal to the constituency parsing problem for \(\mathcal{A}\). For the definition of \(\bar{G}\), we introduce two algebras: one for computing partitioned constituent trees and one for computing their yield. Both use the same signature \(\Gamma\) where each operator consists of a symbol from some ranked alphabet \(\Sigma\) and a word tuple. The first algebra, called _constituent tree algebra_, operates on partitioned constituent trees over \(\Sigma\) by performing top concatenation on their tree components (using the operator's symbol from \(\Sigma\)) and merging their total orders and partitionings using the operator's word tuple. The second one, called _constituent tree yield algebra_, operates on \(\Sigma\)-string tuples by combining them using the operator's word tuple in the same way as the language generated by an LCFRS is computed. Both algebras are many-sorted to ensure that the operators and the arguments fit.
Now, given a CTA \(\mathcal{A}\), we define the \(\mathcal{A}\)-wRTG-LM \(\bar{G}\) as follows. Its RTG \(\mathcal{G}\) is a syntactical variant of \(\mathcal{A}\), where the nonterminals, terminals, and the initial nonterminal of \(\mathcal{G}\) are the states of \(\mathcal{A}\), a particular subset of \(\Gamma\), and the final state of \(\mathcal{A}\), respectively. Each transition \((q_{1}\cdots q_{k},a,e,q)\) of \(\mathcal{A}\) becomes a rule \(q\to(a,e)(q_{1},\ldots,q_{k})\) in \(G\). Moreover, the language algebra of \(\bar{G}\) is the constituent tree yield algebra, the weight algebra is the constituent tree algebra lifted to sets, and the weight mapping maps each rule to its \(\Gamma\)-symbol. This leads to the following M-monoid parsing problem. Given \(\bar{G}\) and \(u\in\Sigma^{*}\), compute the set of all partitioned constituent trees that are results of evaluating an AST \(d\) of \(\mathcal{G}\) in the constituent tree algebra, provided that \(d\) evaluates to \(u\) in the constituent tree yield algebra. We can prove that this set equals, modulo particular indices, the set of all constituent trees that are inductively recognized by \(\mathcal{A}\) and yield \(u\). Since adding (resp. removing) these indices is trivial, the M-monoid parsing problem for \(\bar{G}\) and \(u\) is equivalent to the constituency parsing problem for \(\mathcal{A}\) and \(u\). In Figure 1 we indicate the result of this construction by showing an AST \(d\) of some RTG \(\mathcal{G}\) which could be obtained as the result of the above construction for a given CTA \(\mathcal{A}\) that inductively recognizes \(\xi\). Moreover, we show the evaluation of \(d\) in the constituent tree yield algebra as well as in the constituent tree algebra (where the partitioned constituent tree is shown via the typical illustration of constituent trees in NLP) via the homomorphisms \((\cdot)_{\mathrm{Y}}\) and \((\cdot)_{\mathcal{G}\mathcal{T}}\), resp. (Here, \((\cdot)_{\Gamma}\) projects rules to their \(\Gamma\)-symbols.)
Our second contribution concerns the applicability of the generic M-monoid parsing algorithm [14] to the M-monoid parsing problem defined above. We find that the algorithm is in general not applicable, where the problem lies in monadic cycles: if the CTA \(\mathcal{A}\) contains transitions of the form \((q_{1},a_{1},e_{1},q_{2})\), \((q_{2},a_{2},e_{2},q_{3})\),..., \((q_{n},a_{n},e_{n},q_{1})\), then termination of the algorithm is not guaranteed. Otherwise, if \(\mathcal{A}\) is free of such cycles, the M-monoid parsing algorithm is applicable to the M-monoid parsing problem constructed from \(\mathcal{A}\) and thus solves the constituency parsing problem for \(\mathcal{A}\).
This paper is structured as follows. In Section 2 we fix the basic notions and repeat some mathematic foundations, especially from the area of algebra. In Sections 3 and 4, we recall the central ideas of CTA and the M-monoid parsing problem, respectively. In Section 5, we detail the definition of the wRTG-LM \(\bar{G}\) we use to model constituency parsing and we show that the corresponding M-monoid parsing problem
is equivalent to the constituency parsing problem. Finally, in Section 6, we discuss the applicability of the M-monoid parsing algorithm.
## 2 Preliminaries
Mathematical notions.The set of natural numbers (including \(0\)) is denoted by \(\mathbb{N}\) and we let \(\mathbb{N}_{+}=\mathbb{N}\setminus\{0\}\). For every \(k,\ell\in\mathbb{N}\), we let \([k,\ell]\) denote the interval \(\{i\in\mathbb{N}\mid k\leq i\leq\ell\}\) and we abbreviate \([1,\ell]\) by \([\ell]\). The set of all nonempty intervals of \(\mathbb{N}_{+}\) is denoted by \(\mathbb{I}\). For every \(I,I^{\prime}\in\mathbb{I}\), the expression \(I<I^{\prime}\) holds if \(\max I<\min I^{\prime}\) and \(I\curvearrowright I^{\prime}\) holds if \(\max I+1=\min I^{\prime}\). Thus \(I\curvearrowright I^{\prime}\) implies \(I<I^{\prime}\). For each set \(A\), we let \(\mathcal{P}(A)\) denote the power set of \(A\). We extend a mapping \(f\colon A\to B\) in the canonical way to a mapping \(f\colon\mathcal{P}(A)\to\mathcal{P}(B)\). A _family_\((a_{i}\mid i\in I)\) is a mapping \(f\colon I\to A\) with \(f(i)=a_{i}\) for each \(i\in I\). Let \(A\), \(B\), and \(C\) be sets. The composition of two mappings \(f\colon A\to B\) and \(g\colon B\to C\) is denoted by \(g\circ f\). Whenever we deal with a partitioning \((A_{1},\ldots,A_{n})\) of a set \(A\), we require \(A_{i}\) to be non-empty (for each \(i\in[n]\)). An _alphabet_ is a finite and non-empty set.
Strings and tuples.Let \(A\) be a set and \(k\in\mathbb{N}\). We let \(A^{k}\) denote the set of all strings \(w=a_{1}\cdots a_{k}\) of length \(k\), where \(a_{1},\ldots,a_{k}\in A\), and we let \(A^{*}=\bigcup_{k\in\mathbb{N}}A^{k}\). The empty string (\(k=0\)) is denoted by \(\varepsilon\). We denote substrings of a string \(w=a_{1}\cdots a_{k}\) in \(A^{*}\) as follows: for every \(i\in[k]\) and \(j\in[k-i+1]\), we let \(w[i;j]=a_{i}\cdots a_{i+j-1}\), and \(w[i]\) abbreviates \(w[i;1]\). Let \(\ell\in\mathbb{N}\). The _concatenation_ of two strings \(v=a_{1}\cdots a_{k}\) in \(A^{k}\) and \(w=b_{1}\cdots b_{\ell}\) in \(A^{\ell}\), denoted by \(v\cdot w\), is the string \(a_{1}\cdots a_{k}b_{1}\cdots b_{\ell}\) in \(A^{k+\ell}\); we drop \(\cdot\) if it is clear from the context. Moreover, we lift concatenation to sets of strings in the obvious way.
We let \(\operatorname{Tup}_{k}(A)\) denote the \(k\)-fold Cartesian product of \(A\); its elements are called \(k\)_-tuples over \(A\)_. Moreover, we let \(\operatorname{Tup}(A)=\bigcup_{k\in\mathbb{N}}\operatorname{Tup}_{k}(A)\). In the obvious way, we transfer the notion of substrings from strings to tuples.
Sorted sets, trees, and regular tree grammars.Let \(S\) be a set; its elements are usually called _sorts_. An _\(S\)-sorted set_ is a pair \((A,\operatorname{sort})\) where \(A\) is a set and sort\(\colon A\to S\) is a mapping. For each \(s\in S\), we let \(A^{(s)}=\{a\in A\mid\operatorname{sort}(a)=s\}\). We call an \(S\)-sorted set _single-sorted_ if \(|S|=1\); thus, each (usual) set can be viewed as a single-sorted set. A _ranked set_ is an \(\mathbb{N}\)-sorted set; its sort mapping is usually denoted by \(\operatorname{rk}\). In examples, we will show the rank of a symbol as a superscript in parentheses, e.g., \(a^{(k)}\) if \(\operatorname{rk}(a)=k\). An \(S\)-sorted (resp. ranked) alphabet is an \(S\)-sorted (resp. ranked) set which is an alphabet.
An \((S^{*}\times S)\)-sorted set \(\Gamma\) is called _\(S\)-signature_. Whenever we write \(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\) we assume that \(k\in\mathbb{N}\) and \(s,s_{1},\ldots,s_{k}\in S\) are universally quantified if not specified otherwise. Now let \(H\) be an \(S\)-sorted set. The set of _\(S\)-sorted trees over \(\Gamma\) and \(H\)_, denoted by \(\operatorname{T}_{\Gamma}(H)\), is the smallest \(S\)-sorted set \(T\) such that, for each \(s\in S\), we have \(H^{(s)}\subseteq T^{(s)}\) and, for every \(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\) and \(t_{1}\in T^{(s_{1})},\ldots,t_{k}\in T^{(s_{k})}\), we have \(\gamma(t_{1},\ldots,t_{k})\in T^{(s)}\). We abbreviate \(\operatorname{T}_{\Gamma}(\emptyset)\) by \(\operatorname{T}_{\Gamma}\). Since we can view each \((S^{*}\times S)\)-sorted set as a ranked set by, for every \(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\), letting \(\operatorname{rk}(\gamma)=k\), the above definition also covers the usual trees over ranked alphabets.
The _set of positions_ of a tree is defined by the mapping \(\operatorname{pos}\colon\operatorname{T}_{\Gamma}(H)\to\mathcal{P}((\mathbb{N }_{+})^{*})\) as usual. Let \(t\in\operatorname{T}_{\Gamma}(H)\) and \(w\in\operatorname{pos}(t)\). The set of _leaves_ of \(t\), the _label of \(t\) at \(w\)_, and the _subtree of \(t\) at \(w\)_ are also defined as usual, and are denoted by leaves\((t)\), \(t(w)\), and \(t|_{w}\), respectively.
An _\(S\)-sorted regular tree grammar_ (RTG; [2]) is a tuple \(\mathcal{G}=(N,\Gamma,A_{0},R)\) where \(N\) is an \(S\)-sorted alphabet (_nonterminals_), \(\Gamma\) is an \((S^{*}\times S)\)-sorted alphabet (_terminals_) with \(N\cap\Gamma=\emptyset\), \(A_{0}\in N\) (_initial nonterminal_), and \(R\) is a finite set of _rules_ where each rule \(r\) has the form \(A\to\gamma(A_{1},\ldots,A_{k})\) with \(k\in\mathbb{N}\)
\(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\), and \(A\in N^{(s)},A_{1}\in N^{(s_{1})},\ldots,A_{k}\in N^{(s_{k})}\). (Thus, we only consider RTGs in normal form.) We call \(A\) the _left-hand side_ of \(r\); it is denoted by \(\operatorname{lhs}(r)\).
We view \(R\) as an \((N^{*}\times N)\)-sorted set where each rule \(A\to\gamma(A_{1},\ldots,A_{k})\) has sort \((A_{1}\cdots A_{k},A)\). Thus, for every \(d\in\operatorname{T}_{R}\) and \(w\in\operatorname{pos}(d)\), the following holds: if \(d(w)\) is \(A\to\gamma(A_{1},\ldots,A_{k})\), then, for each \(i\in[k]\), we have \(\operatorname{lhs}(d(w\cdot i))=A_{i}\). We call \(\operatorname{T}_{R}\) the set of _abstract syntax trees_ (short: ASTs) of \(\mathcal{G}\). We define the mapping \((\cdot)_{\Gamma}:\operatorname{T}_{R}\to\operatorname{T}_{\Gamma}\) such that \((d)_{\Gamma}\) is obtained from \(d\) by replacing each \(A\to\gamma(A_{1},\ldots,A_{k})\) by \(\gamma\). The _tree language generated by \(\mathcal{G}\)_ is the set \(\operatorname{L}(\mathcal{G})=(\operatorname{T}_{R})_{\Gamma}\).
\(S\)-sorted \(\Gamma\)-algebras.Let \(S\) be a set and \(\Gamma\) be an \(S\)-signature. An \(S\)-_sorted \(\Gamma\)-algebra_ (short: algebra) is a pair \((\mathcal{A},\phi)\) where \(\mathcal{A}\) is an \(S\)-sorted set (_carrier set_) and \(\phi\) is a mapping which maps each \(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\) to a mapping \(\phi(\gamma)\colon\mathcal{A}^{(s_{1})}\times\cdots\times\mathcal{A}^{(s_{k })}\to\mathcal{A}^{(s)}\). We will sometimes identify \(\phi(\gamma)\) and \(\gamma\) (as it is usual).
The \(S\)-_sorted \(\Gamma\)-term algebra_ is the \(S\)-sorted \(\Gamma\)-algebra \((\operatorname{T}_{\Gamma},\phi_{\Gamma})\) where, for every \(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\) and \(t_{1}\in\operatorname{T}_{\Gamma}^{(s_{1})},\ldots,t_{k}\in\operatorname{T}_{ \Gamma}^{(s_{k})}\), we let \(\phi_{\Gamma}(\gamma)(t_{1},\ldots,t_{k})=\gamma(t_{1},\ldots,t_{k})\). For each \(\Gamma\)-algebra \((\mathcal{A},\phi)\) there is a unique homomorphism, denoted by \((\cdot)_{\mathcal{A}}\), from the \(\Gamma\)-term algebra to \((\mathcal{A},\phi)\)[17]. We write its application to an argument \(t\in\operatorname{T}_{\Gamma}\) as \((t)_{\mathcal{A}}\). Intuitively, \((\cdot)_{\mathcal{A}}\) evaluates a tree \(t\) in \((\mathcal{A},\phi)\), in the same way as arithmetic expressions are evaluated to numbers. For instance, the expression \(3+2\cdot(4+5)\) is evaluated to \(21\) in the \(\{+,\cdot\}\)-algebra \((\mathbb{N},+,\cdot)\). Often we abbreviate an algebra \((\mathcal{A},\phi)\) by \(\mathcal{A}\). For every \(a\in\mathcal{A}\) we let \(\operatorname{factors}(a)=\{b\in\mathcal{A}\mid b<_{\operatorname{factor}}{}^{*}a\}\) where, for every \(a,b\in\mathcal{A}\), \(b<_{\operatorname{factor}}a\) if there is a \(\gamma\in\Gamma\) such that \(b\) occurs in some tuple \((b_{1},\ldots,b_{k})\) with \(\phi(\gamma)(b_{1},\ldots,b_{k})=a\). That is, \(\operatorname{factors}(a)\) is the set of all values that occur in a term which evaluates to \(a\). We call \((\mathcal{A},\phi)\)_finitely decomposable_ if \(\operatorname{factors}(a)\) is finite for every \(a\in\mathcal{A}\).
Word tuples.Let \(k\in\mathbb{N}\) and \(\kappa=(\ell_{1},\ldots,\ell_{k})\) in \(\operatorname{Tup}_{k}(\mathbb{N}_{+})\). We let \(\mathbb{X}_{\kappa}=\{x_{i}^{j}\mid i\in[k],j\in[\ell_{i}]\}\) and call each element \(x_{i}^{j}\) of \(\mathbb{X}_{\kappa}\) a _variable_. Moreover, let \(n\in\mathbb{N}_{+}\) and \(\Delta\) be an alphabet. Then we denote by \(\mathbb{W}_{\kappa}^{n}(\Delta)\) the set of all tuples \(e=(s_{1},\ldots,s_{n})\) such that (1) for each \(i\in[n]\), the component \(s_{i}\) is a string over \(\Delta\) and \(\mathbb{X}_{\kappa}\), (2) each variable in \(\mathbb{X}_{\kappa}\) occurs exactly once in \(e\), and (3) for all \(x_{i}^{j_{1}},x_{i}^{j_{2}}\in\mathbb{X}_{\kappa}\) with \(j_{1}<j_{2}\), the variable \(x_{i}^{j_{1}}\) occurs left of \(x_{i}^{j_{2}}\) in \(e\). Each element of \(\mathbb{W}_{\kappa}^{n}(\Delta)\) is a _monotone \((n,\kappa)\)-word tuple_.1 We let \(\mathbb{W}(\Delta)=\bigcup_{n\in\mathbb{N}_{+},k\in\mathbb{N},\kappa\in \operatorname{Tup}_{k}(\mathbb{N}_{+})}\mathbb{W}_{\kappa}^{n}(\Delta)\) and we drop '\((\emptyset)\)' for empty \(\Delta\).
Footnote 1: Monotonicity is expressed by condition (3); in this paper, we do not deal with non-monotone word tuples.
Let \(e=(s_{1},\ldots,s_{n})\) be in \(\mathbb{W}_{\kappa}^{n}(\Delta)\). The _word function induced by \(e\)_ is the mapping
\[[\![e]\!]\colon\operatorname{Tup}_{\ell_{1}}(\Delta^{*})\times\cdots\times \operatorname{Tup}_{\ell_{k}}(\Delta^{*})\to\operatorname{Tup}_{n}(\Delta^{*})\]
which is defined, for every \((w_{1}^{1},\ldots,w_{1}^{\ell_{1}})\in\operatorname{Tup}_{\ell_{1}}(\Delta^{*})\),..., \((w_{k}^{1},\ldots,w_{k}^{\ell_{k}})\in\operatorname{Tup}_{\ell_{k}}(\Delta^{*})\), by
\[[\![e]\!]((w_{1}^{1},\ldots,w_{1}^{\ell_{1}}),\ldots,(w_{k}^{1},\ldots,w_{k}^{ \ell_{k}}))=(v_{1},\ldots,v_{n})\]
where each \(v_{m}\) (\(m\in[n]\)) is obtained from \(s_{m}\) by replacing every occurrence of a variable \(x_{i}^{j}\) by \(w_{i}^{j}\). For instance, let \(\Delta=\{a,b,c,d\}\). The word tuple \(e=(bx_{2}^{1}x_{1}^{1}ax_{1}^{2},axc_{2}^{2}x_{1}^{3}a)\) in \(\mathbb{W}_{(3,2)}^{2}(\Delta)\) induces the word function \([\![e]\!]\colon\operatorname{Tup}_{3}(\Delta^{*})\times\operatorname{Tup}_{2}( \Delta^{*})\to\operatorname{Tup}_{2}(\Delta^{*})\) with
\[[\![e]\!]((w_{1}^{1},w_{1}^{2},w_{1}^{3}),(w_{2}^{1},w_{2}^{2}))=(bw_{2}^{1}w_{1}^ {1}aw_{1}^{2},acw_{2}^{3}w_{1}^{3}a).\]
We view \(\mathbb{W}(\Delta)\) as a \((\mathbb{N}_{+}^{*}\times\mathbb{N}_{+})\)-sorted set in the obvious way (i.e., \(e\in\mathbb{W}_{\kappa}^{n}(\Delta)\) has sort \((\kappa,n)\)) and we denote the unique homomorphism from the \(\mathbb{N}_{+}\)-sorted \(\mathbb{W}(\Delta)\)-term algebra to the \(\mathbb{N}_{+}\)-sorted algebra \((\operatorname{Tup}(\Delta^{*}),[\![\cdot]\!])\) also by \([\![\cdot]\!]\). Intuitively, it evaluates trees over word tuples to elements of \(\operatorname{Tup}(\Delta^{*})\) by applying in a bottom-up way the word functions induced by their word tuples.
Monoids.A _monoid_ is an algebra \((\mathbb{K},\oplus,0)\) such that \(\oplus\) is a binary, associative operation on \(\mathbb{K}\) and \(0\oplus\mathbb{k}=\mathbb{k}=\mathbb{k}\oplus 0\) for each \(\mathbb{k}\in\mathbb{K}\). The monoid is _commutative_ if \(\oplus\) is commutative and it is _idempotent_ if \(\mathbb{k}\oplus\mathbb{k}=\mathbb{k}\). It is _complete_ if, for each countable set \(I\), there is an operation \(\sum_{I}^{\oplus}\) which maps each family \((\mathbb{k}_{i}\mid i\in I)\) to an element of \(\mathbb{K}\), coincides with \(\oplus\) when \(I\) is finite, and otherwise satisfies axioms which guarantee commutativity and associativity [5, p. 124]. We abbreviate \(\sum_{I}^{\oplus}(\mathbb{k}_{i}\mid i\in I)\) by \(\sum_{i\in I}^{\oplus}\mathbb{k}_{i}\). A complete monoid is _d-complete_[10] if, for every \(\mathbb{k}\in\mathbb{K}\) and family \((\mathbb{k}_{i}\mid i\in\mathbb{N})\) of elements of \(\mathbb{K}\), the following holds: if there is an \(n_{0}\in\mathbb{N}\) such that for every \(n\in\mathbb{N}\) with \(n\geq n_{0}\), \(\sum_{i\in\mathbb{N}:\ i\leq n}^{\oplus}\mathbb{k}_{i}=\mathbb{k}\), then \(\sum_{i\in\mathbb{N}}^{\oplus}\mathbb{k}_{i}=\mathbb{k}\). A complete monoid is _completely idempotent_ if for every \(\mathbb{k}\in\mathbb{K}\) and countable set \(I\) it holds that \(\sum_{i\in I}^{\oplus}\mathbb{k}=\mathbb{k}\). An easy proof shows that if \(\mathbb{K}\) is completely idempotent, it is also d-complete.
M-monoids.A _multioperator monoid_ (M-monoid; [12]) is an algebra \((\mathbb{K},\oplus,0,\Omega,\phi)\) where \((\mathbb{K},\oplus,0)\) is a commutative monoid (_additive monoid_), \(\Omega\) is a ranked set, and \((\mathbb{K},\phi)\) is an \(\Omega\)-algebra. An M-monoid inherits the properties of its monoid (e.g., being complete). We denote a complete M-monoid by \((\mathbb{K},\oplus,0,\Omega,\phi,\sum^{\oplus})\). An M-monoid is _distributive_ if, for every \(\omega\in\Omega^{(m)}\), \(i\in[m]\), and \(\mathbb{k},\mathbb{k}_{1},\ldots,\mathbb{k}_{m}\in\mathbb{K}\),
\[\omega(\mathbb{k}_{1},\ldots,\mathbb{k}_{i-1},\mathbb{k}_{i}\oplus\mathbb{k}, \mathbb{k}_{i+1},\ldots,\mathbb{k}_{m})=\omega(\mathbb{k}_{1},\ldots,\mathbb{ k}_{i-1},\mathbb{k}_{i},\mathbb{k}_{i+1},\ldots,\mathbb{k}_{m})\oplus\omega( \mathbb{k}_{1},\ldots,\mathbb{k}_{i-1},\mathbb{k},\mathbb{k}_{i+1},\ldots, \mathbb{k}_{m}).\]
If \(\mathbb{K}\) is complete, then we only call it distributive if the above equation also holds for each countable set of summands. We sometimes refer to an M-monoid only by its carrier set.
**Example 1**.: Let \(S\) be a set, \(\Omega\) be an \(S\)-signature, and \((\mathcal{A},\phi)\) be an \(S\)-sorted set. We will now define an M-monoid which lifts the computations from \((\mathcal{A},\phi)\) to sets of elements of \(\mathcal{A}\). Its carrier set will be \(B=\bigcup_{s\in S}\mathcal{P}(\mathcal{A}^{(s)})\cup\{\bot\}\) where \(\bot\) is a new element. Thus, \(B\) contains all single-sorted subsets of \(\mathcal{A}\) and an element \(\bot\) which will be used whenever an operation is applied to arguments which do not match its sort. Formally, we define the M-monoid \((B,\Im,\emptyset,\Omega,\psi)\) where, for every \(B_{1},B_{2}\in B\),
\[B_{1}\Im B_{2}=\begin{cases}B_{1}\cup B_{2}&\text{if there exists $s\in S$ such that $B_{1},B_{2}\subseteq\mathcal{A}^{(s)}$}\\ \bot&\text{otherwise}\end{cases}\]
and, for every \(\gamma\in\Gamma^{(s_{1}\cdots s_{k},s)}\) and \(B_{1},\ldots,B_{k}\in B\),
\[\psi(\gamma)(B_{1},\ldots,B_{k})=\begin{cases}\phi(\gamma)(B_{1},\ldots,B_{k}) &\text{if $B_{1}\subseteq\mathcal{A}^{(s_{1})},\ldots,B_{k}\subseteq\mathcal{A}^{(s_{k})}$ }\\ \bot&\text{otherwise.}\end{cases}\]
We consider \(\sum^{\Im}\) which is defined for every index set \(I\) as \(\bigcup_{I}\). It is easy to see that \(B\) together with \(\sum^{\Im}\) is complete and distributive. Moreover, since the monoid \((B,\Im,\emptyset,\sum^{\Im})\) is completely idempotent, we obtain that \((B,\Im,\emptyset,\Omega,\psi,\sum^{\Im})\) is d-complete. \(\triangleleft\)
## 3 Constituent tree automata
Hybrid trees and, as a special case thereof, constituent trees are certain trees over potentially indexed symbols where, intuitively, an indexed symbol is a symbol equipped with a positive number. Formally, let \(\Sigma\) be an alphabet. The _set of indexed \(\Sigma\)-symbols_, denoted by \(\Sigma\langle\mathbb{N}_{+}\rangle\), is the ranked set defined by \(\Sigma\langle\mathbb{N}_{+}\rangle^{(k)}=\{a\langle n\rangle\mid a\in\Sigma^{ (k)},n\in\mathbb{N}_{+}\}\) for each \(k\in\mathbb{N}\). An element \(a\langle n\rangle\) is called _indexed symbol_ and \(n\) is the _index of \(a\langle n\rangle\)_. We write \((a\langle n\rangle)_{\Sigma}\) for \(a\) and \((a\langle n\rangle)_{\mathbb{N}}\) for \(n\).
Here we only define constituent trees; for a general definition of hybrid trees, cf. [3]. A _constituent tree_ is a tree \(\xi\in\operatorname{T}_{\Sigma}(\Sigma\langle\mathbb{N}_{+}\rangle)\) such that, for every \(w,w^{\prime}\in\operatorname{leaves}(\xi)\), we have that \((\xi(w))_{\mathbb{N}}=(\xi(w^{\prime}))_{\mathbb{N}}\)
implies \(w=w^{\prime}\). In words, a symbol is indexed if and only if it occurs at a leaf and no index occurs twice. We let \((\xi)_{\Sigma}\) denote the tree in \(\mathrm{T}_{\Sigma}\) obtained from \(\xi\) by removing all indices. The set of all constituent trees over \(\Sigma\) is denoted by \(\mathrm{C}_{\Sigma}\).
We extract the linear phrase from a constituent tree \(\xi\) using the mapping yield \(\colon\mathrm{C}_{\Sigma}\to\mathrm{Tup}(\Sigma^{*})\) which we define as follows. We order the set of indexed symbols occurring in \(\xi\) into a sequence according to their indices, then we drop each comma between neighbored symbols with consecutive indices, and finally we drop the indices. Thus, the tuple \(\mathrm{yield}(\xi)\) has one more component than the number of gaps in the set of indices occurring in \(\xi\). For instance, consider the constituent tree \(\xi\) in Figure 1. The ordering of its set of indexed symbols is \((\mathrm{hat}\langle 1\rangle,\mathrm{schnell}\langle 2\rangle,\mathrm{ gearbeit}\langle 3\rangle)\) and all commas are dropped as there are no gaps between indices.
A _constituent tree automaton_ (short: CTA) is a tuple \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) where
* \(Q\) is a ranked alphabet with \(Q^{(0)}=\emptyset\) (_states_),
* \(\Sigma\) is a ranked alphabet,
* \(\delta\) is a finite set of _transitions_, each of which having either form \((\varepsilon,a,q)\) where \(a\in\Sigma^{(0)}\) and \(q\in Q^{(1)}\) or form \((q_{1}\cdots q_{k},a,e,q)\) where \(k\in\mathbb{N}_{+}\), \(e\in\mathbb{W}^{n}_{(\ell_{1},\ldots,\ell_{k})}\), \(q_{1}\in Q^{(\ell_{1})},\ldots,q_{k}\in Q^{(\ell_{k})},q\in Q^{(n)}\), and \(a\in\Sigma^{(k)}\); and
* \(q_{f}\in Q\) (_final state_).
We call \(\mathcal{A}\)_final state normalized_ if \(q_{f}\in Q^{(1)}\).
We note that this definition of CTA simplifies the definition by [3] in three regards. First, we opted to define CTA directly and not as a special case of HTA. Second, their nullary transitions contain an additional object, the universal index constraint \(\mathrm{UIC}_{0,1}\), which we have dropped for the sake of clarity. Third, to achieve coherence with RTGs, our CTA has only a single final state \(q_{f}\). This is not a restriction, since each CTA of [3] with a set of final states can be transformed into an equivalent CTA with a single final state using a standard construction from automata theory (cf., e.g., [4, L. 4.8]).
**Example 2**.: Let \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) be a CTA where the states are \(Q=\{q^{(3)},q_{i}^{(2)},q_{r}^{(2)},q_{a}^{(1)},q_{b}^{(1)},q_{c}^{(1)},q_{f }^{(1)}\}\), the terminal alphabet is \(\Sigma=\{a^{(0)},b^{(0)},c^{(0)},d^{(3)},e^{(2)}\}\), and \(\delta\) consists of the following transitions:
\[(q_{1}qq_{c},d,(x_{1}^{1}x_{2}^{1}x_{1}^{2}x_{2}^{2}x_{3}^{1}x_{2 }^{3}),q_{f}) (q_{a}qq_{r},d,(x_{1}^{1}x_{2}^{1}x_{3}^{1}x_{2}^{2}x_{3}^{2}x_{3}^{2}),q _{f})\] \[(q_{1}qq_{c},d,(x_{1}^{1}x_{2}^{1},x_{1}^{2}x_{2}^{2},x_{3}^{1}x_ {2}^{3}),q) (q_{a}qq_{r},d,(x_{1}^{1}x_{2}^{1},x_{3}^{1}x_{2}^{2},x_{3}^{2}x_{3}^{2}),q)\] \[(q_{a}q_{b}q_{c},d,(x_{1}^{1},x_{2}^{1},x_{3}^{1}),q) (q_{a}q_{b},e,(x_{1}^{1},x_{2}^{1}),q_{l}) (q_{b}q_{c},e,(x_{1}^{1},x_{2}^{1}),q_{r})\] \[(\varepsilon,a,q_{a}) (\varepsilon,b,q_{b}) (\varepsilon,c,q_{c}).\]
We note that \(\mathcal{A}\) is final state normalized. We will use \(\mathcal{A}\) to illustrate the semantics of CTA which we define next. \(\triangleleft\)
While there are two semantics of CTA in [3], we are only interested in one of them, called the _hybrid tree language inductively recognized_ by CTA. In this paper, we refer to it simply as _language inductively recognized by \(\mathcal{A}\)_ and define it in the following.
Let \(k,\ell_{1},\ldots,\ell_{k}\in\mathbb{N}_{+}\). We let \(\kappa=(\ell_{1},\ldots,\ell_{k})\). A \(\kappa\)-_assignment_ is a mapping \(\varphi\colon\mathbb{X}_{\kappa}\to\mathbb{1}\) such that, for every \(x,x^{\prime}\in\mathbb{X}_{\kappa}\) with \(x\neq x^{\prime}\), it holds that \(\varphi(x)\cap\varphi(x^{\prime})=\emptyset\). Now let \(n\in\mathbb{N}_{+}\) and \(e\in\mathbb{W}^{n}_{\kappa}\). We say that \(\varphi\)_models_\(e\), denoted by \(\varphi\models e\), if the expression \(e^{\prime}\) holds where \(e^{\prime}\) is obtained from \(e\) by (1) writing \(\curvearrowleft\) between each occurrence of two consecutive variables, (2) replacing each common by \(<\), and (3) replacing each variable \(x\) by \(\varphi(x)\). As an example, consider the word tuple \(e=(x_{1}^{1}x_{2}^{1},x_{3}^{1}x_{2}^{2},x_{3}^{2}x_{3}^{2})\) which occurs at position 2 of \(\rho\) in Figure 2. We define the \((1,3,2)\)-assignment \(\varphi\) with \(\varphi(x_{1}^{1})=\{2\}\), \(\varphi(x_{2}^{1})=\{3\}\)
\(\varphi(x_{3}^{1})=\{5\}\), \(\varphi(x_{2}^{2})=\{6\}\), \(\varphi(x_{3}^{2})=\{8\}\), \(\varphi(x_{2}^{3})=\{9\}\), where the indices are taken from the constituent tree \(\xi\) in Figure 2. We obtain the expression \(e^{\prime}=(\{2\}\curvearrowright\{3\}<\{5\}\curvearrowright\{6\}<\{8\} \curvearrowright\{9\})\) which is valid and hence \(\varphi\models e\).
Let \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) be a CTA. A _run of \(\mathcal{A}\)_ is a tree \(\rho\in\mathrm{T}_{Q\times\mathbb{W}}(Q)\) where, for \((q,e)\in Q\times\mathbb{W}\), we let \(\mathrm{rk}(q,e)=k\) if \(e\in\mathbb{W}_{(\ell_{1},\ldots,\ell_{k})}^{n}\). We let \(\mathrm{R}_{\mathcal{A}}\) denote the set of runs of \(\mathcal{A}\). We define \(\Theta_{\mathcal{A}}\subseteq\mathrm{C}_{\Sigma}\times\mathrm{R}_{\mathcal{A }}\times\mathrm{Tup}(\mathbb{I})\) to be the smallest set \(T\) that satisfies the following:
* For every \((\varepsilon,a,q)\in\delta\) and \(i\in\mathbb{N}_{+}\) it holds that \(\big{(}a\langle i\rangle,q,\{i\}\big{)}\in T\).
* For every \((q_{1}\cdots q_{k},a,e,q)\in\delta\) and \((\xi_{1},\rho_{1},J_{1}),\ldots,(\xi_{k},\rho_{k},J_{k})\in T\) where \(q_{i}\) is the state at \(\rho_{i}(\varepsilon)\) (for \(i\in[k]\)), we let \(\kappa\) denote \((\mathrm{rk}(q_{1}),\ldots,\mathrm{rk}(q_{k}))\) and consider the mapping \(\varphi\colon\mathbb{X}_{\kappa}\to\mathbb{I}\) defined, for every \(i\in[k]\) and \(j\in[\mathrm{rk}(q_{i})]\), by \(\varphi(x_{i}^{j})=J_{i}[j]\). (The fact that \(J_{i}[j]\) is indeed an interval can easily be verified by induction.) Now, if \(\varphi\) is a \(\kappa\)-assignment (i.e., its image consists of pairwise disjoint sets) and \(\varphi\models e\), then \(\big{(}a(\xi_{1},\ldots,\xi_{k}),\rho,(U_{1},\ldots,U_{\mathrm{rk}(q)})\big{)}\in T\) where we let \(\rho=(q,e)(\rho_{1},\ldots,\rho_{k})\) and, for each \(m\in[\mathrm{rk}(q)]\), \(U_{m}=\bigcup_{i,j:\,x_{i}^{j}\text{ occurs in the $m$-th component of $e$}}\varphi(x_{i}^{j})\).
We define the following projection of \(\Theta_{\mathcal{A}}\) (where CR stands for "constituent (trees and) runs"):
\[\mathrm{CR}_{\mathcal{A}}=\{(\xi,\rho)\mid(\exists J\in\mathrm{Tup}(\mathbb{ I})).(\xi,\rho,J)\in\Theta_{\mathcal{A}}\}.\]
The language _inductively recognized by \(\mathcal{A}\)_, denoted by \(\mathrm{L}_{\mathrm{ind}}(\mathcal{A})\), is the set
\[\mathrm{L}_{\mathrm{ind}}(\mathcal{A})=\{\xi\mid(\xi,\rho)\in\mathrm{CR}_{ \mathcal{A}},\rho(\epsilon)\text{ has state }q_{f}\}.\]
**Example 3**.: Recall the CTA \(\mathcal{A}\) of Example 2. The top left of Figure 2 shows a constituent tree \(\xi\) and a run \(\rho\) of \(\mathcal{A}\) such that \((\xi,\rho)\in\mathrm{CR}_{\mathcal{A}}\). In order to show that \((\xi,\rho)\in\mathrm{CR}_{\mathcal{A}}\) indeed holds, in Figure 3 (left), we illustrate the assignments used at each position of \(\xi\) in the inductive definition of \(\Theta_{\mathcal{A}}\). For this, we use arrows starting at the indices in the leaves of \(\xi\). At every non-leaf position \(w\) of \(\rho\), we show the \(\kappa\)-assignment \(\varphi\) which witnesses the existence of \(J\in\mathrm{Tup}(\mathbb{I})\) such that \((\xi|_{w},\rho|_{w},J)\in\Theta_{\mathcal{A}}\) as follows: for each variable \(x_{i}^{j}\) in the word tuple at \(\rho(w)\), it holds that \(\varphi(x_{i}^{j})\) consists of all indices whose arrows reach \(x_{i}^{j}\). In the way these arrows pass through the word tuples at subtrees of \(\rho|_{w}\), it is shown that \(\varphi\) is consistent with the assignments in the subtrees. This stresses the inductive nature of \(\mathrm{CR}_{\mathcal{A}}\).
The constituent tree \(\xi\) exemplifies the form of each constituent tree inductively recognized by \(\mathcal{A}\). The backbone is a monadic chain where each position is labeled with \(d\). The bottom of the chain has three leaf children, labeled by \(a\), \(b\), and \(c\). Each inner position of the chain has three children as well, the second of which continues the chain. Moreover, the symbols \(a\), \(b\), and \(c\) are distributed as leaves among the first and third child, where \(e\) serves as an intermediate node under the child receiving two symbols (cf. positions \(\epsilon\) and \(2\) of \(\xi\)). The indices are placed such that, for each of \(a\), \(b\), and \(c\), the indices occurring with this symbol form an interval where \(a\) has the lowest and \(c\) has the highest interval. Thus, \(\mathrm{yield}(\mathrm{L}_{\mathrm{ind}}(\mathcal{A}))=\{a^{n}b^{n}c^{n}\mid n \in\mathbb{N}_{+}\}\) which is not context-free. As there are two patterns for inner positions of the backbone, \(\mathcal{A}\) may recognize several constituent trees with the same yield (an example is given in the right of Figure 3). \(\triangleleft\)
The _constituency parsing problem_ states:
**Given:**: a final state normalized CTA \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) and \(u\in(\Sigma^{(0)})^{*}\)
**Compute:**: \(\{\xi\in\mathrm{L}_{\mathrm{ind}}(\mathcal{A})\mid\mathrm{yield}(\xi)=(u)\}\).
We note that, since \(\mathcal{A}\) is final state normalized, every \(\xi\in\mathrm{L}_{\mathrm{ind}}(\mathcal{A})\) has \(\mathrm{yield}(\xi)\in\Sigma^{*}\). Hence we did not allow string tuples consisting of more than one component in the specification of the constituency parsing problem.
Figure 3: Left: constituent tree \(\xi\) and run \(\rho\) of the CTA \(\mathcal{A}\) from Example 2 such that \((\xi,\rho)\in\mathrm{CR}_{\mathcal{A}}\) where the states and word tuples of \(\rho\) have been written next to the positions of \(\xi\). Arrows indicate the family of assignments which witnesses \((\xi,\rho)\in\mathrm{CR}_{\mathcal{A}}\) where, at each non-leaf position of \(\rho\), a variable is assigned the set of all indices whose arrows reach it. Right: another constituent tree \(\xi^{\prime}\in\mathrm{L}_{\mathrm{ind}}(\mathcal{A})\) such that \(\mathrm{yield}(\xi)=\mathrm{yield}(\xi^{\prime})\).
## 4 Weighted RTG-based language models and the M-monoid parsing problem
The M-monoid parsing problem [13, 14] builds on RTG-based language models which are inspired by the initial algebra approach [7].
An _RTG-based language model_ (RTG-LM) is a tuple \((\mathcal{G},(\mathcal{L},\phi))\) where, for some \(S\)-signature \(\Gamma\),
* \((\mathcal{L},\phi)\) is a \(\Gamma\)-algebra (_language algebra_), we call the elements of \(\mathcal{L}\)_syntactic objects_, and
* \(\mathcal{G}=(N,\Lambda,A_{0},R)\) is an \(S\)-sorted RTG with \(\Lambda\subseteq\Gamma\).
The _language generated by_\((\mathcal{G},(\mathcal{L},\phi))\) is the set
\[(\mathrm{L}(\mathcal{G}))_{\mathcal{L}}=\{(t)_{\mathcal{L}}\mid t\in\mathrm{L} (\mathcal{G})\}\subseteq\mathcal{L},\]
i.e., the set of all syntactic objects obtained by evaluating trees of \(\mathrm{L}(\mathcal{G})\) in the language algebra \(\mathcal{L}\). We note that \((\mathrm{L}(\mathcal{G}))_{\mathcal{L}}\subseteq\mathcal{L}^{\text{sort}(A_{ 0})}\), i.e., each syntactic object in the language generated by \((\mathcal{G},(\mathcal{L},\phi))\) has the sort of \(A_{0}\).
A _weighted RTG-based language model_ (wRTG-LM) is a tuple
\[\big{(}\ (\mathcal{G},(\mathcal{L},\phi)),\ (\mathbb{K},\oplus,0,\Omega, \psi,\Sigma^{\oplus}),\ \ wt\ \big{)}\]
where
* \((\mathcal{G},(\mathcal{L},\phi))\) is an RTG-LM,
* \((\mathbb{K},\oplus,0,\Omega,\psi,\Sigma^{\oplus})\) is a complete M-monoid (_weight algebra_), and
* \(wt\) maps each rule of \(\mathcal{G}\) with rank \(k\) to a \(k\)-ary operation in \(\Omega\). In the obvious way, we lift \(wt\) to the mapping \(wt^{\prime}\colon\mathrm{T}_{R}\to\mathrm{T}_{\Omega}\) and let \(wt\) also denote \(wt^{\prime}\).
The _M-monoid parsing problem_ states:
**Given:**: a wRTG-LM \(((\mathcal{G},(\mathcal{L},\phi)),(\mathbb{K},\oplus,0,\Omega,\psi,\Sigma^{ \oplus}),wt)\) with \(G=(N,\Lambda,A_{0},R)\) and \(a\in\mathcal{L}\)
**Compute:**: the value \(\text{parse}_{(\mathcal{G},\mathcal{L})}(a)\in\mathbb{K}\) where
\[\text{parse}_{(\mathcal{G},\mathcal{L})}(a)=\sum_{\begin{subarray}{c}d\in \mathrm{T}_{R}:\\ ((d)_{\Gamma})_{\mathcal{L}}=a,\text{lns}(d(\varepsilon))=A_{0}\end{subarray}} (wt(d))_{\mathbb{K}}.\]
The computation of \(\text{parse}_{(\mathcal{G},\mathcal{L})}(a)\) employs the homomorphisms of both algebras. Each AST of \(\mathcal{G}\) is mapped to an element of \(\mathcal{L}\) via the homomorphisms \((\cdot)_{\Gamma}\) and \((\cdot)_{\mathcal{L}}\) and it is mapped to an element of \(\mathbb{K}\) via the homomorphisms \(wt\) and \((\cdot)_{\mathbb{K}}\). Given a syntactic object \(a\), the M-monoid parsing problem states to first compute a collection of ASTs2 via the inverse of the homomorphisms \((\cdot)_{\Gamma}\) and \((\cdot)_{\mathcal{L}}\). These ASTs are filtered for those where the left-hand side of the rule at the root is the initial nonterminal. Then, values in \(\mathbb{K}\) are computed from the remaining ASTs via the homomorphisms \(wt\) and \((\cdot)_{\mathbb{K}}\). Finally, these values are accumulated to a single value using \(\Sigma^{\oplus}\).
## 5 Constituency parsing as an M-monoid parsing problem
In this section, we give the formal details of the definition of the constituent tree algebra, the constituent tree yield algebra, and the wRTG-LM we construct for a given CTA to model its constituency parsing problem. Moreover, we sketch the proof of the statement that the corresponding M-monoid parsing problem is equal to that constituency parsing problem. We start by defining partitioned constituent trees which are inspired by the hybrid trees of Nederhof and Vogler (2014) [15] (also cf. [6, 11]).
Let \(\Sigma\) be a ranked alphabet. A _partitioned constituent tree (over \(\Sigma\))_ is a tuple \(\xi=(t,<,(U_{1},\ldots,U_{n}))\) where \(t\in\mathrm{T}_{\Sigma}\), \(<\) is a strict total order on \(\mathrm{leaves}(t)\), \(n\in\mathbb{N}_{+}\), and \((U_{1},\ldots,U_{n})\) is a partitioning of \(\mathrm{leaves}(\xi)\) such that, for every \(i\in[n-1]\), \(w_{1}\in U_{i}\), and \(w_{2}\in U_{i+1}\), we have that \(w_{1}<w_{2}\). Intuitively, this condition on the partitioning enforces consistency with \(<\), i.e., positions further left in \((U_{1},\ldots,U_{n})\) are smaller. We say that \(\xi\)_has \(n\) segments_. The set of all partitioned constituent trees over \(\Sigma\) is denoted by \(\mathrm{pC}_{\Sigma}\).
Compared to the constituent trees of [3], partitioned constituent trees abstract from particular indices. Thus, each partitioned constituent tree represents infinitely many constituent trees. To formalize this, we define the mapping \(\mathrm{rep}\colon\mathrm{C}_{\Sigma}\to\mathrm{pC}_{\Sigma}\) as follows. Let \(\xi\in\mathrm{C}_{\Sigma}\). If \(\xi\) is of the form \(a\langle n\rangle\), we let \(\mathrm{rep}(a\langle n\rangle)=(a,\emptyset,(\{\varepsilon\}))\). Otherwise, \(\xi\) is of the form \(a(\xi_{1},\ldots,\xi_{k})\) and we let \(\mathrm{rep}(\xi)=((\xi)_{\Sigma},<,(U_{1},\ldots,U_{n}))\) where, for every \(w_{1},w_{2}\in\mathrm{leaves}(\xi)\), we let \(w_{1}<w_{2}\) if and only if \((\xi(w_{1}))_{\mathbb{N}}<(\xi(w_{2}))_{\mathbb{N}}\) and \((U_{1},\ldots,U_{n})\) is the unique partitioning of \(\mathrm{leaves}(\xi)\) such that, for each \(m\in[n]\), the set \(\{(\xi(w))_{\mathbb{N}}\mid w\in U_{m}\}\) is an interval and, for each \(m\in[n-1]\), \(\max_{w\in U_{m}}(\xi(w))_{\mathbb{N}}+1<\min_{w\in U_{m+1}}(\xi(w))_{\mathbb{N}}\). Intuitively, \(<\) orders the leaves of \(\xi\) by their indices and \((U_{1},\ldots,U_{n})\) groups the leaves such that, for each subset of the partitioning, the indices of the leaves in that subset form an interval and this interval is as large as possible.
We remark that our partitioned constituent trees differ from the hybrid trees by [15] in three regards. (1) In the first component, we only allow a tree \(\xi\) rather than a sequence of trees. (2) The total order \(<\) is defined on the set of leaves of \(\xi\) rather than the set of all positions of \(\xi\) whose labels are from a particular subset \(\Gamma\) of \(\Sigma\). We note that [6] defined constituent trees3 as a special case of hybrid trees where \(\Gamma\) makes up the leaf labels, hence that difference is only syntactical (also, this was already indicated by [15]). (3) Their hybrid trees did not feature a partitioning, so phrases with gaps cannot be modeled. Compared to the segmented totally ordered terms (tots) of [11], the total order of our partitioned constituent trees only regards the leaves rather than the entire set of positions.
Footnote 3: They refer to constituent trees as phrase structure trees.
Intuitively, the linear phrase represented by a partitioned constituent tree \((t,<,(U_{1},\ldots,U_{n}))\) can be obtained analogously to the yield of constituent trees in \(\mathrm{C}_{\Sigma}\); we merely order the symbols at the leaves according to \(<\) rather than by their index and we place comms according to \((U_{1},\ldots,U_{n})\) rather than gaps in the indices. We formalize this by defining the mapping \(\mathrm{p}\)-yield\(\colon\mathrm{pC}_{\Sigma}\to\mathrm{Tup}(\Sigma^{*})\) as follows. Let \(\xi=(t,<,(U_{1},\ldots,U_{n}))\) be in \(\mathrm{pC}_{\Sigma}\). Then
\[\mathrm{p}\mbox{-yield}(\xi)=(f_{t,<}(U_{1}),\ldots,f_{t,<}(U_{n}))\]
where the auxiliary mapping \(f_{t,<}\) is inductively defined by \(f_{t,<}(\emptyset)=\varepsilon\) and, for nonempty \(U\subseteq\mathrm{leaves}(t)\), \(f_{t,<}(U)=t(\min_{<}U)\cdot f_{t,<}(U\setminus\{\min_{<}U\})\).
It is easy prove that, for every \(\xi\in\mathrm{C}_{\Sigma}\), we have
\[\mathrm{yield}(\xi)=\mathrm{p}\mbox{-yield}(\mathrm{rep}(\xi)), \tag{1}\]
i.e., intuitively, the mapping \(\mathrm{rep}\) preserves yield.
### The constituent tree algebra and the constituent tree yield algebra
Prior to the definition of the algebras we give the formal definition of their signature \(\Gamma\). The intuition behind our choice of sorts is the observation that the elements of both algebras, partitioned constituent trees and string tuples, have a certain "arity": each partitioned constituent tree has \(n\) segments, i.e., groups of leaves, and each string tuple consists of \(n\) strings where, in both cases, \(n\in\mathbb{N}_{+}\).
We define the \(((\mathbb{N}_{+})^{*}\times\mathbb{N}_{+})\)-sorted set \(\Gamma=\Gamma^{(\varepsilon,1)}\cup\bigcup_{n,k,\ell_{1},\ldots,\ell_{k}\in \mathbb{N}_{+}}\Gamma^{(\ell_{1}\cdots\ell_{k},n)}\) where
* \(\Gamma^{(\varepsilon,1)}=\Sigma^{(0)}\) and
* for every \(n,k,\ell_{1},\ldots,\ell_{k}\in\mathbb{N}_{+}\), we let \[\Gamma^{(\ell_{1}\cdots\ell_{k},n)}=\{(a,e)\mid a\in\Sigma^{(k)},e\in \mathbb{W}^{n}_{(\ell_{1},\ldots,\ell_{k})}\}.\]
Now we can approach the definition of the constituent tree algebra as a \(\Gamma\)-algebra whose carrier set is \(\mathrm{pC}_{\Sigma}\). For this, we consider \(\mathrm{pC}_{\Sigma}\) as an \(\mathbb{N}_{+}\)-sorted set by letting, for every \(n\in\mathbb{N}_{+}\),
\[(\mathrm{pC}_{\Sigma})^{(n)}=\{\xi\in\mathrm{pC}_{\Sigma}\mid\xi\text{ has $n$ segments}\}.\]
The _constituent tree algebra_ is the \(\mathbb{N}_{+}\)-sorted \(\Gamma\)-algebra \(\mathcal{C}\mathcal{T}=(\mathrm{pC}_{\Sigma},\theta_{\Sigma})\) where
* for each \(a\in\Sigma^{(0)}\), we let \(\theta_{\Sigma}(a)=(a,\emptyset,(\{\varepsilon\}))\) and
* for every \((a,e)\in\Gamma^{(\ell_{1}\cdots\ell_{k},n)}\) and \(\xi_{1}\in(\mathrm{pC}_{\Sigma})^{(\ell_{1})},\ldots,\xi_{k}\in(\mathrm{pC}_{ \Sigma})^{(\ell_{k})}\) with \(\xi_{i}=(t_{i},<_{i},(U_{1}^{(i)},\ldots,U_{i}^{(\ell_{i})}))\) (for \(i\in[k]\)), we let \[\theta_{\Sigma}(a,e)(\xi_{1},\ldots,\xi_{k})=(t,<,(U_{1},\ldots,U_{n}))\] where \(t=a(t_{1},\ldots,t_{k})\) and, for each \(m\in[n]\), we let \(U_{m}\) be the union of all sets \(\{i\}\cdot U_{i}^{(j)}\) such that \(x_{i}^{j}\) occurs in the \(m\)-th component of \(e\). Thus, clearly, \((U_{1},\ldots,U_{n})\) is a partitioning of leaves\((t)\). Hence, for each \(w\in\text{leaves}(t)\), there exist exactly one \(i\in[k]\) and \(j\in[\ell_{i}]\) such that \(w\in\{i\}\cdot U_{i}^{(j)}\); we let \(\text{var}(w)\) denote \(x_{i}^{j}\). For the definition of \(<\), let \(w_{1},w_{2}\in\text{leaves}(t)\). If \(\text{var}(w_{1})\neq\text{var}(w_{2})\), then we let \(w_{1}<w_{2}\) if and only if \(\text{var}(w_{1})\) occurs left of \(\text{var}(w_{2})\) in \(e\). Otherwise, we let \(i\in[k]\) and \(j\in[\ell_{i}]\) such that \(\text{var}(w_{1})=x_{i}^{j}\). Then \(w_{1}<w_{2}\) if and only if \(w_{1}^{\prime}<_{i}w_{2}^{\prime}\) where \(w_{1}^{\prime},w_{2}^{\prime}\in\text{pos}(t_{i})\) such that \(w_{1}=i\cdot w_{1}^{\prime}\) and \(w_{2}=i\cdot w_{2}^{\prime}\).
We let \((\cdot)_{\mathcal{C}\mathcal{T}}\) denote the unique \(\Gamma\)-homomorphism from \(\text{T}_{\Gamma}\) to \(\mathrm{pC}_{\Sigma}\).
We note that the definition of \(\mathcal{C}\mathcal{T}\) is semantically close to the algebra of segmented tops by [11], but the operations of \(\mathcal{C}\mathcal{T}\) are defined using word tuples and the non-nullary symbols of \(\Gamma\) do not add tree positions to the total order or the partitioning since, in our case, these components only refer to the leaves. Moreover, one cannot define a \(\Gamma\)-algebra similar to \(\mathcal{C}\mathcal{T}\) but with \(\mathrm{C}_{\Sigma}\) as its carrier set. For this, one would need to fix a mapping sort\(:\mathrm{C}_{\Sigma}\to\mathbb{N}_{+}\). An appropriate choice could be assigning to each \(\xi\in\mathrm{C}_{\Sigma}\) the smallest number \(n\) such that the indices of \(\xi\) form \(n\) intervals. For instance, let \(\xi_{1}=a\langle 2\rangle\) and \(\xi_{2}=b(a\langle 1\rangle,a\langle 4\rangle)\) be constituent trees over \(\Sigma\). Then we have \(\text{sort}(\xi_{1})=1\) and \(\text{sort}(\xi_{2})=2\). In essence, this mimics the sort mapping of \(\mathrm{pC}_{\Sigma}\) but considers intervals of indices rather than the partitioning of the set of leaves. However, this approach bears the following problem. Let \(c\in\Sigma^{(2)}\) and \(e=(x_{2}^{1}x_{1}^{1},x_{2}^{2})\). We compute \(\theta_{\Sigma}(c,e)(\xi_{1},\xi_{2})=a(\xi_{1},\xi_{2})\) and have \(\text{sort}(a(\xi_{1},\xi_{2}))=2\). On the other hand, if we also consider \(\xi_{3}=b(a\langle 1\rangle,a\langle 3\rangle)\), then \(\theta_{\Sigma}(c,e)(\xi_{1},\xi_{3})\) has sort \(1\) which contradicts the sort of \((c,e)\). Moreover, this sort mapping falls short of inhibiting that constituent trees with overlapping indices are passed as arguments to \(\theta_{\Sigma}(e)\). The rich field of many-sorted algebra surely provides means to remedy these problems by choosing a more complex signature rather than \(\Gamma\). However, we believe that circumventing these problems by dealing with \(\mathrm{pC}_{\Sigma}\) is a cleaner solution.
We define the _constituent tree yield algebra_ to be the \(\mathbb{N}_{+}\)-sorted \(\Gamma\)-algebra \((\mathrm{Tup}(\Sigma^{*}),\theta_{Y})\) where
* for each \(n\in\mathbb{N}_{+}\), we let \(\operatorname{sort}(\operatorname{Tup}_{n}(\Sigma^{*}))=n\),
* for each \(a\in\Sigma^{(0)}\), we let \(\theta_{\Upsilon}(a)=(a)\), and
* for each \((a,e)\in\Gamma^{(\ell_{1}\cdots\ell_{k},n)}\), we let \(\theta_{\Upsilon}(a,e)=[\![e]\!]\).
Let \((\cdot)_{\Upsilon}\) denote the unique homomorphism from \(\operatorname{T}_{\Gamma}\) to \(\operatorname{Tup}(\Sigma^{*})\). We can also show that the mapping \(\operatorname{p-yield}\) is a \(\Gamma\)-homomorphism from \(\operatorname{pC}_{\Sigma}\) to \(\operatorname{Tup}(\Sigma^{*})\). Thus, by the laws of universal algebra (cf., e.g., [17]), we obtain that, for every \(t\in\operatorname{T}_{\Gamma}\),
\[\operatorname{p-yield}((t)_{\mathcal{GT}})=(t)_{\Upsilon}. \tag{2}\]
### The wRTG-LM for constituency parsing
Here we show, given a CTA \(\mathcal{A}\) and a string \(u\), how to construct a wRTG-LM such that the corresponding M-monoid parsing problem is equivalent to the constituency parsing problem for \(\mathcal{A}\) and \(u\). We start with the RTG and afterwards add the algebras from the previous subsection.
Let \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) be a CTA. We define the \(\mathcal{A}\)_-RTG_ to be the RTG \(\mathcal{G}=(Q,\Lambda,R,q_{f})\) where
1. \(\Lambda=\Lambda^{(e,1)}\cup\bigcup_{k\in\mathbb{N}_{+},n,\ell_{1},\ldots,\ell_ {k}\in\operatorname{\sf K}(Q)}\Lambda^{(\ell_{1}\cdots\ell_{k},n)}\) where we let \(\Lambda^{(e,1)}=\Sigma^{(0)}\) and, for each \(k\in\mathbb{N}_{+}\) and every \(n,\ell_{1},\ldots,\ell_{k}\in\operatorname{\sf rk}(Q)\), we let \(\Lambda^{(\ell_{1}\cdots\ell_{k},n)}=\Sigma^{(k)}\times\mathbb{W}^{n}_{(\ell_ {1},\ldots,\ell_{k})}\),4 Footnote 4: Thus \(\Lambda\) is a finite subset of \(\Gamma\).
2. for every \(a\in\Sigma^{(0)}\) and \(q\in Q\) it holds that \((\varepsilon,a,q)\in\delta\) if and only if \((q\to(a))\in R\), and
3. for every \(k\in\mathbb{N}_{+}\), \(a\in\Sigma^{(k)}\), \(e\in\mathbb{W}\), and \(q_{1},\ldots,q_{k},q\in Q\) it holds that \((q_{1}\ldots q_{k},a,e,q)\in\delta\) if and only if \((q\to(a,e)(q_{1},\ldots,q_{k}))\in R\).
**Example 4**.: Recall the CTA \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) from Example 2. The \(\mathcal{A}\)-RTG is \(\mathcal{G}=(Q,\Lambda,R,q_{f})\) where \(\Lambda^{(\varepsilon,1)}=\{a,b,c\}\), for every \(n,\ell_{1},\ell_{2},\ell_{3}\in[3]\), \(\Lambda^{(\ell_{1}\ell_{2},n)}=\{e\}\times\mathbb{W}^{n}_{(\ell_{1},\ell_{2})}\) and \(\Lambda^{(\ell_{1}\ell_{2}\ell_{3},n)}=\{d\}\times\mathbb{W}^{n}_{(\ell_{1}, \ell_{2},\ell_{3})}\); and \(R\) consists of the following rules:
\[q_{f}\to(d,(x_{1}^{1}x_{2}^{1}x_{1}^{2}x_{2}^{2}x_{3}^{1}x_{2}^ {3}))(q_{l},q,q_{c}) q_{f}\to(d,(x_{1}^{1}x_{2}^{1}x_{3}^{1}x_{2}^{2}x_{3}^{2}x_{3}^{ 3}))(q_{a},q_{r})\] \[q\to(d,(x_{1}^{1}x_{2}^{1}x_{2}^{2}x_{3}^{2}x_{3}^{1}x_{2}^{3}))( q_{l},q,q_{c}) q\to(d,(x_{1}^{1}x_{2}^{1}x_{2}^{1}x_{2}^{2},x_{3}^{2}x_{3}^{2}))(q_{a},q_{r})\] \[q\to(d,(x_{1}^{1},x_{2}^{1},x_{3}^{1}))(q_{a},q_{b},q_{c}) q_{l}\to(e,(x_{1}^{1},x_{2}^{1}))(q_{a},q_{b}) q_{r}\to(e,(x_{1}^{1},x_{2}^{1}))(q_{b},q_{c})\] \[q_{a}\to(a) q_{b}\to(b) q_{c}\to(c).\]
The bottom right of Figure 2 shows an AST of \(\mathcal{G}\). \(\triangleleft\)
As the language algebra of our wRTG-LM we use the constituent tree yield algebra \((\operatorname{Tup}(\Sigma^{*}),\theta_{\Upsilon})\). For the weight algebra we point out that each of its operations computes a single partitioned constituent tree. However, our goal as determined by the constituency parsing problem is to compute a _set_ of constituent trees. Hence, we lift the constituent tree algebra to sets. Formally, we define the M-monoid
\[\mathbb{C}=(\bigcup_{n\in\mathbb{N}_{+}}\mathcal{P}(\operatorname{pC}_{\Sigma }^{(n)})\cup\{\bot\},\mathbb{O},\Omega,\Gamma,\theta_{\Sigma}^{\prime},\Sigma^ {(\mathbb{O})})\]
where \(\mathbb{O}\) and \(\theta_{\Sigma}^{\prime}\) are defined like their counterparts in Example 1. In the following, we will write \(\theta_{\Sigma}\) rather than \(\theta_{\Sigma}^{\prime}\) and we let \((\cdot)_{\mathcal{GT}}\) denote also the unique \(\Gamma\)-homomorphism from \(\operatorname{T}_{\Gamma}\) to this algebra.
Combining these components, we define the \(\mathcal{A}\)_-wRTG-LM_ to be the wRTG-LM
\[\tilde{G}=((\mathcal{G},(\operatorname{Tup}(\Sigma^{*}),\theta_{\Upsilon})), \mathbb{C},wt)\]
where \(\mathcal{G}\) is the \(\mathcal{A}\)-RTG and, for every \(r=(A\to\gamma(A_{1},\ldots,A_{k}))\) in \(R\), we let \(wt(r)=\gamma\).
### Constituency parsing is an instance of the M-monoid parsing problem
Let \(\mathcal{A}=(Q,\Sigma,\delta,q_{f})\) be a CTA and let \(\bar{G}=((\mathcal{G},(\text{Tup}(\Sigma^{*}),\theta_{\text{Y}})),\mathbb{C}, wt)\) with \(\mathcal{G}=(Q,\Lambda,R,q_{f})\) be the \(\mathcal{A}\)-wRTG-LM. The M-monoid parsing problem for \(\bar{G}\) is, given some \((u)\in\text{Tup}(\Sigma^{*})\), to compute
\[\text{parse}_{(\mathcal{G},\mathbb{C}_{\Sigma})}(u)=\bigcup_{ \begin{subarray}{c}d\in\mathbb{T}_{R}:\\ (d)\Gamma\text{Y}=(u),\,\text{lhs}(d(\varepsilon))=q_{f}\end{subarray}}(wt(d) )_{\mathcal{G}\mathcal{T}}.\]
For a given phrase \(u\), this instance of the M-monoid parsing problem enumerates the set of all ASTs of \(\mathcal{G}\) that have the initial nonterminal at the root and evaluate to \(u\) in the constituent tree yield algebra. Each of these ASTs is evaluated in the constituent tree algebra.
In order to show that this M-monoid parsing problem is equal to the constituency parsing problem for \(\mathcal{A}\) (and \(u\)), we seek a bijection \(\psi\) between the set \(\text{CR}_{\mathcal{A}}\) and the set of abstract syntax trees of \(\mathcal{G}\). However, similar to [3], we only find such a bijection if we consider certain elements of \(\text{CR}_{\mathcal{A}}\) equivalent.
Formally, we define the equivalence relation \(\sim\) as follows. For every \((\xi_{1},\rho_{1}),(\xi_{2},\rho_{2})\in\text{CR}_{\mathcal{A}}\), we let \((\xi_{1},\rho_{1})\sim(\xi_{2},\rho_{2})\) if and only if \((\xi_{1})_{\Sigma}=(\xi_{2})_{\Sigma}\) and \(\rho_{1}=\rho_{2}\). Clearly, \(\sim\) is indeed an equivalence relation. Let \(\text{CR}_{\mathcal{A}}/_{\sim}\) denote the quotient set of \(\text{CR}_{\mathcal{A}}\) by \(\sim\). For each \((\xi,\rho)\in\text{CR}_{\mathcal{A}}\), we let \([\xi,\rho]\) denote the equivalence class \((\xi,\rho)\) belongs to. An example for \(\sim\) is given in the top of Figure 2.
We define the mapping \(\psi\colon\text{CR}_{\mathcal{A}}/_{\sim}\to\text{T}_{R}\) inductively as follows. Let \((\xi,\rho)\in\text{CR}_{\mathcal{A}}\). If \((\xi,\rho)\) is of the form \((a\langle n\rangle,q)\), we let \(\psi([a\langle n\rangle,q])=q\to(a)\). Otherwise, \(\xi\) is of the form \(a(\xi_{1},\ldots,\xi_{k})\) and \(\rho\) is of the form \((q,e)(\rho_{1},\ldots,\rho_{k})\), then we let
\[\psi([\xi,\rho])=q\!\to\!(a,e)(\psi([\xi_{1},\rho_{1}]),\ldots,\psi([\xi_{k}, \rho_{k}])).\]
We illustrate \(\psi\) for the CTA \(\mathcal{A}\) from Example 2 and the \(\mathcal{A}\)-RTG \(\mathcal{G}\) from Example 4 in Figure 2.
Using a method similar to [3] we can show that \(\psi\) is indeed a bijection. Moreover, we can prove the following auxiliary statement. Let \((\xi,\rho)\in\text{CR}_{\mathcal{A}}\). If \(((\psi([\xi,\rho]))_{\Gamma})_{\mathcal{G}\mathcal{T}}=(t_{1},<_{1},(U_{1}^{( 1)},\ldots,U_{1}^{(\ell_{1})}))\) and \(\text{rep}(\xi)=(t_{2},<_{2},(U_{2}^{(1)},\ldots,U_{2}^{(\ell_{2})}))\), then
\[t_{1}=t_{2}\quad\text{and}\quad<_{1}=<_{2}. \tag{3}\]
Intuitively, \(((\psi([\xi,\rho]))_{\Gamma})_{\mathcal{C}\mathcal{T}}\) and \(\text{rep}(\xi)\) may only differ in the partitioning. For instance, consider the constituent tree \(\xi\) and the run \(\rho\) in Figure 2 where we even have \(((\psi([\xi,\rho]))_{\Gamma})_{\mathcal{C}\mathcal{T}}=\text{rep}(\xi)\).
We note that since \(\mathcal{A}\) is final state normalized, \(\psi\) implies that each AST \(d\) of \(\mathcal{G}\) with \(\text{lhs}(d(\varepsilon))=q_{f}\) has \(((d)_{\Gamma})_{\text{Y}}\in\Sigma^{*}\). Thus, \(\text{parse}_{(\mathcal{G},\mathbb{C}_{\Sigma})}(u)\) is only non-empty if \(u\) is a string. This resembles the fact that the constituency parsing problem is only defined for strings. We will assume that \(u\in\Sigma^{*}\) in the following.
After these preparations, we can show that the M-monoid parsing problem for \(\bar{G}\) and \(u\) relates to the constituency parsing problem for \(\mathcal{A}\) and \(u\) in the following way:
\[\text{parse}_{(\mathcal{G},\mathbb{C}_{\Sigma})}(u) =\{(wt(d))_{\mathcal{C}\mathcal{T}}\mid d\in\text{T}_{R},((d)_{ \Gamma})_{\text{Y}}=u,\text{lhs}(d(\varepsilon))=q_{f}\}\] \[\stackrel{{(\ref{eq:1})}}{{=}}\{(wt(d))_{\mathcal{C} \mathcal{T}}\mid d\in\text{T}_{R},\text{p-yield}(((d)_{\Gamma})_{\mathcal{C} \mathcal{T}})=u,\text{lhs}(d(\varepsilon))=q_{f}\}\] \[\stackrel{{\text{bij.}}}{{=}}\{(wt(\psi([\xi,\rho])))_{ \mathcal{C}\mathcal{T}}\mid(\xi,\rho)\in\text{CR}_{\mathcal{A}},\text{p-yield}((( \psi([\xi,\rho]))_{\Gamma})_{\mathcal{C}\mathcal{T}})=u,\] \[\text{lhs}(\psi([\xi,\rho])(\varepsilon))=q_{f}\}\] \[\stackrel{{\text{s}_{1}}}{{=}}\{(wt(\psi([\xi,\rho])))_{ \mathcal{C}\mathcal{T}}\mid(\xi,\rho)\in\text{CR}_{\mathcal{A}},\text{p-yield}((( \psi([\xi,\rho]))_{\Gamma})_{\mathcal{C}\mathcal{T}})=u,\] \[q_{f}\text{ is the state at }\rho(\varepsilon)\}\]
\[\stackrel{{\cong}}{{=}}\{\text{rep}(\xi)\mid(\xi,\rho) \in\text{CR}_{\mathcal{A}},\text{p-yield}(\text{rep}(\xi))=u,q_{f}\text{ is the state at }\rho(\epsilon)\}\] \[\stackrel{{(1)}}{{=}}\{\text{rep}(\xi)\mid(\xi,\rho) \in\text{CR}_{\mathcal{A}},\text{yield}(\xi)=u,q_{f}\text{ is the state at }\rho(\epsilon)\}\] \[=\{\text{rep}(\xi)\mid\xi\in\text{Lind}(\mathcal{A}),\text{yield }(\xi)=u\}\]
where \(\star_{1}\) holds by definition of \(\psi\) and \(\star_{2}\) follows from (3) (using \(wt=(\cdot)_{\Gamma}\)) and both \(\text{rep}(\xi)\) and \((wt(\psi([\xi,\rho])))_{\mathcal{C}\mathcal{T}}\) being in \(\text{pC}_{\Sigma}^{(1)}\) (which is a consequence of \(\mathcal{A}\) being final state normalized). We illustrate this equality by showing how the mapping rep commutes with \((\cdot)_{\mathcal{C}\mathcal{T}}\circ(\cdot)_{\Gamma}\circ\psi\) in Figure 2.
We note that \(\text{parse}_{(\mathcal{G},\mathcal{C}_{\Sigma})}(u)\) is a subset of \(\text{pC}_{\Sigma}\) (i.e., constituent trees without particular indices) whereas the constituency parsing problem computes a subset of \(\text{C}_{\Sigma}\). To bridge this gap, we note that the set \(T=\{\xi\in\text{C}_{\Sigma}\mid\text{rep}(\xi)\in\text{parse}_{(\mathcal{G},\text{C}_{\Sigma})}(u)\}\) can be easily constructed. We sketch this construction by letting \(\xi=(t,<,(U_{1},\ldots,U_{n}))\in\text{parse}_{(\mathcal{G},\text{C}_{\Sigma })}(u)\). Since \(\mathcal{A}\) is final state normalized, we have \(n=1\). Now we let \(m\in\mathbb{N}_{+}\) and fix an interval \([m,m+|U_{1}|]\), then we obtain \(\xi^{\prime}\in\text{C}_{\Sigma}\) from \(t\) by adding the indices \(m,m+1,\ldots,m+|U_{1}|\) to the symbols at the leaves of \(t\) in the order determined by \(<\). Clearly, \(\text{rep}(\xi^{\prime})=\xi\). By letting \(m\) range over \(\mathbb{N}_{+}\) we obtain the set \(\{\xi^{\prime}\in\text{C}_{\Sigma}\mid\text{rep}(\xi^{\prime})=\xi\}\). Clearly, for every \(\xi\in T\) we have \(\text{yield}(\xi)=u\) and \(\xi\in\text{Lind}(\mathcal{A})\). Thus \(T\) is the solution of the constituency parsing problem of \(\mathcal{A}\) and \(u\).
## 6 Applicability of the M-monoid parsing algorithm
The M-monoid parsing algorithm [14] is a two-phase pipeline which is applicable to a large class of M-monoid parsing problems, where applicability means that the algorithm is terminating and correct. Due to space restrictions, we cannot repeat the algorithm here and only investigate its applicability to our scenario. For this, we let \(\mathcal{W}(\text{CTA})\) be the set of all \(\mathcal{A}\)-wRTG-LMs for each final state normalized CTA \(\mathcal{A}\). We let \(\bar{G}\in\mathcal{W}(\text{CTA})\) and \(u\in\Sigma^{*}\).
The first phase of the M-monoid parsing algorithm applies a weighted deduction system to \(\bar{G}\) and \(u\), thus obtaining a new wRTG-LM \(\bar{G}^{\prime}\). Morbitz and Vogler (2021) [14] provide the canonical weighted deduction system which is applicable in all situations where the language algebra of the input wRTG-LM is finitely decomposable. Since this is clearly the case for \((\text{Tup}(\Sigma^{*}),\theta_{\text{Y}})\), we obtain that the first phase of the M-monoid parsing algorithm is applicable to every \(\bar{G}\in\mathcal{W}(\text{CTA})\) and \(u\in\Sigma^{*}\).
The second phase, called value computation algorithm, uses \(\bar{G}^{\prime}\) to compute an element in the weight algebra. There are two independent sufficient conditions for this value to be equal to \(\text{parse}_{(\mathcal{G},\text{C}_{\Sigma})}(u)\). The first condition requires \(\bar{G}\) to fulfil a property called closed. Without giving details on this property, we state that not every wRTG-LM in \(\mathcal{W}(\text{CTA})\) is closed.5 The second condition requires \(\bar{G}\) to fulfil a property called nonlooping and the weight algebra to be distributive and d-complete. Now distributivity of \(\mathbb{C}\) is easy to see and d-completeness of \(\mathbb{C}\) follows from the fact that its additive monoid is completely idempotent. In essence, \(\bar{G}\) is nonlooping if for each AST \(d\) of its RTG the following holds: if there is a proper subtree \(d|_{w}\) of \(d\) which evaluates to the same syntactic object as \(d\) in the language algebra, then \(d(w)\) must have a different label than \(d(\epsilon)\). As our language algebra is \((\text{Tup}(\Sigma^{*}),\theta_{\text{Y}})\), this property can only be violated if each node in \(d\) from \(\epsilon\) to \(w\) is monadic. Then, by pumping the monadic chain from \(\epsilon\) to \(w\), we can construct infinitely many ASTs with the same yield, each of which is evaluated to a different constituent tree in the weight algebra. However, a terminating algorithm cannot compute an infinite set of constituent trees. By the construction of \(\bar{G}\), we find that this situation is only possible if the CTA \(\mathcal{A}\) contains transitions of the form \((q_{1},a_{1},e_{1},q_{2})\), \((q_{2},a_{2},e_{2},q_{3})\),..., \((q_{n},a_{n},e_{n},q_{1})\). Thus, if \(\mathcal{A}\) is free of such monadic cycles, \(\bar{G}\) is nonlooping and the M-monoid parsing algorithm is correct for \(\bar{G}\) and \(u\).
## 7 Future work
Dependency is another important syntactical analysis in NLP. Dependency trees are also introduced by [3] where dependency tree automata are mentioned as another possible special case of HTA, mirroring CTA. We believe that the corresponding dependency parsing problem can be shown to be an instance of M-monoid parsing in a way very similar to the present paper.
The constituency parsing problem considered here states to compute the set of all suitable constituent trees. However, parsing problems often occur in weighted settings where the weights are, e.g., probabilities, and compute only the best analysis. A constituency parsing problem with such additional weights also falls in the scope of the M-monoid parsing problem. Moreover, the underlying CTA could even have transitions that allow monadic cycles as long as they lead to a decrease in weight.
|
2308.00057 | Grid-Based Atmospheric Retrievals for Reflected-Light Spectra of
Exoplanets using PSGnest | Techniques to retrieve the atmospheric properties of exoplanets via direct
observation of their reflected light have often been limited in scope due to
computational constraints imposed by the forward-model calculations. We have
developed a new set of techniques which significantly decreases the time
required to perform a retrieval while maintaining accurate results. We
constructed a grid of 1.4 million pre-computed geometric albedo spectra valued
at discrete sets of parameter points. Spectra from this grid are used to
produce models for a fast and efficient nested sampling routine called PSGnest.
Beyond the upfront time to construct a spectral grid, the amount of time to
complete a full retrieval using PSGnest is on the order of seconds to minutes
using a personal computer. An extensive evaluation of the error induced from
interpolating intermediate spectra from the grid indicates that this bias is
insignificant compared to other retrieval error sources, with an average
coefficient of determination between interpolated and true spectra of 0.998. We
apply these new retrieval techniques to help constrain the optimal bandpass
centers for retrieving various atmospheric and bulk parameters from a
LuvEx-type mission observing several planetary archetypes. We show that
spectral observations made using a 20\% bandpass centered at 0.73 microns can
be used alongside our new techniques to make detections of $H_2O$ and $O_2$
without the need to increase observing time beyond what is necessary for a
signal-to-noise ratio of 10. The methods introduced here will enable robust
studies of the capabilities of future observatories to characterize exoplanets. | Nicholas Susemiehl, Avi M. Mandell, Geronimo L. Villanueva, Giuliano Liuzzi, Michael Moore, Tyler Baines, Michael D. Himes, Adam J. R. W. Smith | 2023-07-31T18:24:01Z | http://arxiv.org/abs/2308.00057v1 | # Grid-Based Atmospheric Retrievals for Reflected-Light Spectra of Exoplanets using PSGnest
###### Abstract
Techniques to retrieve the atmospheric properties of exoplanets via direct observation of their reflected light have often been limited in scope due to computational constraints imposed by the forward-model calculations. We have developed a new set of techniques which significantly decreases the time required to perform a retrieval while maintaining accurate results. We constructed a grid of 1.4 million pre-computed geometric albedo spectra valued at discrete sets of parameter points. Spectra from this grid are used to produce models for a fast and efficient nested sampling routine called PSGnest. Beyond the upfront time to construct a spectral grid, the amount of time to complete a full retrieval using PSGnest is on the order of seconds to minutes using a personal computer. An extensive evaluation of the error induced from interpolating intermediate spectra from the grid indicates that this bias is insignificant compared to other retrieval error sources, with an average coefficient of determination between interpolated and true spectra of 0.998. We apply these new retrieval techniques to help constrain the optimal bandpass centers for retrieving various atmospheric and bulk parameters from a LuvEx-type mission observing several planetary archetypes. We show that spectral observations made using a 20% bandpass centered at 0.73 microns can be used alongside our new techniques to make detections of \(H_{2}O\) and \(O_{2}\) without the need to increase observing time beyond what is necessary for a signal-to-noise ratio of 10. The methods introduced here will enable robust studies of the capabilities of future observatories to characterize exoplanets.
Exoplanets 0000-0002-4882-8879]Nicholas Susemiehl
0000-0002-4882-7886]Avi M. Mandell
0000-0002-4880-7073]Geronimo L. Villanueva
0000-0001-8870-7886]Giuliano Liuzzi
0000-0002-4880-0708]Michael Moore
0000-0002-4880-0880]Tyler Baines
0000-0002-1881-7888]Michael D. Himes
0000-0002-1883-0888]Adam J. R. W. Smith
## 1 Introduction
Spectral or atmospheric retrieval is one of the most direct and powerful methods available for remotely exploring the composition of the atmospheres and surfaces of extrasolar planets. The objective of these retrievals is to disentangle the spectral signatures of atmospheric and surface constituents, as well as atmospheric parameters such as temperature and pressure. Doing so enables the constraint of the planet's atmospheric composition and structure as well as its bulk planetary parameters (Madhusudhan, 2018). Spectral retrieval methodologies developed for characterizing exoplanets have been adapted from existing and highly effective tools used for Solar System studies (Rodgers, 1976). The first works to retrieve the atmospheres of exoplanets used optimal estimation schemes based on Solar System retrievals (Madhusudhan and Seager, 2009), but later works showed Bayesian frameworks to be more successful for the highly unconstrained and degenerate planetary parameters characteristic of exoplanet science (Benneke and Seager, 2012). These methods have been used to constrain temperatures and abundances of gas giant atmospheres (e.g., Line et al., 2014; Stevenson et al., 2014; Kreidberg et al., 2015; Oreshenko et al., 2017; Arcangeli et al., 2018; Lothringer et al., 2018; Brogi and Line, 2019; Wilson et al., 2020; Harrington et al., 2022; Himes and Harrington, 2022), model atmospheric winds (Seidel et al., 2020), and detect water in the atmosphere of a hot Jupiter (Wilson et al., 2020).
Due to the immediate and burgeoning pool of spectroscopic data for transiting planets, the majority of exoplanet spectral retrieval tools have been developed to tackle data for the physical conditions and radiative transfer geometry probed by transiting planet measurements. This has enabled the characterization of a variety of planets with high planet-star radius ratios relative to those probed by other detection techniques. While the transit method has been successful in detecting nearly 4000 exoplanets to date (almost 80% of all known exoplanets) 1, its limitations (combined with those of other prevalent detection techniques such as radial velocity) have created a sizable gap in the mass-period discovery space. To date, no Earth-sized or even Neptune-sized planets have been detected in the habitable zones of Sun-like stars (Checlair et al., 2021) due to the extreme contrast between the parent star and the planet (of order \(10^{-10}\) contrast Checlair et al. (2021)). However, with the advent of new instruments for suppressing light from the central star, direct imaging observations of reflected planetary light have begun to yield results. Current instruments such as the Very Large Telescope's Spectro-Polarimetric High-Contrast Exoplanet Research (Beuzit et al., 2019) and the Gemini South Telescope's Gemini Planet Imager (Macintosh et al., 2015) have been able to examine dozens of brown dwarfs and distant giant planets using direct imaging, and the Nancy Grace Roman Space Telescope (formerly WFIRST; Spergel et al., 2013) is expected to push this down to Jupiter analogs. Future missions such as the LUVOIR and HabEx concepts studied as part of the recent Astro2020 Decadal Survey (of Sciences Engineering & Medicine, 2021) will have the ultimate goal of charting a path to the detection and characterization of Earth-like planets orbiting Sun-like stars in reflected light. Furthermore, using these instruments to constrain the abundance of biosignatures such as \(O_{2}\) and \(O_{3}\) will help us explore the possibility of extraterrestrial life in the universe (Schwieterman et al., 2018). Now that the first detections of an Earth-like planet orbiting a Sun-like star could be made in the near future, it is necessary to develop and validate retrieval techniques capable of accurately quantifying the atmospheric compositions and bulk properties of these planets (Damiano & Hu, 2022).
Footnote 1: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/)
Retrieval studies are also useful for evaluating future mission yields. Such studies have been performed to assess the science return for giant gaseous exoplanets in reflected light (Lupu et al., 2016; Nayak et al., 2017), examine how the constraints yielded by retrievals of Earth-like exoplanets vary for different noise and resolving power levels (Feng et al., 2018), investigate potential yields for atmospheric water constraints (Smith et al., 2020), and validate retrieval methods using Solar System analogs (Robinson & Salvador, 2022). However, one of the major challenges for evaluating future mission yields with atmospheric retrieval studies is the computational runtime required to effectively examine a wide range of instrument and observing scenarios. Bayesian retrievals compare the observed spectrum to thousands or millions of model spectra using a likelihood function. These model spectra are often simulated using complex radiative transfer codes in real-time. A typical cloudy spectrum can take about a minute to generate using state-of-the-art radiative transfer tools (e.g., Villanueva et al., 2018). Modeling scattering processes, an important contribution for reflected-light spectra at short wavelengths, increases the computational cost further. If 10,000 models are required to perform a retrieval, real-time model generation can cause the retrieval to take a week. Other works have reported similarly long retrieval runtimes (Feng et al., 2018). In this regime, performing the variety of retrievals necessary to explore multiple instrument designs and wavelength ranges becomes untenable.
Several recent studies have investigated various means to accelerate retrievals. Machine learning methods are becoming more prevalent throughout the field and have frequently been applied to retrieval studies (e.g. Waldmann, 2016; Zingales & Waldmann, 2018; Marquez-Neila et al., 2018; Soboczenski et al., 2018; Cobb et al., 2019; Fisher et al., 2020). While these methods can reduce compute times by several orders of magnitude, this often comes at the cost of the accuracy of the resulting posterior distributions. Attempts to remedy this include Himes et al. (2022) which presents retrievals utilizing a traditional Bayesian framework but with the radiative transfer forward model replaced by a neural network. Though slower than other machine learning approaches to retrieval, their method more closely agrees with traditional retrieval methods while still reducing computational costs by orders of magnitude. Other recent works have incorporated variational inference and normalizing flow-based neural networks (Hou Yip et al., 2022; Vasist et al., 2023) which have been shown to produce comparable posterior distributions to more traditional Bayesian methods while significantly accelerating retrievals. On the other hand, an example of an accelerated retrieval tool which does not implement machine learning methods is rfast(Robinson & Salvador, 2022). The rfast framework enables the retrieval of a variety of scenarios including reflected light, thermal emission, and transmission observations. rfast takes advantage of linear algebra techniques to vectorize most computations, greatly accelerating the time needed to generate a spectrum. rfast was validated using Solar System analog data and performs as accurately as radiate transfer-based
methods. Direct comparisons between the total runtimes and forward model accuracy of different retrieval methods are difficult to make due to potentially significant differences in model parameterizations and computer hardware, but future inter-comparisons could help to explore these questions.
Another option to accelerate exoplanet atmospheric retrievals which we adopt in this work is to utilize a grid-based approach. Instead of producing spectra in real time, a grid of spectra is pre-generated at defined parameter values. Intermediate spectra are interpolated from this grid for use as the model spectra during the retrieval runtime. These calculations are performed using a linear interpolation scheme which interpolates intermediate parameter values and combines the resulting spectra proportionally. This process induces an interpolation error which is proportional to the distance the interpolated points are from the closest grid points in the multidimensional space. For this reason, it is important to strategically place a sufficient number of grid points for each parameter to minimize this interpolation error. However, each new grid point adds a significant number of spectra to the grid (the total number of spectra in the full grid is equal to the product of the number of grid points for each parameter). Therefore, it is important to carefully choose the number and placement of grid points while also not adding an excessively large number of points due to computational constraints during grid generation.
A number of studies have produced spectral model grids across a range of planetary parameters (Fortney et al., 2010; Kempton et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2019, 2020; Smith et al., 2020), and several have incorporated grid-based methodologies into retrieval studies (e.g. Waldmann, 2016; de Wit et al., 2018; Fisher and Heng, 2022). Most commonly, grids are constructed using an equal number of linearly spaced points for each parameter (e.g. Allard et al., 2001; Goyal et al., 2019; Marley et al., 2021). Constructing grids in this manner enables the incremental study of spectra as particular parameters are changed in defined steps. However, this approach may not be optimal for retrieval studies which seek to maintain the accuracy of retrievals compared with full radiative transfer calculations by minimizing the interpolation error. Placing grid points evenly ignores the nonlinear effect that changing certain parameters has on the morphology of a spectrum. This method could also under- or over-sample certain regions of the parameter space, resulting in interpolation errors that are either excessively high or grid sizes which are computationally challenging or even infeasible to generate. For these reasons, exoplanet spectral retrieval studies using a grid-based approach may benefit from a different means of choosing the placement of grid points.
In this work, we present a novel, nonlinear approach to grid construction. We developed an algorithm which iteratively adds additional grid points at the location with the highest interpolation error. This results in the maximum reduction of interpolation error at each step by optimizing the trade-off between computational complexity and interpolation accuracy. While the grid-based techniques we present in this work induce some interpolation error and require an initial computation investment, they reduce total retrieval runtimes to minutes or even seconds (using a standard high-end laptop.). This significant speed-up enables a host of new studies which involve many retrieval runs and will greatly enhance our capabilities to examine the sensitivity of model parameter inference to the expected performance capabilities of near-future observatory missions. Our methods for constructing, evaluating, and deploying this grid are the primary topics of this work.
This work is structured as follows: in Section 2, we describe our methods for parameterizing and constructing the grid. In Section 3, we present our implementation and evaluation of grid-based retrievals. Section 4 describes an application of these methods to a scientific case. We discuss caveats associated with this work in Section 5 and conclude in Section 6
## 2 Input Spectral Grids
The goal for our project was to construct a grid of model reflectance spectra at the visible wavelengths, focusing on rocky planets and using a simplified set of planetary parameters; the ranges of parameter values would be centered around an Earth-like case. This grid would allow us to examine the effectiveness of retrieving planetary parameters for the case of a potentially Earth-like planet assuming different observing scenarios.
In order to build a suitable spectral grid, a complete radiative transfer code capable of including all the surface and atmospheric components of interest was required. To this end, we employed the radiative transfer capabilities of the Planetary Spectrum Generator (PSG; Villanueva et al., 2018, 2022) to generate the grid of spectra. PSG is a state-of-the-art radiative transfer suite, incorporating a variety of different spectroscopic methods, opacity databases, and continuum and scattering processes (e.g., Rayleigh, CIAs) to synthesize spectra at different viewing geometries. At the core of the planetary radiative-transfer module of PSG, the PSGDORT module performs multiple-scattering calculations in a layer-by-layer framework. Many spectral databases are available in PSG, but for this study we have
employed the molecular parameters from the latest HITRAN-2020 database (Gordon et al., 2022) that are integrated using a correlated-k method. The molecular databases are complemented in the UV/optical with cross-sections from the Max Planck Institute of Chemistry database (Keller-Rudek et al., 2013). Besides the collision-induced absorption (CIA) bands available in the HITRAN database, the MT_CKD water continuum is characterized as H\({}_{2}\)O-H\({}_{2}\)O and H\({}_{2}\)O-N\({}_{2}\) CIAs (Kofman and Villanueva, 2021).
### Grid Parameterization
The grid of planetary reflectance spectra were computed as geometric albedo spectra (I/F) over a broad bandpass from 0.4 - 1.0 \(\mu m\); this wavelength range was chosen to encompass the defined spectral range for the visible channels of the exoplanet imaging instruments from the LUVOIR and HabEx mission studies We chose a native spectral resolution (R) of 500, which allows grid users to down-sample the spectra to any lower R. The atmospheric layering and vertical temperature and abundance profiles were generated following the methods described in Smith et al. (2020). The atmospheres of these spectra used constant volume mixing ratio (VMR) profiles of \(H_{2}O\), \(O_{3}\), and \(O_{2}\) with an \(N_{2}\) background (i.e. \(N_{2}=1-H_{2}O-O_{3}-O_{2}\)), and the temperature profiles are constant at 250 K. The model-top pressure is set to \(10^{-4}\) bars for each spectrum. Both clear and "cloudy" versions of each spectra were created. The cloudy spectra contain isotropic, wavelength independent clouds with a mass mixing ratio of 0.23509 ppm and a particle size distribution peaked at 1 \(\mu\)m and an S parameter of 1.5 for the full vertical profile, which corresponds to an optical depth of 10 at the surface. The cloud scattering model employed the Mie implementation by Bohren and Huffman, with 20 angles, 200 size bins, and a complex extinction coefficient of 1; for more details see Section 3 of Chapter 5 of the PSG handbook2. Partially cloudy spectra can then be produced by linearly combining the clear and cloudy spectra according to a given cloudiness fraction. While the cloudiness fraction is not a parameter which was optimized during grid construction, it is used as a variable in the retrievals mentioned in Section 4. In all other retrievals, the cloudiness fraction is set to a constant value of 0.5 (meaning that all spectra involved in these retrievals are partially cloudy) unless otherwise stated.
Footnote 2: [https://psg.gsfc.nasa.gov/help.php#handbook](https://psg.gsfc.nasa.gov/help.php#handbook)
The planetary radius (\(R_{p}\)) of each spectrum in the grid was set to 1 \(R_{\oplus}\). Rather than generating spectra with different planetary radii, we took advantage of the proportional relationship between reflected flux and planetary radius for a 1D atmosphere and surface model. A given geometric albedo spectrum from the grid can be converted to a planet-star contrast spectrum with any given planetary radius following Equation 1 below. Equation 1 shows the planet-star flux ratio \(\frac{F_{p}(\lambda)}{F_{s}(\lambda)}\) as a function of the geometric albedo spectrum \(A_{g}(\lambda)\), planetary radius \(R_{p}\), and planet-star separation \(r\) (1 AU throughout this work):
\[\frac{F_{p}(\lambda)}{F_{s}(\lambda)}=A_{g}(\lambda)\Phi(\alpha)\left(\frac{R_ {p}}{r}\right)^{2}, \tag{1}\]
The spectra are parameterized by their unique \(H_{2}O\), \(O_{3}\), and \(O_{2}\) abundances as well as their surface pressure (\(P_{0}\)), gravity (\(g\)), and surface albedo (\(A_{s}\)). These parameter ranges match the prior ranges explored by Feng et al. (2018, F18); the only difference in these parameter ranges is that we cap \(P_{0}\) at 10 bars because greater surface pressures would lead to a complex contribution from clouds, and we deemed this regime to be beyond the scope of the current analysis. The full ranges of these parameters are provided in Table 1. Fiducial values were chosen to be the same Earth-like values used in F18: \(H_{2}O=3\times 10^{-3},P_{0}=1,O_{3}=7\times 10^{-7},O_{2}=0.21,g=9.8,A_{s}=0.05\). Figure 1 shows the fiducial spectrum in comparison to spectra with extreme high/low values of each parameter.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Parameter Symbol & Description & Minimum & Maximum \\ \hline \(H_{2}O\) & Water Abundance & \(10^{-8}\) & \(10^{-1}\) \\ \(O_{3}\) & Ozone Abundance & \(10^{-10}\) & \(10^{-1}\) \\ \(O_{2}\) & Oxygen Abundance & \(10^{-8}\) & \(0.8\) \\ \(P_{0}\) & Surface Pressure (bars) & \(10^{-3}\) & \(10\) \\ \(g\) & Surface Gravity (\(m/s^{2}\)) & \(1\) & \(100\) \\ \(A_{s}\) & Surface Albedo & \(10^{-2}\) & \(1\) \\ \hline \end{tabular}
\end{table}
Table 1: Grid Parameterization
Figure 1: Grid spectra at extreme parameter values compared to the fiducial baseline spectrum (\(log_{10}H_{2}O=-2.52,log_{10}P_{0}=0.0,log_{10}O_{3}=-6.15,log_{10}O_{2}=-0.68, log_{10}g=0.99,A_{s}=0.05\)).
### Grid Construction
We developed an algorithm to find the optimal spacing and quantity of grid points in order to optimize the trade-off between interpolation accuracy and the computational time needed to produce the grid. For each parameter, we generated a test grid with 75 points for that parameter and 3 points (low, medium and high) for the other parameters. Multiple values of the other parameters are taken into account to capture the effects of all parameters on each spectrum, in order to give a more complete view of how interpolations will be performed in a real retrieval scenario. This leads to a total of \(75*3^{5}=18,225\) grid points for each parameter analysis. The grid point selections for each parameter are handled individually. The algorithm is initialized with a two-point grid composed of one point at each of the upper and lower extrema. This simple grid is then used to interpolate spectra across the full space and the interpolation error (w.r.t. the true spectra in the test grid) is calculated at each point. A new grid point is then added at the location of maximum error from among the 75 test grid points of the parameter of interest. This process is repeated to iteratively add points to the grid at the location of the maximum interpolation error until this maximum error falls below a threshold value.
We utilized an error metric designed to capture the difference between the true and interpolated spectra in proportion to the true spectrum, calculated as:
\[error=\frac{\tilde{d^{*}}}{max(\bar{s_{f}},\bar{s_{t}})} \tag{2}\]
where \(s_{f}\) is the fiducial spectrum and \(s_{t}\) is the true spectrum. \(\tilde{d^{*}}\) is the median of the top 10% of spectral points with the greatest squared difference \(d\), which is defined by:
\[d=(s_{i}-s_{t})^{2} \tag{3}\]
where \(s_{i}\) is the interpolated spectrum and \(s_{t}\) is the true spectrum.
We take the squared difference to treat deviations of the interpolated spectrum above and below the true spectrum the same and to emphasize larger differences over smaller ones. We only consider the 10% of spectral points with the highest error to further focus on the regions where interpolation is least accurate. The median is then taken over these differences to summarize them as one number, robust to outliers. The fiducial spectrum, which is one with Earth-like parameters as described in Feng et al. (2018) (\(H_{2}O=3\times 10^{-3},P_{0}=1,O_{3}=7\times 10^{-7},O_{2}=0.21,g=9.8,A_{s}=0.05\)), was used in the denominator of this equation because spectra which have high \(O_{3}\) abundances have continua near zero, causing the error value to increase substantially (see the \(O_{3}\) panel of Figure 1). Once a series of grid points that resulted in a maximum interpolation error metric of less than 10% was found for each parameter, the full grid could be constructed.
Figure 2 illustrates the grid construction algorithm with this error metric. As expected, the error is lowest closest to the grid point and on the grid points the error is zero. The error is not necessarily the highest directly in the middle of two grid points (see top panel), so a naive strategy of only placing new grid points in between existing ones is insufficient.
Our final full grid is composed has 13 points for \(H_{2}O\), 16 for \(P_{0}\), 24 for \(O_{3}\), 9 for \(O_{2}\), 8 for \(g\), and 4 for \(A_{s}\) totalling 1.4 million spectra. In order to efficiently generate this large number of spectra, we used a scalable cloud-based interface to PSG housed on GSFC's local cloud computing cluster, which we call GridRunner. GridRunner is able to accelerate calls to PSG's API using in-RAM filesytms and automated configuration of PSG packages. We used GridRunner with about 50 virtual machines, each of which had four workers using 2.6 Ghz Intel Sandy Bridge hardware. This architecture enabled us to generate 1.4 million spectra in about two weeks. The final structure of the grid is summarized in Table 2.
## 3 Implementation and Validation of Grid-Based Retrievals
### Choosing a Bayesian Inference Algorithm
Figure 2: Three steps in the grid construction process for \(H_{2}O\). Each step adds one grid point, with step 1 beginning with 2 grid points. The first panel shows the initial state (step 1), the second an intermediate (step 6), and the third panels shows the final grid point configuration (step 12). Each grid point is represented by a blue vertical line. Each black point is the maximum error for the particular \(H_{2}O\) value across all other off-axis parameter values. The horizontal red dashed line is the 10% error threshold.
A key consideration while building a grid-based retrieval framework was the choice of a Bayesian posterior sampling algorithm. This can greatly affect the performance of retrievals so we investigated two popular implementations of two common classes of algorithms, emcee(Foreman-Mackey et al., 2013) for Markov Chain Monte Carlo (MCMC) and Multinest (Feroz and Hobson, 2008) for Nested Sampling (NS). While these algorithms both approximate the parameter posterior distributions, the means by which they explore the prior space to infer the distribution differs. An MCMC chain starts from an (often random) initial point in the prior space and "walks" to another with a higher likelihood following a stochastic acceptance rule. This continues until a convergence criterion is reached (Gelman and Rubin, 1992; Vehtari et al., 2021) and the paths taken by the walkers are used to construct the posterior distributions. Similarly, NS is initialized by randomly placing a number of _live points_ throughout the prior space. A likelihood value is calculated for each of these and the live point with the lowest log-likelihood is discarded. A new point, unrelated to the previous, with higher log-likelihood is then sampled from the restricted prior volume. This process continues until the remaining prior volume is negligibly small, with each discarded live point composing the posterior distribution according to a given weight.
We examined MCMC and NS through two popular implementations of these methods: emcee and Multinest. These packages have been used throughout the literature in the context of exoplanet atmospheric retrievals (e.g., Lupu et al., 2016; Feng et al., 2018; Marley et al., 2018; Konrad et al., 2022). Some works have pointed out weaknesses in these two particular sampling algorithms (in particular Multinest, e.g., Buchner, 2016; Ardevol Martinez et al., 2022; Himes, 2022). This work is not intended to be an endorsement of these particular implementations over others; these two were chosen for ease of use and their extensive use by previous studies. Particular results presented here may change slightly when using other sampling algorithms, but the overall methodology presented in this paper can be used independent of the choice of Bayesian framework, and in future implementations we will conduct a more extensive analysis to determine if other sampling algorithms would produce improvements in our results.
We compared emcee and Multinest by performing a series of retrievals using both algorithms (the same as those discussed in Section 3.2 - F18's Figures 7-9). In our testing, the final results between the two algorithms were comparable, but MultiNest converged much faster and in far fewer iterations than emcee. Specifically, the 4-parameter retrieval discussed in Figure 4 took about 6 seconds to run using our implementation of Multinest (discussed below) and 476 seconds to run using emcee (while ensuring convergence according to the the method of Vehtari (2021)). For this reason, we chose to continue this work with Multinest and a NS framework as our Bayesian sampling algorithm.
The choice of whether to interpolate a parameter in linear space or log space can also significantly change the accuracy of the interpolation. To determine the optimal choice for each parameter, we interpolated each parameter individually at multiple values with both spacings and chose the configuration that led to the lowest interpolation error. This led to the choice to interpolate \(H_{2}O\), \(P_{0}\), \(O_{3}\), \(O_{2}\), and \(g\) in \(\log_{10}\) space and \(A_{s}\) in linear space.
We used a novel application of the Planetary Spectrum Generator (Villanueva et al., 2018, 2022) called PSGnest3 to perform the data analysis shown in this paper. PSGnest is a retrieval tool based on Multinest which is specifically conceived for exoplanetary observations, yet it can be adapted to any data with the proper setup. PSGnest takes advantage of memory mapping methods coded in C to greatly accelerate computations. PSGnest outputs all the relevant quantities for nested sampling, including the log evidence log(Z), the highest-likelihood output parameters, their average value resulting from the (possibly multimodal) posterior distribution and their uncertainties, which are estimated from the posterior distribution as well (Villanueva et al., 2022). Unless otherwise stated, all of the retrievals presented in this work used the PSGnest default values for certain MultiNest hyperparameters. These include 400 live points (examined further in Figure 6) and a stopping/convergence factor (dlogz) of 0.1. The sampling efficiency factor was set to 1.0, favoring posterior accuracy over evidence accuracy. Constant efficiency mode, which is known to underestimate errorbounds, was not used.
Footnote 3: [https://psg.gsfc.nasa.gov/apps/psgnest.php](https://psg.gsfc.nasa.gov/apps/psgnest.php)
Once the grid was constructed, we proceeded to use it to benchmark PSGnest. This was primarily done by performing retrievals configured in a manner similar to those described in F18. There are some differences between the radiative transfer scheme used here versus that of F18, mainly that we use isotropic clouds while F18 defines a distinct cloud layer. Figures 7, 8, and 9 of F18 show their retrieval results for retrievals of 2 (\(P_{0}\), \(A_{s}\)), 4 (\(P_{0}\), \(R_{p}\), \(g\), \(A_{s}\)), and 7 (\(H_{2}O\), \(P_{0}\), \(O_{3}\), \(O_{2}\), \(R_{p}\), \(g\), \(A_{s}\)) parameters, all parameters included in our grid (except for \(R_{p}\), see Section 2.2). We perform retrievals to reproduce these figures using spectra interpolated from our grid as the forward model to the PSGnest retrieval interface. The goal of this study is to benchmark the performance of PSGnest through comparison to ensure that it is a correctly-implemented interface to MultiNest. To this end, we set the data spectra of the retrievals to be a spectrum interpolated from the grid. Thus, interpolation error will not confound the sampler's ability to compare model spectra to the data spectrum and this study will only investigate the validity of PSGnest, independent from the interpolation error from the grid. The spectra were converted from geometric albedo to star/planet contrast following Equation 1, which enables us to include the planetary radius, \(R_{p}\), in the retrieval (see Section 2.2). In addition to this, the spectra of the grid were downgraded from R=500 to R=140 by averaging together the hi-res spectral values closest to each low-res wavelength point to match F18, and S/N=20 uncertainty was applied during the retrieval (without random scatter, see Section 5.3). We used partly cloudy spectra (computed by adding 50% of a clear spectrum to 50% of a cloudy spectrum). The results of this reproduction are shown in the Figures 3, 4, and 5.
Figure 3: Reproduction of F18’s Figure 7 using PSGnest.
Figure 4: Reproduction of F18’s Figure 8 using PSGnest.
Overall, we find that the results produced by PSGnest compare favorably to those presented in F18. The retrieved values and uncertainties are similar to those of F18. Differences are likely due to the grid-based nature of PSGnest (although interpolation error does not factor this) and the difference in sampling techniques. PSGnest is built on MultiNest (Feroz & Hobson, 2008) whereas F18 used emcee(Foreman-Mackey et al., 2013). We found that PSGnest/Multinesl performs much faster than emcee, with PSGnest taking on the order of minutes while emcee took hours if care is taken to ensure the sampler converges to a solution. Figure 6 summarizes the number of forward model evaluations used by PSGnest for different numbers of live points. These retrievals, as well as all others discussed in this work, were run on a 2020 MacBook Pro with a 2.0GHz quad-core 10th-generation Intel Core i5 processor
Figure 5: Reproduction of F18’s Figure 9 using PSGnest.
One area where noticeable differences are present in our retrievals compared to those of F18 is in the correlations between parameters, particularly in the reproduction of their Figure 9 (our Figure 5). We observe a weak anticorrelation between \(H_{2}O\) and \(P_{0}\) and also \(O_{2}\) and \(P_{0}\) whereas the respective figure in F18 shows positive correlations. We are confident in the validity of our findings due to physical expectations. High \(P_{0}\) leads to deeper spectral features for the same gas abundance, so matching a spectrum with higher \(P_{0}\) requires a lower abundance. Therefore, some degree of anticorrelation should be expected.
### Grid Evaluation
Before applying the grid towards a scientific study, we sought to characterize the error induced by interpolating spectra between grid points and its effect on retrievals across the parameter space. To this end, we generated a "test grid" composed of actual PSG-derived spectra with grid parameter values chosen to be intermediate between the grid point positions of the original grid of spectra, which we refer to as the "main" grid. The test grid spectra were generated in a manner that is otherwise identical to the main grid's spectra (see Section 2.2). This test grid allows us to explore the interpolation error at the locations where it should theoretically be the highest, thus yielding a worst-case estimation of the error.
#### 3.3.1 Interpolation Error
First, we calculated the interpolation error for each of the 700,000 test grid spectra using two metrics: the sum of squared errors (SSE) between the true and interpolated spectrum, and the coefficient of determination (\(R^{2}\)), which measures the linear correlation between each interpolated and true spectra. As a first check for the performance of the grid, we examined the mean \(R^{2}\) across the entire grid space and found this value to be 0.998, indicating very accurate
Figure 6: Number of forward model evaluations of PSGnest for 2- (F18’s Figure 7) and 7- (F18’s Figure 9) parameter retrievals by number of live points
interpolations on average. To look closer into where the interpolation accuracy is poor, we took the maximum of the interpolation SSE's for all spectra of each parameter value of the six grid parameters. This yielded the highest interpolation error across the parameter space for every distinct value of every parameter and is shown in Figure 7.
Figure 7: Interpolation error across the parameter space. The blue curves represent the maximum interpolation error at each point in the parameter space over all off-axis parameters. The non-blue curves are created by sequentially filtering out all values other than an Earth-like value for each respective parameter (other parameters are not filtered unless noted). The locked parameters are set to Earth-like values (also listed in Table 3) of 0.002 for \(H_{2}O\), 1.205 for \(P_{0}\), 7.796\(*10^{-7}\) for \(O_{3}\), 0.301 for \(O_{2}\), 10.642 for \(g\), and 0.311 for \(A_{s}\), and are represented by vertical black dashed lines. These are points on the test grid which are closest to realistic Earth-like values. The parameters are locked in descending order of impact on the maximum interpolation error SSE. Panel 1 is the maximum interpolation SSE with no parameters locked; Panel 2 adds the effect of locking \(P_{0}\); Panel 3 adds the effect of locking \(O_{3}\); Panels 4, 5, and 6 add the effects of locking \(A_{s}\), \(g\), and \(H_{2}O\) respectively (the maximum interpolation SSE with \(P_{0}\), \(O_{3}\), \(A_{s}\), \(g\), and \(H_{2}O\) locked is entirely in line with the curve with \(P_{0}\), \(O_{3}\), \(A_{s}\) and \(g\) locked).
In Figure 7 we plot the highest interpolation error values for each parameter. The blue curves in Figure 7 represent the maximum interpolation error for each parameter across all values of the off-axis parameters. We observe interpolation errors which are high relative to the rest of the respective parameter space in regions of low \(P_{0}\) (\(log_{10}P_{0}<-1.5\)), high \(O_{3}\) (\(-5<log_{10}O_{3}<-1.5\)), high \(A_{s}\) (\(A_{s}\sim 0.9\)), and moderate \(g\) (\(log_{10}g\sim 0.6\)). The overall interpolation error does not vary significantly as a function of \(H_{2}O\) and \(O_{2}\). Retrievals of observed exoplanetary spectra with planetary parameters in these regions of high interpolation error could suffer from poorer accuracy than retrievals of spectra with atmospheric parameters in a region with low interpolation error, but this is dependent on the relative impact of interpolation error versus the contribution from noise in the data and the degeneracies due to the combination of parameters retrieved.
A key point is that the accuracy of interpolating between values of a single parameter is highly dependent on the values of the off-axis parameters. Each of the non-blue curves in Figure 7 shows the interpolation error when a given off-axis parameter is locked to a single, Earth-like value on the test grid (see Table 3). Locking an off-axis parameter to a single value allows us to estimate the effect that the parameter has on the interpolation error overall by comparing these narrower interpolations to those performed across every off-axis parameter value. We chose to focus on the Earth-like parameter space for this examination because we consider this to be an important planetary regime for the purpose of this study, and it is clear from this same figure that the grid interpolation works relatively well in this regime. The non-blue curves show that the interpolation error in the Earth-like regime produces a significantly lower interpolation error than the worst portions of parameter space. For instance, we observe in panel 2 (upper right) of Figure 7 that locking \(P_{0}\) at the Earth-like value reduces the maximum interpolation SSE of \(O_{3}\) from 0.2 to 0.1. From this, we can conclude that \(P_{0}\) has a significant impact on the accuracy of interpolation between \(O_{3}\) values. This trend continues for each parameter - we see that locking \(P_{0}\) has the greatest impact on the maximum interpolation errors of each other parameter, followed by \(O_{3}\), then \(A_{s}\). Locking \(g\), \(H_{2}O\), or \(O_{2}\) has a much smaller effect on the interpolation error than locking any other parameter. Therefore, \(P_{0}\), \(O_{3}\), and \(A_{s}\) are the primary drivers of the interpolation error in this grid.
#### 3.3.2 Retrieval Performance in the Earth-like Regime
Once the interpolation error across the parameter space was characterized, our next step was to evaluate the grid on its performance when used in retrievals. We ran 6-dimensional retrievals for each distinct value of each parameter in the test grid. For each of these, the parameter of interest was set to be the given value in the test grid and the off-axis parameters were set to be Earth-like values (see Table 3). For each of these parameter sets, a "true data" spectrum was created with an S/N of 20 from the original PSG radiative transfer calculation. The retrievals were then repeated but instead of using a "true" spectrum for the data, the data spectrum was created by interpolating from the main grid. By comparing the retrieval accuracy between a "true" data spectrum and an interpolated data spectrum, we can determine the offset in the retrieved values created by the interpolation error, since the retrieval sampling algorithm
\begin{table}
\begin{tabular}{c c} \hline \hline Parameter Symbol & Value \\ \hline \(H_{2}O\) & 0.002 \\ \(O_{3}\) & 7.796\(\ast 10^{-7}\) \\ \(O_{2}\) & 0.301 \\ \(P_{0}\) & 1.205 \\ \(g\) & 10.642 \\ \(A_{s}\) & 0.311 \\ \hline \end{tabular} Note. – Earth-like values used for off-axis parameters throughout this study. These were chosen as the points on the test grid which are closest to the Earth-like values used in F18.
\end{table}
Table 3: Earth-like off-axis values
should be able to match the interpolated data spectrum perfectly but should retrieve a somewhat imperfect best-fit value for the true spectrum. No random noise was added to the data spectra in any of these retrievals because this noise would prevent the retrievals which used an interpolated data spectrum from perfectly matching the input.
Figure 8 displays the results of these retrievals. In this figure, retrievals performed using the interpolated data spectra are represented in blue and those performed using "true" data spectra are shown in orange. This figure provides two different views of the retrieval error, calculated as the absolute offset between the retrieved parameter value for a retrieval performed using a "true" data spectrum and the retrieved value when using an interpolated one. The left column shows the full extent of the offset, while the right column zooms in to regions where the two types of retrievals yielded similar offsets. By comparing the interpolated data results to the "true" data results and examining how the difference compares to the uncertainty driven by the retrieval analysis, we can examine the effect of the interpolation error on the derived best-fit value while other confounding factors such as degeneracies between
Figure 8: 6-D Retrieval Evaluations showing the retrieval error across the parameter space. The retrieval error is defined as the absolute difference between the true and retrieved values. Blue points represent retrievals performed using an interpolated data spectrum while orange points represent retrievals performed using true data spectra. Error bars on these points are drawn as the 68% credible regions. The left column shows the full extent of the y-axes, while the right column zooms into the portion of the y-axes contained within the horizontal dashed black lines.
parameters and uncertainty due to the data uncertainties are controlled. Any differences between the results of the two series of retrievals are due to the interpolation error.
As we would expect, the derived retrieval error is either higher for the "true" data spectrum retrievals, or very similar for both "true" and interpolated data retrievals. We also find that the derived retrieval error for both the retrievals performed using a true data spectrum and those resulting from the use of an interpolated spectrum are almost always within \(1\sigma\) of the input value. The only outliers are the "true" data spectrum retrievals for several values for O3, which are \(\sim 1.2\sigma\) from the input value; these points represent the highest-interpolation error points from panel 2 in Figure 7, and we further discuss the specific reasons for the excess error in these regions of parameter space below. Overall, we conclude the the interpolation error is not a significant inhibitor for performing retrievals of planetary spectra in the region of parameter space close to the Earth-like regime using this grid.
#### 3.3.3 Retrieval Performance for High-Error Regions
While the evaluation metrics discussed in Sections 3.3.1 and 3.3.2 show that the interpolation error does not significantly impair retrievals of spectra in the Earth-like regime (recall Table 3), this level of performance does not hold for all regions of the grid's parameter space. We observed remarkably poor retrieval performance (i.e. the retrieved values differ significantly from the true values) in regions of high \(O_{3}\) (\(log_{10}O_{3}>-5\)), low \(P_{0}\) (\(log_{10}P_{0}<-1.7\)), and high \(A_{s}\) (\(A_{s}=0.96\)) where the Bayesian inference converges to incorrect values and around multiple distinct modes appearing as spikes. Figure 9 shows an example of these inaccurate retrievals alongside a more typical case.
This 6-D retrieval of a data spectrum with \(O_{3}=0.014\), \(P_{0}=0.0068\), and \(A_{s}=0.960\) converges to incorrect solutions focused around several \(O_{3}\) and \(P_{0}\) points. These spikes are also present in the \(g\) posterior, but this marginalized parameter does not contribute significantly to the interpolation error (Figure 7). The spikes are not present in the \(H_{2}O\), \(O_{2}\), and \(A_{s}\) marginalized posteriors associated with these retrievals. To confirm that these results are correlated with regions of high \(O_{3}\) and low \(P_{0}\), we produced a similar corner plot for a retrieval using the same high \(O_{3}\) value (\(O_{3}=0.014\)) but with Earth-like \(P_{0}\) and other off-axis parameters (Table 3), shown in the right panel. The right corner plot does not exhibit the same problems present in the low \(P_{0}\) retrieval of the left corner plot. We also examined retrievals involving other combinations of high/low \(O_{3}\) and \(P_{0}\) and found that these issues were not present in other
Figure 9: 2-D posteriors of \(O_{3}\) and \(P_{0}\) resulting from two separate 6-D retrievals of data spectra both with \(O_{3}=0.014\). The corner plot on the left shows a retrieval with a low true surface pressure (\(P_{0}=0.0068\)) while the panel on the right shows a retrieval with an Earth-like value for the true surface pressure (\(P_{0}=1.2\)). The true and retrieved (median) values are shown as blue and orange vertical lines respectively. The retrieval algorithm fails to recover the correct \(O_{3}\) and \(P_{0}\) and converges to several other solutions when \(O_{3}\) is high and \(P_{0}\) is low but is more accurate when both \(O_{3}\) and \(P_{0}\) are high.
cases. Therefore, it is the interaction between high \(O_{3}\) and low \(P_{0}\), not one parameter individually, which causes the degeneracy and poor retrieval results.
We then repeated the retrieval whose results are shown in the left panel of Figure 9 but using an interpolated data spectrum instead of a true one produced by PSG (the same approach used in Section 3.3.2). This allows us to investigate this effect independent of interpolation error. We observed similar spikes in this case as before, but most solutions were centered near the true values. This indicates that the interpolation error contributes to the significant error in the retrieved best-fit values, but is not the cause of the unusual posterior morphology. To investigate this further, we plot the true and retrieved spectra as well as the spectra associated with the left, middle, and right points in the 2-D marginalized posterior shown in Figure 9.
Plotting the true and retrieved spectra alongside other spectra explored by the sampler reveals the degenerate nature of this regime. The lower panel of Figure 10 shows the difference between each spectrum drawn from the posterior and the data spectrum. These differences are relatively minuscule, indicating that each of these spectra are nearly identical despite their different locations in the parameter space. This degenerate behavior may also be inherent to the nature of this particular region of parameter space and the combination of high \(O_{3}\) and low \(P_{0}\) values. These degenerate
Figure 10: Spectra associated with retrievals in the high \(O_{3}\), low \(P_{0}\), high \(A_{s}\) space. ”Left”, ”Middle”, and ”Right” refer to the three darkest points in the 2-D posterior of the left panel of Figure 9. The true spectrum corresponds to the blue vertical lines in that figure while the retrieved corresponds to the orange. The second panel plots the difference between the true spectrum and each other spectrum. Despite differences in parameter values, all spectra are nearly identical.
solutions are particularly acute for atmospheres with high levels of \(O_{3}\). The modeling of \(O_{3}\) in the UV/optical is done in PSG employing cross-sections, which for highly optically thick regimes can be challenging to capture the subtle changes at high opacities on the line cores and wings; this may lead to a relatively high interpolation error compared with the small changes in the spectral shape. Unfortunately, there are no available linelists databases for these bands, so no line-by-line or correlated-k methods can be applied at these wavelengths, which could assist in removing these degeneracies in the modeling.
Examining this issue further, we notice that the positions of these spikes correspond directly to grid points in the case where the data spectrum is not interpolated from the grid (i.e. when interpolation error confounds the sampler's ability to reach the true solution). Since we established in Figure 10 that the spectra in this region are all practically identical, the likelihood values associated with these spectra will also be nearly identical. This will cause the sampler to prefer all degenerate solutions nearly equally. However, because we use grid-based methods where interpolation error causes deviations from the true spectra, interpolated spectra which are closest to the their true versions will be preferred by the sampler because these will be closest to the degenerate solution. This occurs where the interpolation error is the lowest: on the grid points themselves. Therefore, these spikes can be expected when using grid-based methods in a region of high degeneracy.
Adding random scatter to the data spectrum would make this issue less pronounced as the singular true spectrum is altered, but it is unclear if this would fully alleviate the issue; additional work will be needed to further characterize this effect. The results shown here are a worst-scenario for retrievals using this grid.
## 4 Application of Grid-Based Retrievals
With the grid and the PSGnest retrieval framework validated, we proceeded to apply these methods to a scientific case. Extensive studies to understand the potential yields of future direct imaging mission concepts (such as the LUVOIR and HabEx concepts prepared for the Astro2020 Decadal Survey) will be important in designing and optimizing mission architecture and instrumentation. In particular, direct imaging missions incorporating internal coronagraphs to block the light of the central star will be limited in the width of the simultaneous bandpass that can be acquired. Additionally, the use of imaging spectrograph technologies such as an integral field spectrograph (IFS) will limit the spectral resolving power that can be achieved. By comparing the atmospheric constraints achieved with different mission and instrument performance expectations, we can determine the best balance of various aspects of the instrumentation. Similarly, we would like to optimize the total integration time needed to acquire specific constraints on atmospheric parameters, which are driven by the wavelength coverage and S/N of the data being examined.
### Optimal Bandpass Study Methods
To this end, we applied our grid-based retrieval framework to investigate the potential "characterization yields", or constraints on different atmospheric parameters, as a function of different bandpass centers, bandpass widths, S/N, and spectral resolving powers (R). We examined three scenarios for atmospheric bulk density (i.e. \(P_{0}\) and \(g\)), in order to inform the parameterizations of the spectra used in this study: atmospheric surface pressures and surface gravities analogous to those of Mars, Earth, and Neptune. Each of these bulk atmospheric density scenarios has the same gas abundance values as the Earth-like case of F18 (constant VMRs of \(H_{2}O=3*10^{-3}\), \(O_{3}=7*10^{-7}\), \(O_{2}=0.21\)) and surface albedo \(A_{s}\) and planetary radius \(R_{p}\) of a realistic Earth (0.3 and 1 \(R_{\oplus}\) respectively). The planetary scenarios therefore only differ in their surface pressure and gravity, which more closely emulate those of the Solar System planetary analogs. These parameter values are presented in Table 4. Note that the parameterizations for these planetary scenarios were not meant to replicate the respective planets exactly - only to provide different \(P_{0}/g\) archetypes for comparison.
In contrast to the previous retrievals we ran, here the cloudiness fraction is now varying and the planetary radius is fixed at 1 \(R_{\oplus}\). The cloudiness fraction, \(C_{f}\), controls the linear combination of clear (\(s_{clear}\)) and cloudy (\(s_{cloudy}\)) spectra: \(C_{f}\times s_{clear}+[1-C_{f}]\times s_{clear}\). Fixing the planetary radius leads to unrealistically strong constraints on the surface pressure and surface albedo because of the degeneracy between the impact of the planetary radius and the impact of these parameters, but this is necessary since our current retrieval framework does not allow for including physically-based priors on planetary radius with true planetary radii other than 1 \(R_{\oplus}\). Disentangling true \(R_{p}\) values from other factors using only directly imaged reflected-light spectra of planets is essentially impossible, and therefore ancillary constraints are necessary. We leave this type of prior constraint retrieval for future work.
We chose a fiducial parameter set with a bandwidth of 10%, S/N=10, and R=140 to be a baseline for comparison; this is similar to the values assumed in the LUVOIR concept study for initial characterization measurements of Earth-like planets (The LUVOIR Team, 2019). From there, we varied one of the bandwidth, S/N, or resolving power to explore the sensitivity of the constraints to these instrumental factors. A bandwidth of 20%, R of 90, and S/N of 20 were individually adopted for each planetary scenario. To test the impact of bandpass position, we chose 25 evenly-spaced bandpass center positions within the wavelength range of 0.515-1.0 \(\mu\)m, which matches the visible wavelength range of the LUVOIR mission concept (Checlair et al., 2021). The 20% bandwidth cases used the same bandpass centers as the 10% cases but with any bandpasses extending beyond the 0.515-1.0 \(\mu\)m range removed. We took portions of the planetary analog spectra around these center positions corresponding to a given fractional bandwidth. We then ran a retrieval with the given slice of the planetary analog spectrum as the data spectrum. For each of the 25 retrievals performed, we record the median value and upper and lower limits of the 68% credible region of the posteriors (following the recommendations of Harrington et al., 2022). We calculate the Bayes factor for each retrieval run in order to confirm that the constraint achieved on the gaseous parameters can be considered a detection. This was done by subtracting the Bayesian log-evidence of a retrieval performed using minimal gas abundance from that resulting from a retrieval using the gas abundances listed earlier in this section. The resulting differences in log-evidences yields the log-Bayes Factor (\(lnB\); Benneke and Seager, 2013). Log-Bayes Factors greater than 1 represent a weak detection, those greater than 2.5 a moderate detection, and those greater than 5 a strong detection (see Table 2 of Benneke and Seager, 2013). Bayes Factors are not meaningful for non-gaseous parameters because minimal values of these do not represent the absence of features, so this statistic was not calculated for \(C_{f}\), \(P_{0}\), \(g\), or \(A_{s}\).
### Optimal Bandpass Study Results
When examining the efficacy of these four instrumental designs, we are primarily concerned with maximizing the number of parameters constrained at the bandpass center with the shortest central wavelength possible. Minimizing the wavelength of the bandpass center is important for several reasons. First, the throughput of a coronagraph is proportional to \(\lambda/D\) due to the impact of the angular resolution of the telescope. Decreasing the wavelength (\(\lambda\)) improves the angular resolution of the telescope, allowing for planets closer to their hosts stars to be imaged. The habitable zones (HZ) of smaller stars or those further away are smaller in angular separation, so the ability to resolve planets closer to their host stars enables the characterization of HZ planets of smaller and/or more distant stars. Second, for G- and K-type stars, the stellar SED peaks at \(0.5-0.7\mu m\), so positioning our bandpass closer to that wavelength region allows for a greater number of photons in reflected light, increasing the S/N achievable with the same exposure time.
We start by examining the fiducial case of the bandpass study, with a bandwidth of 10%, S/N=10, and R=140. Figure 11 depicts these results. Under the Mars-like scenario, no constraints on any of the parameters are achieved except for \(A_{s}\). This is likely due to the low surface pressure of this case, which diminishes the spectral features of the gases present in the atmosphere. This in turn makes \(P_{0}\) difficult to constrain, which relies on the depth of the features. \(A_{s}\) is constrained in every bandpass. In the Earth-like case, a strong detection of \(O_{2}\) can be achieved using a bandpass centered at \(0.73\mu m\), and simultaneously yield a moderate detection of \(H_{2}O\). Alternatively, a strong detection of \(H_{2}O\) can be made at \(0.90\mu m\), but at the cost of a detection of \(O_{2}\). Under the Neptune-like case, with its high surface pressure and therefore increased feature depth, constraints on \(H_{2}O\) and \(O_{2}\) can be made over a wider span of bandpass centers, but the minimum bandpass center which can be used to constrain both \(H_{2}O\) and \(O_{2}\) is still \(0.73\mu m\)
\begin{table}
\begin{tabular}{c c c c} \hline \hline Parameter Symbol & Mars-like & Earth-like & Neptune-like \\ \hline \(P_{0}\) (bars) & 0.00636 & 1.0 & 10 \\ \(g\) (\(m/s^{2}\)) & 3.71 & 9.8 & 11.15 \\ \hline \end{tabular} Note. – Values of \(P_{0}\) and \(g\) for the planetary scenarios. All other parameter are the same for each scenario. The \(P_{0}\) upper limit for the Neptune-like case is restricted by the parameter space of the grid (recall Table 1)
\end{table}
Table 4: Bandpass Study Planetary Bulk Parameterizations
Constraints on surface pressure and albedo can also be made at similar bandpass centers for both the Earth- and Neptune-like cases.
Next, we examine the case where the bandwidth is 20% while S/N and R are the same as the fiducial values in Figure 12. A wider bandwidth allows for a greater portion of a given spectral feature to be observed, thus yielding higher constraints on the gaseous parameters. While the Mars-like scenario still yields no constraints, the Earth-like scenario shows that strong constraints can be obtained for both \(H_{2}O\) and \(O_{2}\) using a bandpass centered at \(0.74\mu m\). The Neptune-like scenario shows strong detections of both \(H_{2}O\) and \(O_{2}\) as well as a moderate detection of \(O_{3}\) when the bandpass is centered at \(0.70\mu m\). Detections of \(P_{O}\) and \(A_{s}\) are also likely in both the Earth and Neptune scenarios.
Figure 11: Results of the bandpass study for the fiducial case. The grey regions represent the upper and lower limits of the 68% credible region and the red line shows the (median) retrieved values with dots at each bandpass center. The true value for each parameter is shown as a horizontal dashed black line. Regions where the grey region narrows indicate the increased certainty of the Bayesian retrieval algorithm. Colored points along the red line indicate the strength of a detection of the particular parameter at each bandpass center. Red points indicate a weak detection (\(lnB<2.5\)), purple indicate a moderate detection (\(2.5\leq lnB<5\)), and blue indicates a strong detection (\(lnB>5\)). Black points, present for each non-gaseous parameter, represent the non-applicability of Bayes factors to these parameters. Grey vertical lines mark the minimum bandpass centers for each planet where the most parameters can be constrained.
We also examine the case where the S/N is 20 while the bandwidth and R are the same as the fiducial values through Figure 13. Increasing the S/N decreases the uncertainty in the retrieval and enables the sampling algorithm to obtain better constraints on the retrieved parameters. As before, the Mars-like case shows no constraints on any parameter except for \(A_{s}\). In the Earth-like case, strong detections of \(H_{2}O\) and \(O_{2}\) can be made using a bandpass centered at \(0.73\mu m\). Additionally, strong detections of \(H_{2}O\) can be made for bandpasses centered above \(0.82\mu m\). \(H_{2}O\) and \(O_{2}\) can be strongly detected on a Neptune-like planet using a bandpass centered at \(0.68\mu m\) and a moderate detection of \(O_{3}\) can be made here as well. Surface pressure can likely be obtained from these same observations on the Earth- and Neptune-like planets, but a detection of surface albedo on these planets is less likely.
Figure 12: Results of the bandpass study for the case with 20% bandwidth. Aspects of this plot are explained in the caption below Figure 11.
Lastly, we examine the case where R=90 and the bandwidth is the same as the fiducial value in Figure 14. Reducing the spectral resolution reduces the information content of the spectrum, causing fewer data points to be present within spectral features. This effect makes parameters more difficult to constrain using retrieval methods. Here, we adjust the S/N to be 12.5, in order to compare scenarios assuming a constant exposure time (under the assumption that the uncertainty is dominated by photon-noise statistics). Changing the S/N compensates for reduction in resolving power from 140 to 90, so the photons collected in each bin increase by a factor of 140/90. Since we assume the observations are photon-noise-limited, this would increase the S/N by a factor of sqrt(140/90) = 1.25 for the same exposure time, meaning that S/N of 10 would become 12.5. Like before, only \(A_{s}\) is constrained in the Mars-like case. The Earth-like case yields a strong detection of \(H_{2}O\) when the bandpass is centered on 0.90\(\mu m\), but no other strong detections are available. The Neptune-like case can obtain strong detections of \(H_{2}O\) and \(O_{2}\) using a bandpass centered at 0.75\(\mu m\), while strong detections of \(H_{2}O\) are still possible using bandpasses centered a wavelengths higher than 0.80\(\mu m\). \(A_{s}\) is potentially well-constrained in the Earth-like case while \(P_{0}\) can likely be constrained in the Neptune-like case.
Figure 13: Results of the bandpass study for the case with S/N=20. Aspects of this plot are explained in the caption below Figure 11.
To take a closer look at how the posteriors of certain parameters are affected by the bandpass center, width, R, and S/N, we plotted the marginalized posteriors of \(H_{2}O\), \(O_{3}\), \(O_{2}\), and \(P_{0}\) from the test cases atop those from the fiducial case. These posteriors are the results of the retrievals performed at the bandpass centers positioned at the vertical lines in Figures 11 through 14. We only show the posteriors of these parameters for the Earth-like and Neptune-like planets because the no parameters were well-constrained in any of the retrievals on the Mars-like planet (except \(A_{s}\)). These plots show the best-case (in terms of minimal bandpass center) retrievals for the particular bandwidth, R, and S/N cases.
First, Figure 15 compares the posteriors of the fiducial case to those of the 20% bandwidth case. Increasing the bandwidth causes more spectral features to be included in a particular wavelength range, so parameters with narrow features will be better constrained. \(H_{2}O\), \(O_{2}\), and \(P_{0}\) were shown to be well constrained in Figures 11 and 12 and these posteriors are consistent with this finding. The constrains represented in the Neptune-like scenario of Figure 12 are represented here as well. The \(O_{3}\) panel of the Neptune-like column (second panel from the top in the second column) shows that \(O_{3}\) is well constrained using a bandwidth of 20% centered at \(0.70\mu m\) while using a bandwidth half as wide with a bandpass centered at \(0.73\mu m\) yields no \(O_{3}\) constraint.
Figure 14: Results of the bandpass study for the case with R=90 and S/N=12.5. Aspects of this plot are explained in the caption below Figure 11.
Following this, we compare the posteriors of the fiducial case to those of the S/N=20 case using Figure 16. Increasing the S/N decreases the uncertainty of the retrieval, so the retrieved (median) values are closer to their respective true values in most cases. The benefit of increasing the S/N is clear in the \(O_{3}\) panel of the Neptune-like column (second panel from the top in the left column). Increasing this instrumental parameter leads to a posterior which is much more tightly confined around the true value.
Figure 15: Posteriors of the bandpass studying comparing the fiducial to the 20% bandwidth case. The marginalized posteriors for \(H_{2}O\), \(O_{3}\), \(O_{2}\), and \(P_{0}\) are shown in blue for the fiducial (R=140, S/N=10, BW=0.1) case and orange for the 20% bandwidth case. Blue and orange vertical lines show the median values of these retrieval results while a dashed black line shows the true value.
Lastly, we compare the fiducial posteriors to those resulting from the R=90 case in Figure 17. Decreasing the resolving power decreases the information content of the spectrum, making the retrieval of narrow spectral features more difficult. This is true in the Earth-like case where \(O_{2}\) can no longer be constrained when the resolution is decreased. The \(O_{2}\) panel of the Earth-like case (third panel from the top in the right column) shows a totally unconstrained posterior for the R=90 case while the fiducial case (with R=140) is exists tightly around the true value.
Figure 16: Posteriors of the bandpass studying comparing the fiducial to the S/N=20 case. The marginalized posteriors for \(H_{2}O\), \(O_{3}\), \(O_{2}\), and \(P_{0}\) are shown in blue for the fiducial (R=140, S/N=10, BW=0.1) case and red for the 20% bandwidth case. Blue and red vertical lines show the median values of these retrieval results while a dashed black line shows the true value.
These results are summarized in Table 5 for the Mars-like scenario, Table 6 for the Earth-like scenario, and Table 7 for the Neptune-like scenario.
At the current stage in the process of developing the next generation of space telescopes, instrumental designs have not yet been determined. What we know for certain is that observing time will be a limited resource. Therefore, we use the results of this bandpass study to recommend instrumental designs (in terms of bandwidth and R) that will yield the most molecular detections while minimizing observation time (which is proportional to S/N). We focus these recommendations on the Earth-like scenario and prioritize observations centered on shorter wavelengths. Increasing the exposure time of an observation such that the S/N becomes 20 allows for constraints on both \(H_{2}O\) and \(O_{2}\) for observations using a resolution of 140 and bandwidth of 10% centered at 0.73 \(\mu m\), but these same molecular detections can also be made by changing instrumental parameters other than exposure time. If a bandwidth of 20% is used
Figure 17: Posteriors of the bandpass studying comparing the fiducial to the R=90 and S/N=12.5 case. The marginalized posteriors for \(H_{2}O\), \(O_{3}\), \(O_{2}\), and \(P_{0}\) are shown in blue for the fiducial (R=140, S/N=10, BW=0.1) case and green for the 20% bandwidth case. Blue and green vertical lines show the median values of these retrieval results while a dashed black line shows the true value.
alongside a resolution of 140, then both \(H_{2}O\) and \(O_{2}\) can be detected when the S/N is only 10 for observations are centered at 0.73 \(\mu m\). If a bandwidth of 10% is used, then a resolution greater than 90 and less than or equal to 140 is needed to detect both molecules. If the bandwidth and resolution are set to 10% and 90 respectively, then only \(H_{2}O\) can be detected and at a higher wavelength of around 0.90 \(\mu m\). Increasing the bandwidth of future instruments may be the best way to detect a greater number of molecules, while lowering the resolution or bandwidth will necessitate a greater exposure time to achieve the same detections. If only a detection of \(O_{2}\) is required, then this can be achieved using observations centered at wavelengths as short as 0.69 \(\mu m\) if the bandwidth is 20%. Additional work is needed to determine feasible levels resolution and bandwidth parameter can varied. Future works should also increase the fidelity of these experiments.
## 5 Discussion
### Simplifying Assumptions
Several simplifications were employed to reduce the computational complexity of our simulated spectra. Firstly, each simulated cloudy atmospheres uses an isotropic distribution of clouds. We considered using distinct cloud layers but this was causing issues at higher surface pressures, and it would add a high level of model dependence on the
\begin{table}
\begin{tabular}{c c c} \hline \hline Case & Params. Constrained & Min. Bandpass Center (\(\mu\)m) \\ \hline
10\% BW, R=140, S/N=10 & \(O_{2}\) & 0.73 \\
20\% BW, R=140, S/N=10 & \(H_{2}O\), \(O_{2}\) & 0.74 \\
10\% BW, R=140, S/N=20 & \(H_{2}O\), \(O_{2}\) & 0.73 \\
10\% BW, R=90, S/N=12.5 & \(H_{2}O\) & 0.90 \\ \hline \end{tabular} Note. – Only strong detections (\(lnB>5\)) are shown.
\end{table}
Table 6: Bandpass Study Results (Earth-like Scenario)
\begin{table}
\begin{tabular}{c c c} \hline \hline Case & Params. Constrained & Min. Bandpass Center (\(\mu\)m) \\ \hline
10\% BW, R=140, S/N=10 & \(H_{2}O\), \(O_{2}\) & 0.73 \\
20\% BW, R=140, S/N=10 & \(H_{2}O\), \(O_{2}\) & 0.70 \\
10\% BW, R=140, S/N=20 & \(H_{2}O\), \(O_{2}\) & 0.68 \\
10\% BW, R=90, S/N=12.5 & \(H_{2}O\), \(O_{2}\) & 0.75 \\ \hline \end{tabular} Note. – Only strong detections (\(lnB>5\)) are shown.
\end{table}
Table 7: Bandpass Study Results (Neptune-like Scenario)
\begin{table}
\begin{tabular}{c c c} \hline \hline Case & Params. Constrained & Min. Bandpass Center (\(\mu\)m) \\ \hline
10\% BW, R=140, S/N=10 & None & N/A \\
20\% BW, R=140, S/N=10 & None & N/A \\
10\% BW, R=140, S/N=20 & None & N/A \\
10\% BW, R=90, S/N=12.5 & None & N/A \\ \hline \end{tabular} Note. – Only strong detections (\(lnB>5\)) are shown.
\end{table}
Table 5: Bandpass Study Results (Mars-like Scenario)
microphysical assumptions regarding these clouds. The assumed noise in the retrievals performed throughout this work is constant and wavelength-independent, which allow us to better separate modeling/grid errors from a-priori noise considerations.
### Grid Construction Methods
One of the greatest challenges we encountered while building the grid was devising an error metric that would accurately reflect the degree of the interpolation error present in intermediate spectra during retrieval. Retrievals cannot be performed using the grid until the grid is made, yet assessing the performance of the grid in this context is challenging until it can be used in the retrievals themselves. We approached this by calculating interpolation errors at different levels of off-axis parameters, but this does not truly account for the degeneracies present in the retrieval when all parameters are free to vary. Additionally, the choice of an error metric is one that could be further optimized. We attempted to devise a metric which captures the most significant deviations from the true spectrum relative to the true spectrum itself, but found that this error metric we used when building the grid does not correlate well with the interpolation error found when evaluating the grid after it was built. Instead, an error metric which resembles the log-likelihood function (e.g. sum of square errors) may be more closely related to retrieval performance. Furthermore, the error cutoff level which is used to terminate the grid construction algorithm could also be altered. We chose to terminate the construction process at when the top deviations are 10% of the ground truth, but the contrived nature of our error metric limits the interpretability of this cut-off. Future efforts may further explore alternative metrics which are able to better predict error in retrievals before the grid is constructed.
### Grid Based Retrievals as a Means to Investigate Multiple Noise Realizations
Other works (e.g., Feng et al., 2018) have discussed the importance of adding randomized noise to the data spectrum and running multiple retrievals with different noise realization. While this method may be an ideal way to simulate more realistic observations, it is often not done due to the long runtimes associated with running multiple retrievals. However, the grid-based methods discussed here are able to easily perform multiple retrievals using different noise realizations because of the remarkably quick runtime this method enjoys (on the order of seconds). Despite having the capability to enhance the fidelity of our retrievals in this way, we chose to perform retrievals on data spectra without any additional scatter added to the spectral data points. This decision was made to better isolate the numerical effects of multi-dimensional interpolation. Introducing random noise to the relatively small number of spectral points in each bandpass could result in any unusually large stochastic fluctuations in the artificial scatter. This, or any inherent bias in the random number generation, could induce systemic effects on the results of the retrievals which would be difficult to identify and account for across all of the investigations conducted for this study. Future work could use our grid-based methods to study the effect that multiple realizations of random noise would have on retrieval results.
### Grid Point Placement
While we adopted an iterative approach which constructs a grid by placing points at the location of highest interpolation error, other works have proposed different methods. Fisher and Heng (2022) investigated grid point placement by random and Latin hypercube (LH) sampling and compared these methods to traditional evenly-spaced linear sampling (Allard et al., 2001; Goyal et al., 2018; Marley et al., 2021). They found that grids produced using random or LH sampling outperform those produced using linear spacing for all but the lowest grid dimensionalities. Unfortunately, direct comparisons between our results and those of Fisher and Heng (2022) are difficult for multiple reasons. First, they compute spectra intermediate to grid points using a random forest machine learning model whereas we used linear interpolations. These techniques may perform fundamentally differently in such a way that one form of grid sampling is optimal for one method while another form of grid sampling is optimal for the other - future work should investigate this. Second, their grids implement different atmospheric parameters than ours. As shown in Figure 7, the interpolation error can vary greatly between parameters. While they found random and LH grid point sampling to perform better than linear sampling (especially at higher grid dimensions), this approach should be used with caution when considering parameters with highly nonlinear effects such as \(O_{3}\). Random sampling could easily miss critical points between spectra morphologies caused by subtle differences in parameters like \(O_{3}\). However, random and LH sampling are advantageous for grids which include a large number of parameters as our iterative sampling approach (as well as linear methods) is susceptible to the curse of dimensionality.
Throughout this work, we have demonstrated methods for constructing, validating, and deploying precomputed grids of model spectra for use in the atmospheric retrievals of exoplanets. Interpolating spectra from a grid and using these as the models within a Bayesian framework significantly accelerates retrievals compared to traditional methods of calculating each model spectrum on-demand, reducing runtimes from days or weeks to seconds or minutes using a standard high-end laptop. Though interpolation from a grid will induce some error into the model comparison spectrum and will therefore prove to be somewhat less accurate than a true radiative transfer calculation, the extreme efficiency of grid-based retrievals can enable a host of new studies that were previously thought to be computationally infeasible. The methods presented here can be used to make any other grid with any other combination of variables, in which case some of the same issues we observed in this work may no longer be relevant (Figure 3.3.3).
Our evaluation procedure reveals that the linear interpolation between spectral grid points for our grid are generally very accurate (average \(R^{2}\)=0.998) and retrieval performance is not significantly inhibited by this interpolation error (Figure 8). However, there are particular regions within the parameter space explored in this work which are problematic for retrievals. This is due in part to the interpolation error of these regions (Figure 7 and also the degenerate nature of spectra in this region (Figure 10. Future works should avoid using this grid for retrievals involving concurrently high \(O_{3}\)\((log_{10}O_{3}>-5)\), low \(P_{0}\)\((log_{10}P_{0}<-1.7)\), and high \(A_{s}\)\((A_{s}=0.96)\). Furthermore, we have shown the grid-based techniques can be used to enable a variety of studies which were previously regarded as computationally infeasible, such as simulating yields from future observations using a variety of instrumental setups. Our techniques for constructing and evaluating model grids can be applied to a wide variety of use cases. For example, they could be used to improve the grid-based methodologies employed in the James Webb Space Telescope Early Release Science spectral analysis by enabling the construction of chemistry model grids with minimal interpolation error and thereby improve the accuracy of their retrievals (Alderson et al., 2023; Rustamkulov et al., 2023). Future works may choose to use the grid presented in this work or employ our methods to build a grid of their own.
As a first application of the capabilities of our grid-based retrieval methods using our 6-parameter grid, we conducted an examination of the sensitivity of retrieval results to instrument and observation design parameters. Many details pertaining to the design of future observatory instruments have not yet been determined, and we can utilize a yield analysis of different instrument configurations in order to help inform these decisions. We performed a sequence of retrievals using simulated observations of Solar System planet analogues centered at 25 different wavelength positions within the proposed LUVOIR visible-light wavelength range of 0.515-1.0 \(\mu m\). Four instrumental designs were explored which varied the spectral bandwidth, resolution, and exposure time (via S/N). We found that detections of \(H_{2}O\) and \(O_{2}\) in the atmospheres of Earth-like planets can be made using observations centered at 0.74 \(\mu m\) simultaneously if a 20% bandwidth is used. Using an S/N of 20 can yield detections of the same molecules at a similar bandpass center. If the resolution is 90, then only \(H_{2}O\) can be detected and at a much longer wavelength of 0.90\(\mu m\). From these results, we conclude that detections of both \(H_{2}O\) and \(O_{2}\) are obtainable on Earth-like planets if only the bandwidth is increased; increasing exposure time is not necessary. Similar tests performed using resolutions between 90 and 140 are needed to determine the minimum resolution necessary to detect both \(H_{2}O\) and \(O_{2}\) when the bandwidth is 20%. In general, a broader variety of instrument parameters should be examined using methods like those shown here in order to make more definitive recommendations for future designs. Future work could improve on this by increasing the fidelity of these experiments.
The authors would like to thank the Sellers Exoplanet Environments Collaboration (SEEC) and ExoSpec teams at NASA's Goddard Space Flight Center for their consistent support.
|
2303.17881 | Pentimento: Data Remanence in Cloud FPGAs | Cloud FPGAs strike an alluring balance between computational efficiency,
energy efficiency, and cost. It is the flexibility of the FPGA architecture
that enables these benefits, but that very same flexibility that exposes new
security vulnerabilities. We show that a remote attacker can recover "FPGA
pentimenti" - long-removed secret data belonging to a prior user of a cloud
FPGA. The sensitive data constituting an FPGA pentimento is an analog imprint
from bias temperature instability (BTI) effects on the underlying transistors.
We demonstrate how this slight degradation can be measured using a
time-to-digital (TDC) converter when an adversary programs one into the target
cloud FPGA.
This technique allows an attacker to ascertain previously safe information on
cloud FPGAs, even after it is no longer explicitly present. Notably, it can
allow an attacker who knows a non-secret "skeleton" (the physical structure,
but not the contents) of the victim's design to (1) extract proprietary details
from an encrypted FPGA design image available on the AWS marketplace and (2)
recover data loaded at runtime by a previous user of a cloud FPGA using a known
design. Our experiments show that BTI degradation (burn-in) and recovery are
measurable and constitute a security threat to commercial cloud FPGAs. | Colin Drewes, Olivia Weng, Andres Meza, Alric Althoff, David Kohlbrenner, Ryan Kastner, Dustin Richmond | 2023-03-31T08:32:40Z | http://arxiv.org/abs/2303.17881v1 | # Pentimento: Data Remanence in Cloud FPGAs
###### Abstract
Cloud FPGAs strike an alluring balance between computational efficiency, energy efficiency, and cost. It is the flexibility of the FPGA architecture that enables these benefits, but that very same flexibility that exposes new security vulnerabilities. We show that a remote attacker can recover "FPGA pentimenti" - long-removed secret data belonging to a prior user of a cloud FPGA. The sensitive data constituting an FPGA pentimento is an analog imprint from bias temperature instability (BTI) effects on the underlying transistors. We demonstrate how this slight degradation can be measured using a time-to-digital (TDC) converter when an adversary programs one into the target cloud FPGA.
This technique allows an attacker to ascertain previously safe information on cloud FPGAs, even after it is no longer explicitly present. Notably, it can allow an attacker who knows a non-secret "skeleton" (the physical structure, but not the contents) of the victim's design to (1) extract proprietary details from an encrypted FPGA design image available on the AWS marketplace and (2) recover data loaded at runtime by a previous user of a cloud FPGA using a known design. Our experiments show that BTI degradation (burn-in) and recovery are measurable and constitute a security threat to commercial cloud FPGAs.
## 1 Introduction
Amazon, Microsoft, Alibaba, Baidu, Huawei, TenCent, and Nimbix offer FPGAs as an on-demand cloud service. FPGAs efficiently accelerate common cloud applications including neural networks [18], video transcoding [1], genome sequencing [14], secure database transactions [5], networking [49], and homomorphic encryption [48].
Unfortunately, cloud FPGAs open the door to new security vulnerabilities related to confidentiality [19, 21, 55, 56, 69], integrity [11, 31, 37, 40, 50], and availability [24, 43]. Signal timing sensors [23, 70] have been used to extract cryptographic keys of active computation within the FPGA [55], identify the active computation running within the FPGA [25], implement covert channels across dies on a 2.5D integrated package [19], and perform attacks across chips on the same board [20, 56].
These attacks require that the attacker and victim are spatiotemporally co-located on the same system. For this reason, cloud FPGAs are often only temporally shared; they do not allow multiple users to co-exist spatially on the same FPGA. Conventional wisdom says information leakage will not occur if the FPGA is correctly erased after use. Thus, after a user relinquishes the cloud FPGA, the FPGA is wiped [7, 36], and at some point, rented to another user.
The attacks presented in this work exploit a side channel that allows an attacker to target previous users of the FPGA even after a wiping procedure is performed. The victim is no longer performing computation or renting the FPGA; the victim no longer resides on the device, _and the victim has left no logical information on the device_.
We show that data from previous users can be extracted via an analog side channel due to bias temperature instability (BTI) aka _burn-in_. We call these "_FPGA pentimenti"_ - the analog residue of the previous digital data that remains on the FPGA due to BTI effects. FPGA pentimenti are recoverable by sensing BTI recovery using time-to-digital (TDC) sensors. Our experiments show that FPGA pentimenti are a real and extant threat to the security of cloud FPGAs. Much like infrared imaging can expose art work pentimenti (previous paint strokes since painted over by an artist) not visible to the naked eye, we demonstrate that attackers can exploit _FPGA pentimenti_ (previous design and user data digitally wiped by the cloud provider).
BTI physically deteriorates transistors, thus negatively affecting their propagation delay. The BTI effect is caused by applying positive/negative (1/0) voltages to CMOS transistors. _BTI recovery_ occurs when the transistors are no longer stressed; the transistors partially revert to their previ
ously faster state. Transistors undergo negative and positive BTI caused by applying logical \(0\) and \(1\) values, respectively. NBTI and PBTI degradation is not symmetric. NBTI effects are typically larger than PBTI. NBTI recovery is also faster. By measuring the speed and size of the recovery, an attacker can deduce if a previous value as a \(1\) or \(0\).
We show how to measure BTI effects on cloud FPGAs to exploit an analog temporal side channel that leaks data between successive users of a cloud FPGA. BTI is measured using a TDC sensor that records the time to propagate a pulse through FPGA resources. The change in propagation delay reflects the previous values held by those resources due to BTI effects. We show how this opens a side channel that can be exploited by an attacker to ascertain previous design and user data.
Our work describes, for the first time, how BTI effects are a security concern for commercial cloud FPGAs. We demonstrate the ability to recover pentimenti in remote FPGAs to expose two violations of the cloud FPGA security model when the "skeleton" (the physical structure, but not the contents) of the design is known to the attacker: An attacker can (1) extract proprietary details or keys from an encrypted bitstream accessible via the cloud platform (i.e., the AWS marketplace) and (2) recover non-transient runtime data from a previous user of a cloud FPGA device by observing the BTI recovery via circuit timing changes.
We experimentally validate the burn-in threat on a AWS F1 (Virtex UltraScale+) cloud FPGA and on a local ZCU102 (Zynq UltraScale+) FPGA. In both cases, we demonstrate a discernible difference of the burn-in behavior on an FPGA route before and after BTI degradation. The timing behavior of that route is dependent on what the previous value was, and thus an attacker is able to carry out attack (1) and (2) on the AWS F1 platform as detailed in Section 2.
We contextualize these findings on the OpenTitan hardware root of trust - an open-source hardware design with strict data security requirements [44]. Roots-of-trust carry out core, security-critical functionalities such as secure boot, configuration of operative modes (e.g., debug vs normal), and management of sensitive of data (e.g., cryptographic keys). OpenTitan security assets are vulnerable to the burn-in threat.
Section 2 presents the cloud FPGA threat model. Section 3 provides background on the effects of BTI transistor degradation, which are measured by the sensor presented in Section 4. We construct an experiment to verify the burn-in threat model generally on an Ultrascale+ FPGA and specifically on the AWS F1 platform in Section 5, which is carried out in Section 6. We relate this paper to prior efforts in Section 7, discuss mitigations in Section 8, and conclude in Section 9. **Disclosure:** The necessary steps have been taken to alert affected vendors. Amazon Web Services was originally notified July 2022. Xilinx was originally notified in August 2022.
## 2 Threat Models
The threat models extract side channel information about previous cloud FPGA user data via temporal analog residue, aka "pentimenti", that arise from BTI recovery effects. Our discussion is framed in the context of the AWS F1 platform though it applies to other cloud FPGA platforms. AWS enables customers to share/sell preexisting designs to other AWS users through the AWS marketplace. AWS provides these designs as an Amazon Machine Image (AMI) and Amazon FPGA Image (AFI). The AFI provides the FPGA bitstream. The AMI is a Linux machine that interfaces with the AFI.
Figure 1 shows the general approach of our threat models. 1 A user rents and loads a design containing confidential information (denoted by the green key). 2 The design remains programmed on the FPGA and computes for some number of hours, allowing the user data to experience BTI effects and burn-in (red key). The victim FPGA is
Figure 1: **Pentimenti FPGA Threat Models:**1� A design containing confidential information (green key) is loaded onto the FPGA. 2 After this design runs for some number of hours, parts of the design are imprinted – a pentimento (red key) is left on the FPGA due to analog remanence from BTI effects. 3 The attacker loads their design with a BTI sensor to extract the pentimento based on the recovery timing effects.
released back into the rental pool. It undergoes a design wipe performed by AWS to reset the system and clear out any data remanence [7, 36]. 3 The attacker gains access to the FPGA and loads the TDC sensor to extract the pentimenti - the analog residues of the previous digital data that remains on the FPGA due to BTI effects.
With this setup, we can extract two types of previously safe data using the techniques presented in this paper: **Type A** design data and **Type B** user data.
**Type A (Design Data):** FPGA designs often contain confidential information as netlist constants, e.g., cryptographic keys or machine learning weights. The AFI promises to keep such proprietary design information secret. A purchased AFI does not permit the user access to the FPGA source code or bitstream1 to preserve intellectual property rights. But this sensitive information can be extracted via their pentimenti as we show in this paper. We call this secret information baked into the design **Type A** data, and the victim is the AFI publisher.
Footnote 1: The bitstream is the binary file used to program the FPGA.
**Type B (User Data):** Type B data is from a previous user of the FPGA. The previous user loads confidential information onto an AFI at runtime. Since the attacker does not control the loading and unloading of the design, an attack cannot rely on gathering initial delay estimates (as can be done for Type A data). Thus, extracting Type B user data is more challenging but powerful attack that requires measuring BTI recovery.
_The difference between **Type A** and **Type B** is subtle but shifts the target of an attack from being the publisher of a design/AFI (**Type A**) to the user of design/AFI (**Type B**)._ The threat models differ, but both follow the steps depicted in Figure 1.
**Threat Model 1 - Proprietary Design Data Extraction:** Threat Model 1 targets Type A Design Data encoded into the design itself, e.g., a netlist constant holding a cryptographic key or machine learning weight. The attacker is renting the design, satisfies Assumptions 1 and 2 (discussed later), and can control the loading and unloading of the design. AWS guarantees to keep design intellectual property secret [7], thus Threat Model 1 violates AWS F1 security guarantees.
The attacker extracts proprietary design information via the following:
1. A malicious AWS F1 user rents an FPGA instance with the intention extracting sensitive information from a third party design.
2. The attacker measures the routes that will hold the sensitive data and gather pre-burn-in route delay characteristics.
3. The attacker loads a target design in Stage 1 of Figure 1 that contains sensitive information stored in the FPGA routes.
4. The attacker executes the target design until Stage 2 of Figure 1 when the BTI effects burn-in the FPGA routes holding sensitive information.
5. The attacker initiates the attack phase (Stage 3 of Figure 1). They unload the victim design and load a measure design that contains the TDC sensor from Section 4 to measure the BTI degradation of the victim routes via their timing behavior.
6. The attacker analyzes sensor data to determine the sensitive information from the victim design with high probability.
**Threat Model 2 - Confidential User Data Extraction:** The attacker recovers confidential data from a previous victim tenant of the cloud FPGA (Type B). This model assumes that the attacker can requisition an FPGA after the victim has finished computing. The attacker extracts confidential user data via the following:
1. A non-malicious AWS F1 victim user loads a design in Stage 1 of Figure 1. This design contains sensitive information either stored statically in FPGA bitstream (e.g., a netlist constant) or in data loaded at runtime.
2. The victim design executes, during which the sensitive data is statically held in the FPGA resources. After some time, the victim design has induced the burn-in effect (Stage 2 of Figure 1).
3. The victim completes their computation and relinquishes the FPGA back into AWS's pool of available devices.
4. The attacker instantiates an AWS instance and is assigned the relinquished victim device.
5. The attacker loads in Stage 3 of Figure 1 an FPGA design that contains TDC sensors connected to victim resources that previously held sensitive information.
6. The attacker analyzes TDC sensor results to determine the sensitive victim data with high probability.
The difference between these two threat models shifts the attack target from the producers of the design IP (**Threat Model 1**) to a previous user the cloud FPGA (**Threat Model 2**). Both threat models are a fundamental violation of the AWS FPGA F1 security guarantees. AWS guarantees that "no FPGA internal design code is exposed" [7] through an AFI leased from the marketplace, meaning **Threat Model 1** should not occur. Furthermore, AWS states that they scrub "FPGA state on termination of an F1 instance," [7] meaning **Threat Model 2** leakage should not occur. Our results demonstrate the feasibility of these threat models, which show that burn-in is recoverable using a TDC sensor.
Our threat models rely on two assumptions. Assumption 1 covers Type A data and Threat Model 1. Type B data and Threat Model 2 make an additional assumption.
**Assumption 1:** The attacker knows the placement, or "skeleton", of the targeted design routes2 that contains confidential design information (Type A) or sensitive user data (Type B).
Footnote 2: The wire segments inside the FPGA holding the targeted data.
The attacker's knowledge of the sensitive information's location could be derived from a publicly available design or bitstream. For example, the OpenTitan hardware root of trust distributes a prebuilt bitstream that a user loads with sensitive information like cryptographic keys [45]. Xilinx
FINN provides prebuilt bitstreams for different neural network architectures [65]. In both cases, the complete source code and compilation scripts are available, which allows one to determine the locations of the sensitive data - the keys for OpenTitan and the neural network weights for FINN.
Other options to learn the target route places include: 1) the attacker is the original author of the AFI on the AWS marketplace and knows design route details. 2) Proprietary information about the design layout has been leaked to an attacker. Finally, when evaluating an implementation's security, it is common practice to assume the architecture is publicly visible [46].
Thus, we believe it is reasonable to assume that the attacker knows the placement information (Assumption 1). Loosen or removing this assumption would strengthen the threat model, and we are considering ways to expand the threat model without Assumption 1 in future work.
**Assumption 2:** The attacker has the ability to gain access to the same FPGA the victim relinquished. Gaining access to a relinquished cloud FPGA requires aspects of cloud cartography and co-location attacks [6, 30, 54, 66, 68] that check out devices en masse or leveraging cloud FPGA fingerprinting techniques [59, 60, 61, 62]. Another potential option is a flash attack where the attacker locks up the available stock right before the victim releases their instance. If the attacker procures all the available resources, they are guaranteed to obtain the relinquished victim board. In our AWS experimentation we commonly received errors implying that we have reached the limit of F1 devices in the region, suggesting that this flash attack could be accomplished through acquiring only a few devices.
## 3 Bias Temperature Instability (BTI)
Bias temperature instability (BTI) is a degradation behavior of transistor transconductance, subthreshold slope, and linear and saturation drain current fundamental to modern field-effect transistors [38, 39]. _Negative BTI (NBTI)_ occurs when the PMOS transistor gate voltage is negative relative to its other terminals (0/False logical value), which results in positive charge migration into the silicon dioxide insulation. _Positive BTI (PBTI)_ affects NMOS transistors when its gate voltage is positive relative to the other terminals (1/True logical value), resulting in negative charge migration into the insulating dielectric. BTI effects accumulate under voltage stress, increasing the threshold voltage of their respective transistor types, and consequently increased transistor rise and fall transition delays [41].
CMOS logic gates (e.g., NAND, NOR, and INV) are built from PMOS and NMOS transistors. Static 0/1 inputs on logic gates cause NBTI/PBTI degradation on PMOS/NMOS transistors. The degradation affects the rising (0 \(\rightarrow\) 1) and falling (1 \(\rightarrow\) 0) propagation delays through the logic gates. Figure 2 details how a CMOS inverter undergoes data-dependent BTI effects which cause timing deviations captured by the difference in rising and falling propagation delays through the inverter.
When BTI-causing values are removed, there is a partial threshold voltage recovery [51, 52, 13, 53] that increases the transistor switching speed. NBTI and PBTI recovery differs in mechanism and recovery rate [27, 28, 35]. NBTI recovery is due to defect removal via the recuperation of broken bonds with positively-charged hydrogen atoms [33, 51]; PBTI recovery is due to the removal of trapped negatively-charged electrons in the transistor dielectric [39]. The defects from PBTI electron charge traps are energetically deeper than NBTI positive charge traps [67] and affect the recovery timescale of PBTI relative to NBTI. BTI recovery effects are measurable, the differences between positive and negative recovery characteristics are apparent, and we can exploit them as a temporal side channel.
BTI creates pentimenti - analog remenance of previous design state and data. BTI effects are differentiable; they encode data about the prior state, e.g., if the route under test was previous a 0 or 1 value. The rate and degree of BTI effects are driven by constant voltages and dynamic switching [58], and has a predictable effect. Remenance is an essential condition to recover any logical information and necessary for Threat Models 1 and 2. BTI effects are not-permanent. BTI degradation is not a permanent artifact and undergoes a recovery period where the transistors become faster [51, 52, 13, 53, 8]. This required for Threat Model 2 since it requires measuring remanance of previous user values that are no longer present.
FPGAs contain many resources that undergo BTI and can be targeted in pentimenti attacks: bitstream configura
Figure 2: Bias temperature instability (BTI) effects induce circuit delays that differ depending on the data computed. An inverter is pictured on the left, composed of a PMOS (bottom) and NMOS (top) transistor. A \(V_{in}\) value of 0 (1) will allow current to flow through the PMOS (NMOS) transistor, causing it to degrade through NBTI (PBTI). As NBTI (PBTI) manifests, the speed in which the inverter stabilizes on an output, \(V_{out}\), as \(V_{in}\) equals 0 (1) will begin to slow. \(\Delta\)ps is the difference between the 0-input propagation speed (measuring PBTI) and the 1-input propagation speed (measuring NBTI). \(\Delta\)ps will vary depending on whether the inverter was previously computing a 0 or 1 value; thus, it can be used to infer the values of a previous computation.
tion bits, programmable routing, configurable logic blocks (CLBs), digital signal processors (DSPs), and block RAMs (BRAMs). In order to perform a successful attack, the victim resource should meet the following conditions:
* a necessary condition to recover Type A and B data. For example, a route is statically held at constant value.
* **BTI effects must be differentiable:** The target resource should exhibit differences in circuit-level behavior due to BTI degradation and recovery. For example, the route delay profile differs based on whether it was previously held at logical 0 or 1.
* **BTI-effected resources must be observable:** Targeted resources must be in user-visible locations on AWS F1. Some cloud FPGA resources are inaccessible by the user, e.g., resources implementing the AWS shell. The attacker is limited by the interfaces exposed by the cloud provider. The attacker do not have physical access. They cannot use special sensing instrumentation. The BTI sensor must be implementable by any user, without elevated privilege, and pass design rule checks.
FPGA programmable routing meets all three conditions. Specifically, we target the route between an FPGA register and a CLB. The programmable route can be composed to be arbitrarily long sequences of transistors to increase the observable BTI effects, and it is trivial to use as a route. Additionally, programmable routes often carry sensitive data (e.g., encryption keys and machine learning weights). Thus, verifying that a route between a register and LUT is vulnerable to a pentimenti attack threatens the data integrity of most FPGA designs.
## 4 BTI Sensor
Our threat models depend on the ability to detect BTI effects in cloud FPGA resources. BTI degradation and recovery manifests as changes in the timing delays of the victim resource. BTI effects differ depending on the previous state that the victim held on that resource.
Time-to-digital converters (TDCs) measure ns-scale timing changes by sensing the propagation delay through an FPGA-instantiable delay line. There exists a large body of prior work on implementing TDC sensors within cloud FPGAs [15, 20, 22, 25, 55, 56]. Our experiments use the open-source Tunable Dual-Polarity TDC [15].
Figure 3 demonstrates how to use a Tunable Dual-Polarity TDC sensor [15] to measure BTI effects. The original sensor is designed for power measurement; we amend it to perform BTI timing measurement and exploit the threats presented in Section 2. The constituent structures of the TDC are presented below:
**Programmable Clock Generator:** This component generates the two clock domains: the Launch Clock and Capture Clock. These two clocks are identical in frequency with a runtime-programmable phase relationship defined by \(\theta\). Two clocks are necessary as the TDC is comparing how long it takes for the signal from the Launch Clock to reach a destination within the Carry Chain compared to when the Capture Clock causes the Capture Registers to sample. The Launch Clock first must be converted to a logic signal, which is performed by the Transition Generator.
**Transition Generator:** This component is responsible for sending positive (0 \(\rightarrow\) 1) and negative (1 \(\rightarrow\) 0) transitions through the the Route Under Test, Carry Chain, and into the Capture Registers. The same \(\theta\) defines the relationship between the signal traversing the Route Under Test/Carry Chain is launched and when it measured in the Capture Registers. When the sensor is loaded onto an FPGA, \(\theta\) is set to 0; an offset of \(\theta\) is consistent between sensor design loadings. When \(\theta\) is set correctly, a transition will be propagating through the delay line when the Capture Registers are clocked and record a metastable transition region. The distance that transition propagates is called the propagation distance, and is related to the propagation delay of the logic in the Route Under Test and Carry Chain.
**Route Under Test:** The primary intent of the TDC is to measure the timing delay through some FPGA programmable route that is affected by burn-in. When \(\theta\) is properly configured the output of the Capture Registers reflects how far a rising or falling signal has propagated, and consequently the timing delay though the Route Under Test. BTI degradation causes the propagation delay to increase. The propagation delay decreases during BTI recovery.
**Carry Chain:** The primary structure of the TDC is a long linear array of delay elements. This is formed by a series of combinatorial logic elements that are able to propagate rising (0 \(\rightarrow\) 1) and falling transitions (1 \(\rightarrow\) 0). Ideally, each element is identical, with a timing delay of \(\tau\), so that the propagation of signals is uniform at every stage of the Carry Chain. To ensure consistency throughout the chain, the delay elements are uniformly placed and routed in consecutive physical locations. Our chosen sensor uses the fast look-ahead CARRY primitives of the Xilinx FPGA devices to construct this chain.
**Capture Registers:** Each element of the Carry Chain is output to a register, forming the Capture Registers. These registers are activated synchronously by the Capture Clock, that performs a capture of the state of the Carry Chain. If a rising (0 \(\rightarrow\) 1) or falling transition (1 \(\rightarrow\) 0) is propagating through the Carry Chain, and the Capture Registers are activated, the distance that signal has traveled will be captured.
**Propagation Distance:** Each falling and rising transition is captured at the output registers, as shown at the bottom of
Figure 3. Rising Transition 0 shows that the \(0\to 1\) transition reached Output[38]. Falling Transition 0 shows that the \(1\to 0\) propagated to somewhere between Output[21] and Output[23], with some metastability between the two points. Rising Transition 1 propagates differently; the \(0\to 1\) transition propagates to between Output[36] and Output[39]. Similarly, Falling Transition 1 propagates to between Output[20] and Output[23]. These changes can represent deviations in the timing through the Route Under Test (i.e. burn-in).
Post Processing:The output of the Capture Registers can be processed into a single value that represents the propagation time of a signal through a Route Under Test. This is done by computing the _Binary Hamming Distance_ of the output registers. This is defined for the rising transitions as the binary Hamming distance from 64'h_0000_0000_0000, and for falling transitions, the binary Hamming distance from 64'h_ffff_ffff_ffff_ffff. The _Binary Hamming Distance_ of the example samples in Figure 4 will yield the sequence: 39, 22, 38, 22.
## 5 Experimental Setup
We perform a series of experiments to determine the extent of BTI effects on programmable routing and the ability to recover the digital data that was previously held on those routes. We perform the first experiments locally on a ZCU102 Ultrascale+ board. Subsequent experiments are performed remotely on AWS F1 instances. The experiments use a target design and a measurement design described in Section 5.1. Section 5.2 details the three experimental phases that calibrate the sensor, conditions (performs burn-in) on some target routes, and measures the BTI effects on the target routes. Section 5.3 determine the approximate target route lengths for assets from the OpenTitan hardware root of trust. The experiments in Section 6 demonstrate the ability to execute Threat Models 1 and 2 remotely on an AWS F1 instance.
### Experimental Designs
Our experiments use two independent FPGA designs. The _Target_ design holds the routes under test at a constant 1 or 0 value for a pre-determined duration to induce BTI effects. The _Measure_ design records changes in route propagation delays caused by BTI effects. The data held on these routes represent the Type A or B logical data that an attacker wishes to recover.
Target Design:Figure 4 presents the _Target_ design that biases a set of routes by holding them to a fixed 0 or 1 value. We represent a logical 1 bias with the color green and logical 0 bias with the color red.
The region of slices above the routes under test is explicitly left uninitialized (no logic may be placed there) during the compilation process. These slices will be used by the _Measure_ design for the placement of Carry Chains. Using these slices could introduce noise, or worse, erroneous results, into our propagation delay measurements of the test route. In theory this noise would be minimal as the length of the route through these slices is significantly shorter than the route being tested for BTI. While this does not negate all possible burn-in effects in the slices, since the
Figure 3: **Time-to-Digital Converter (TDC) sensor that measures BTI effects in FPGA routes. A Programmable Clock Generator produces two clock domains: the Launch Clock and Capture Clock. The Launch Clock domain generates a signal that propagates through a test route and into a textttCarry Chain – an array of linear delay elements (64 in this example). The signal moves through the Route Under Test and into the TDC Carry Chain, when it is recorded as indicated by the Capture Clock. The TDC Capture Registers record the signal propagation distance. This provides is a measure of propagation delay of the Route Under Test. Taking measurements over time records changes of the propagation delay due BTI remnants stored on the transistors.**
-vendor-determined state of an uninitialized slice could also introduce BTI effects, it at least suggests that all slices will be affected equally. This is consistent with **Threat Model 1**, as the attacker may be the one publishing the maliciously constructed AFI, and thus has the control to leave these slices empty.
The routes containing **Type A** and **Type B** information is surrounded by other computation. We generalize these structures as Arithmetic Heavy circuits implemented as arrays of logic performing a pipelined fused multiply-add operation (similar to a machine learning or lattice cryptography accelerator). This has the added benefit of accelerating the BTI effect through increased heat generation.
**Measure Design:** Figure 5 presents a high level view of the architecture for measuring the propagation delay of the routes under test. These routes represent the **Type A** or **Type B** data an attacker intends to recover. By tracking the change in this propagation delay caused by BTI degradation and recovery, the side channel can be exposed to exploit **Threat Model 1** and **2**.
Each of these routes is a single Route Under Test from Figure 3 and we instantiate an array of TDCs as presented in Section 4. No extraneous routing outside of the the Route Under Test is used to connect the Transition Generator to the Carry Chain. Identical routing constraints from the _Target_ design are used to generate the routes for the _Measure_ design. The routes test a variety of lengths and placements in order to build a general understanding of how burn-in is affected by route characteristics.
### Experimental Phases
These designs will then be used to form three experimental phases: _Calibration_ to configure the TDCs of the _Measure_ design, _Condition_ to induce the BTI effect on a predefined set of routes, _Measurement_ to measure BTI effects.
**Calibration Phase:**_Calibration_ is the first phase, and determines a baseline \(\theta\) value that captures rising and falling transitions traveling through the tested routes into the Carry Chain and Capture Registers. The TDC alone does not provide an absolute measure of the change in propagation delay through a tested route. We can use the TDC determine the change in the propagation delay by examining the increase or decrease over time of the _Binary Hamming Distance_ output of the sensor tuned to this baseline \(\theta\) value. To create a baseline, a short series of \(2^{4}\) samples (called a trace) is taken from each TDC as \(\theta\) is iteratively reduced until the rising and falling transition appear in the output registers. We call this value \(\theta_{init}\), and an individual value is computed and saved for every route under test in the _Measure_ design.
**Condition Phase:** The _Target_ design is loaded onto the FPGA and a pre-defined set of _burn values_ is applied to the
Figure 4: The **Target Design** conditions a set of pre-determined routes to 1/VCC (green) or 0/GND (red) aka the _burn value_. This induces a BTI effects on transistors of each route. The Arithmetic Heavy circuit is used emulate the surrounding logic of many FPGA computations, and also increases on-chip temperature to accelerate BTI. The center of the design is left empty; it will be used in the measure design (Figure 5).
Figure 5: The **Measure Design** records the BTI degradation of multiple Routes Under Test using TDC sensors. As per Section 4, the Transition Generator is used to send rising (\(0\to 1\)) and falling (\(1\to 0\)) transitions through the tested routes. The changing propagation delay of these signals indicates the BTI effects on that route.
routes under test. These _burn values_ will either be **Type A** or **Type B** information that will induce variable BTI effect based on their value. The Arithmetic Heavy circuits are activated in this phase to emulate user computation and exacerbate BTI degradation.
**Measurement Phase:** The _Measurement_ phase loads the _Measure_ design and tunes all of the TDCs to their respective \(\theta_{init}\). Ten traces are taken from each TDC as \(\theta\) is iteratively decreased from \(\theta_{init}\), to avoid relying on a single trace that could be affected by architectural irregularities [15, 17, 21]. For each route, the mean _Binary Hamming Distance_ is computed on across all samples within each trace, and then the mean of all traces is computed to obtain a single value representing the propagation delay through the route under test. This value is converted to a measure of time based on a derived relationship of \(\frac{2.8ps}{bit}\) for UltraScale+ parts [15, 64]. Any deviation in this value represents BTI-induced variation on a route.
### OpenTitan Hardware Root of Trust
We study the OpenTitan hardware root of trust (RoT) to provide a context for a realistic target for our threat models. OpenTitan is a commercial-grade, open-source hardware root of trust [44]. The OpenTitan Earl Grey integrated into systems to carry out core, security-critical functionalities related to trusted platform module (TPM), platform integrity, and 2nd factor authentication. The OpenTitan consists of a security-enhanced RV32IMCB RISC-V Ibex core, cryptographic IP cores (e.g., AES, KMAC, HMAC), and memories (e.g., ROM, eFLASH, SRAM, OTP) protected by access control mechanisms.
OpenTitan is an open-source hardware design; all design files are available online. OpenTitan encourages a design flow where the user solely modifies the boot ROM (the data used to initialize the FPGA memory) of a pre-built bitstream. Thus, it adheres to threat model assumption 1 (Section 2) that the assets locations are known.
OpenTitan has many important security assets that govern its operation. Assets include cryptographic keys to encrypt data stored in off-chip memories (e.g., one-time programmable memory), keys to scramble data before transmission across on-chip busses to limit power side channels, and life-cycle related state values/tokens for attestation, identity management, and debug control.
We identify twenty security-critical assets in the following groups:
* **Cryptographic Keys (CK):** OpenTitan has a number of cryptographic keys that need to be protected that are spread across the design. This includes keys stored in the one-time programmable (OTP) memory and OpenTitan Key Manager. The OTP controller has an access control mechanism that arbitrates OTP data accesses and buffers key values;
* **State Values or Tokens (SV/T):** Assets stored in one-time-programmable (OTP) memory for use in the life-cycle controller, which influences the OpenTitan's DFT functionality, NVM backdoor access, debug, and CPU functionality;
* **Signals (S):** Variables carrying sensitive information from/to security peripherals.
Table 1 reports the route length distribution of twenty security-critical assets in OpenTitan implemented on a Virtex UltraScale+. **Bus Width** records the number of routes associated with each asset. **MEAN** and **SD** are the mean route length and standard deviation for each asset's routes,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \# & **Asset Paths** & **Type** & **Bus Width** & **MEAN** & **SD** & **MIN** & **25\%** & **50\%** & **75\%** & **MAX** \\ \hline
1 & /otp\_ctrl\_opt\_lc\_data[state] & SV/T & 320 & 169.5 & 98.1 & 39 & 95.5 & 157.5 & 228 & 509 \\
2 & /u\_opt\_ctrl/opt\_ctrl\_opt\_lc\_data[test\_exit\_token] & SV/T & 128 & 197.5 & 115.4 & 37 & 114 & 170 & 242.2 & 534 \\
3 & /otp\_ctrl\_opt\_lc\_data[man\_token] & SV/T & 101 & 239.8 & 122.8 & 38 & 148 & 222 & 325 & 583 \\
4 & /otp\_ctrl\_opt\_lc\_data[test\_unlock\_token] & SV/T & 128 & 207.9 & 120.1 & 38 & 130.5 & 178.5 & 247.2 & 609 \\
5 & /keymgr\_acs\_key[key][11\_282 & CK & 32 & 538.3 & 106.4 & 380 & 433.5 & 551 & 614 & 738 \\
6 & /keymgr\_oth\_key[key][0]\_285 & CK & 384 & 219.8 & 150.9 & 41 & 99 & 167 & 327.2 & 919 \\
7 & /keymgr\_kmac\_key[key][0]\_28 & CK & 256 & 317.6 & 141.7 & 49 & 213.8 & 291 & 408 & 1050 \\
8 & /otp\_ctrl\_opt\_keymy\_key[key]\_share0] & CK & 256 & 187.3 & 200.8 & 37 & 54 & 109 & 217 & 1064 \\
9 & /u\_opt\_ctrl\_part\_symbol\_sp\_data & CK & 64 & 353.4 & 146.1 & 116 & 267.2 & 348.5 & 411.2 & 1075 \\
10 & /keymgr\_acs\_key[key][0]\_283 & CK & 256 & 360.3 & 154.2 & 86 & 270 & 333 & 412.2 & 1311 \\
11 & /u\_opt\_ctrl/u\_opt\_ctrl\_scrmb/gen\_anchor\_keys & CK & 135 & 220.1 & 358.7 & 0 & 57 & 94 & 162.5 & 1333 \\
12 & /opt\_ctrl\_pp\_keymy\_key[key\_share1] & CK & 256 & 262.5 & 273.4 & 37 & 51 & 158 & 335.5 & 1381 \\
13 & /scms\_t\_sr\_bg\_id\_data] & S & 32 & 1291.8 & 105.7 & 1031 & 1244.8 & 1323 & 1359.8 & 1432 \\
14 & /acs\_tl\_sr\_bg\_data] & S & 32 & 1105.3 & 411.4 & 276 & 1135.8 & 1279 & 1369.5 & 1631 \\
15 & /keymgr\_oth\_key[key][11\_284 & CK & 32 & 1062.7 & 281.2 & 480 & 854 & 1074.5 & 1270 & 1670 \\
16 & /u\_opt\_ctrl\_part\_opt\_rdata & S & 64 & 1298.9 & 213 & 933 & 1118.5 & 1311.5 & 1447.2 & 1784 \\
17 & /flash\_ctrl\_opt\_sr\_spk[key] & CK & 128 & 1816.6 & 4046.6 & 1215 & 1503 & 1717.5 & 2010.2 & 3245 \\
18 & /kmac\_app\_spp & S & 777 & 94.2 & 179.7 & 15 & 40 & 58 & 97 & 3398 \\
19 & /flash\_ctrl\_opt\_sr\_bg[rand\_key] & CK & 128 & 1908.1 & 670.7 & 553 & 1337 & 1882 & 2308.8 & 3706 \\
20 & /acs\_tl\_re\_la\_data] & S & 32 & 2114.8 & 471.8 & 1455 & 1805 & 2079.5 & 2337.2 & 3946 \\ \hline \end{tabular}
\end{table} TABLE 1: **OpenTitan Earl Grey Distribution of Route Lengths (ps) on a Virtex UltraScale+.** This table reports the distribution of route lengths (in ps) for a selection of twenty security-critical assets in an OpenTitan Earl Grey implemented on a Virtex UltraScale+. Assets are sorted in ascending order by maximum route length. Route lengths of more than 1000 ps are common, and would increase when OpenTitan shares an FPGA with other logic.
respectively. **MIN** is the minimum route length for each asset. **25%, 50%**, and **75%** are the route length for the 25th, 50th, and 75th percentiles of each asset's routes, respectively. **MAX** records the maximum route length for each asset. The assets are sorted in ascending order according to **MAX** route length.
Most routes are short - only a few hundred picoseconds. However, there are longer route lengths that approach 4 ns. When integrated with other cores or accelerators, it is logical that these route lengths will increase.
## 6 Experimental Results
The experiments interleave the calibration, condition, and measurement phases (Section 5.2) to extract pentimenti - BTI-induced side channels in FPGA programmable routing. The target route lengths are informed by a study of OpenTitan asset delay in Section 5.3. Experiment 1 uses a new ZCU102 Ultrascale+ FPGA development board. This allows us to characterize the burn-in effect while controlling temperature, eliminate previous FPGA usage, and minimize system computation during measurement. The remote cloud platform provides substantially less control over environmental conditions. Experiment 2 validates **Threat Model 1** on the AWS F1 platform. Experiment 3 validates **Threat Model 2** on the AWS F1 platform.
### _Experiment 1 (Lab Environment)_
Experiment 1 studies BTI degradation and recovery effects on a local, new FPGA. The experiment validates that the burn-in degradation occurs and is differentiable, making **Threat Model 1** possible. Additionally, it shows that BTI is non-permanent. BTI recovery is observable and measurable, which is required for **Threat Model 2**.
A new ZCU102 Ultrascale+ is placed in a temperature-controlled forced convection oven (Lab Companion OF-01E) set to 60\({}^{\circ}\)C. The oven maintains a constant temperature, which ensures that temperature changes do not influence the delays. The ZCU102 is factory new; it will experience the largest BTI effects since no degradation has occurred.
64 routes are studied on the ZCU102 Ultrascale+ development board. The first group of 16 routes each has a delay of 1000 ps, the second 2000 ps, the third 5000 ps, and the fourth 10000 ps delay. The delay relates to the number of transistors used to form that programmable route subject to BTI effects.
Experiment 1 starts with a 200-hour burn-in period of the 64 routes. A randomly generated value \(X\) is applied to the routes. \(X\) is held constant over the entire 200 hours. The goal is to induce burn-in on the routes and understand the extent of BTI effects. Burn-in is followed by a 200-hour recovery period that applies a constant \(\overline{X}\) to the routes under test to induce BTI recovery.
Experiment 1 is divided into three experimental periods consisting of phases from Section 5.2:
* **Hour 0:** The _Calibration_ phase is executed to compute the \(\theta_{init}\) for each of the 64 routes.
* **Hours [0,200]:** The burn-in period alternates between _Condition_ and _Measurement_ phases. The _Condition_ phase applies the _burn values_\(X\) to the 64 routes for one hour. Then, the _Measurement_ phase is launched, which tunes the phase difference between the transition generator and capture clock (\(\theta_{init}\)) to ensure the transition falls within the carry chain. The TDC sensors capture data for each of the 64 routes under test as described in Section 5.1. This condition/measurement sequence is repeated \(200\times\) (approximately \(200\) hours). The _Measurement_ phase runs once per hour to record the BTI degradation effects. Measurement is fast, taking less than a minute. Thus the vast majority of the time is spent in the condition phase.
* **Hours [200,400]:** The recovery period inverts the values previously held on the routes under test. This is nearly identical to hours [0,200], except the _Condition_ phase loads \(\overline{X}\), the complement of \(X\), into the 64 routes. This period focuses on understanding BTI recovery effects.
Figure 6 plots the 400-hour results of Experiment 1 for four different route delays. The four graphs each have 16 routes with various delays - (a) 1000ps, (b) 2000ps, (c) 5000ps, and (d) 10000ps. A switch from burn-in values \(\overline{X}\) to recovery values \(\overline{X}\) happens at the 200-hour mark, denoted by the transition between red and green backgrounds.
Every net is measured once per hour. A _Measure_ phase is loaded, and the TDC sensor readings are recorded. Ten traces are taken from each TDC as the phase shift \(\theta\) is decreased to avoid architectural irregularities [15, 17, 21]. We record the rising and falling transitions within each trace. For each transition type, the mean propagation distance is computed across all samples within a trace. Then, we compute the mean of the ten traces to obtain a single value representing the propagation delay through the route under test. Next, we subtract the rising transition distance from the falling transition distance to isolate the BTI effect to the _Route Under Test_. This bit value is converted to seconds based on a derived relationship of \(\frac{2.8ps}{bit}\) for UltraScale+ parts [15, 64]. Finally, we center the data to the point at hour zero; any deviation from zero represents BTI degradation or recovery-induced variation on a route, which we call \(\Delta\) ps. Measurement took about 52 seconds for all the routes in our cloud experiments; only \(1.4\%\) of the overall time is spent performing measurements.
The choice to plot the falling minus rising is based on the fact that architecturally the falling (rising) transition stresses the NMOS (PMOS) transistors in a route. This is due to NMOS (PMOS) devices being best suited to passing 0 (1) values. And so, the timing difference between the rising and falling signal reduces to a single value for each hour that considers both the NBTI and PBTI degradation. Figure 2 discusses this in more depth.
Datapoints are colored cyan if their burn value \(X\) is a logical 0 and purple if their burn value \(X\) is a logical 1. A red background indicates that the value applied to the
routes is the burn value \(X\), and green background is the BTI recovery period where the values are complemented \(\overline{X}\). The resulting time series are smoothed with a kernel regression, which finds a non-linear relationship between a pair of variables. Specifically, the Python statsmodels package's nonparametric kernel regression class is used in continuous mode with a local linear estimator.
A trend is immediately apparent in the first 200 hours (red half) of the charts in Figure 6. Burn value 0 (cyan) routes decrease from hour zero. Burn value 1 (magenta) routes increase from hour zero. This trend occurs regardless of the length of the route, but the magnitude differs. While a 10000 ps route is on the longer end for most designs, 1000 ps routes are commonplace, e.g., see the OpenTitan study in Section 5.3. Furthermore, this helps us understand the limits of our strategy. There appear to be no limitations in route length as to observable burn-in effects, with the 1000 ps tested routes showing a clear difference between GND and VCC burned routes.
_These results indicate that **Threat Model 1** is possible. If an attacker can observe BTI effects on a route before and after a design, they can easily deduce the burn value on a route and observe a side channel._
Figure 5(a) shows that the 1000 ps routes have \(\pm[1,2]\) ps difference between the rising and falling transition at the 200-hour mark. The 2000 ps routes have a \(\pm[2,3]\) ps difference (Figure 5(b)), the 5000 ps routes have a \(\pm[5,6]\) ps difference (Figure 5(c)), and the 10000 ps routes have a \(\pm[10,11]\) ps difference. This data matches our expectation in burn-in behavior: the number of transistors in the route (i.e., the route length) directly increases the route's delay.
At the 200-hour mark, the experiment moves from the burn-in to recovery. The condition route values change from \(X\) to \(\overline{X}\). The routes with logical 1 burn-in in \(X\) in the first 200 hours (and logical 0 in the recovery period) quickly return to their pre-burn state across all route lengths. This recovery takes approximately 30-50 hours before the propagation delay difference between the rising and falling transition has returned to the original state at hour 0. We do not see the same behavior in the routes that were logical 0 in the first 200 hours and logical 1 in the second 200 hours; they recover, but the process takes much longer (over 200 hours).
These results indicate that BTI is elastic and non-permanent. In addition, we can see that the BTI recovery in routes conditioned by burn 1 is substantially faster than in routes with burn 0. This pattern persists for all tested route lengths, suggesting a fundamental difference between the NBTI and PBTI effect on the 16nm FinFET transistors of the UltraScale+ device. The difference in BTI recovery enables **Threat Model 2**.
The quick recovery of the burn 1 routes indicates that they might be easier to detect when targeting Type B data (Threat Model 2). A Threat Model 2 attacker obtains the FPGA during the recovery period. They will not know the initial values and thus cannot complement them. The attacker must set the target values to logical 0 or 1. Since the Burn 1 degradation values see the greatest and fastest recovery, the attacker would set all recovery values to condition to logical 0 to observe the quick recovery. This motivates us to set the recovery values to logical 0 in Experiment 3 (Section 6.3).
Figure 6: **Experiment 1 (Lab Environment):** Sets of 16 routes of varying lengths are initialized on a new ZCU102 FPGA. The experiment occurs at 60\({}^{\circ}\)C in a temperature-controlled environment. During the initial 200-hour burn-in period, a random burn value \(X\) is conditioned into the routes (red background). After which, 200-hour BTI recovery is induced by applying \(\overline{X}\) into the routes under test (green background). The timing difference between each route’s falling and rising transition delay is recorded every hour. Routes conditioned with logical 0 behave differently than routes conditioned with logical 1 in the burn-in and recovery periods. This reveals the unique effects of BTI degradation and recovery. It indicates that **Threat Model 1** and **Threat Model 2** are possible.
### Experiment 2 (Cloud Environment)
Experiment 2 tests the viability of **Threat Model 1** on an AWS F1 cloud FPGA, which aims to extract sensitive data from a rented third-party design. The attacker can load and unload the design and wants to extract design intellectual property, e.g., netlist constant holding cryptographic keys or neural network weights. The cloud environment provides no control over temperature, and it is likely the device is years old, making BTI effects less observable [3]. This experiment is performed in the eu-west-2 AWS region, which puts potentially fours years of wear on the device.3
Footnote 3: [https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-ec2-f1-instance-expands-to-more-regions-adds-new-features-and-improves-development-tools/](https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-ec2-f1-instance-expands-to-more-regions-adds-new-features-and-improves-development-tools/)
We examine 16 1000 ps, 16 2000ps, 16 5000 ps routes and 16 10000 ps delay routes. 64 random bit values \(X\) are applied to these routes in the condition phase. The _Target_ and _Measure_ designs are built around these routes according to Section 5.1. The _Target_ design is configured to utilize 3896 DSPs for this architecture. The _Target_ design consumes 63 W of a maximum of 85 W imposed by AWS.
Experiment 2 is divided into two periods:
* **Hour 0:** The _Calibration_ phase is executed to compute the \(\theta_{init}\) for each of the 64 routes.
* **Hours [0,200]:** During each hour, we run a long _Condition_ phase and a short _Measurement_ phase. The _Condition_ phase applies the _burn values_\(X\) to the 64 routes under test. The _Measurement_ phase tunes TDCs to \(\theta_{init}\) and captures delay estimates for each route as described in Section 5.1. After this brief measurement process (33 seconds), the data is saved. This is repeated 200 times (over 200 hours) to study **Threat Model 1**.
Figure 7 shows the results from Experiment 2, testing **Threat Model 1** on the AWS F1 platform. \(X\) is the **Type A** (Design Data) an attacker wishes to recover. The values of \(X\) are expressed in cyan for burn 0 values, and magenta for burn 1 values but are opaque to the attacker.
Figures 6(a), 6(b), 6(c), 6(d) demonstrate the same trends as in Experiment 1. These results are expectedly noisier than from the ZCU102, which was a new part held at a constant temperature. The difference between the burn 0 rising and falling transitions decreases. In contrast, the burn 1 routes behave the exact opposite; the propagation delay difference between the rising and falling transition increases. This makes the routes easily distinguishable, with burn 0 (cyan) decreasing immediately from hour zero and burn 1 (magenta) increasing immediately from hour zero. This pattern persists irrespective of the route length (1000ps vs. 2000ps vs. 5000 ps vs. 10000 ps), but the magnitude differs. By examining the trends in data, \(X\) can be derived, demonstrating **Threat Model 1** recovery of **Type A** data is possible.
The 1000 ps routes of Figure 6(a) experience \(\pm[0,.2]\) ps difference, the 2000 ps routes of Figure 6(b) experience \(\pm[0,.4]\) ps difference, the 5000 ps routes of Figure 6(c) experience \(\pm[0,1]\) ps difference, and the 10000 ps routes in Figure 6(d) experience \(\pm[0,2]\) ps difference. This further validates our previous conclusion that the amount of burn-in is proportional to the tested route length. It also indicates that the burn-in for the cloud FPGAs is lesser than that of the new ZCU102 from Experiment 1 (compare to Figure 6(c) and 6(d)). This is not surprising, given that cloud FPGAs are older and more used. Thus, it is more challenging to extract pentimenti from cloud FPGAs than a local FPGA.
It is now clear how **Threat Model 1** can be exploited by an attacker. In this model, an attacker instantiates a
Figure 7: **Experiment 2 (Cloud Environment):** A random burn value \(X\) is conditioned into four sets of FPGA routes. 6(a) has 16 1000 ps routes, 7(b) has 16 2000 ps routes, 7(c) has 16 5000 ps routes and 7(d) has 16 10000 ps routes. The propagation delay difference (\(\Delta\)ps) between the falling and rising transition is measured once per hour over 200 hours. Over time, the burn-in of the 0 (cyan) and 1 (purple) values produces observable changes in their route delays due to BTI degradation. This enables an attacker to recover **Type 1** data (Design Data) and execute **Threat Model 1.**
design that contains **Type A** sensitive information on an AWS FPGA. The attacker knows the location of sensitive data routes, and they construct a _Measurement_ design that maps TDCs to these routes. The attacker can interleave measurements every hour to eventually exposes the **Type A** data based on the relationship between the rising and falling transition over time. The attacker can continue the burn-in process until they are satisfied that the sensitive values are extracted.
### _Experiment 3 (Cloud Environment)_
Experiment 3 studies the viability of **Threat Model 2**, which exploits BTI recovery as a side channel. Sixteen 1000 ps, 2000 ps, 5000 ps, and 10000 ps delay routes are instantiated on an AWS F1 FPGA. The routes undergo a burn-in period during which we do not measure the time delay. Then, they switch to the recovery phase. We aim to understand if it is possible to recover pentimenti of a previous user of the FPGA by only measuring during the recovery period. This meets the assumptions required for Threat Model 2. \(\theta_{init}\) is consistent across all FPGAs of the same type, and so capturing \(\theta_{init}\) once on any board is sufficient to assume that \(\theta_{init}\) is known a priori for any attack.
Experiment 3 is divided into three periods:
* **Hours [0, 200]:** A burn-in period induced by the victim computation. The _Condition_ phase executes with a constant, randomly generated \(X\) value loaded into the 64 routes under test. Calibration is **not** performed. The attacker does not have control of the FPGA, and thus measurement is not allowed. The condition phase is run uninterrupted for 200 hours.
* **Hour 200:** The victim relinquishes control of the FPGA, and the attacker gains control.
* **Hours (200, 225]:** The attacker launches the _Measurement_ phase, which tunes TDCs to \(\theta_{init}\), and captures traces of the 64 routes under test as described in Section 5.1. After this quick process, the data is saved. Then, the attacker launches the _Condition_ phase that runs for one hour. The Measurement/Condition sequence is repeated for 25 iterations (hours).
The target victim design holds a constant 64-bit \(X\) value on FPGA routes for 200 hours without interruption. \(X\) represents **Type B** (User Data) the attacker aims to recover. The burn-in \(X\) values are opaque to the attacker. But, \(X\) can be derived by the attacker based on the recovery behavior.
After 200 hours, the attacker gains control of the FPGA and thus can start measuring for BTI effects. The attacker sets all routes under test to logical 0 and measures the propagation delay once per hour over the next 25 hours. The attacker is looking for BTI recovery to extract pentimenti of the previously loaded design data. We do not assume the attacker has prior information about the FPGA before the victim computed upon it.
The choice of setting all routes to logical 0 is motivated by the results in Experiment 1; routes that were logical 1 in \(X\) and were switched to \(\overline{X}\) quickly returned to the original value. Thus, it exhibits a more significant signal for detection. An attacker could also choose to set all lines to logical 1 or a mixture of 0 and 1.
Figure 8 shows the 25-hour recovery period. Note that the graph starts at hour 200 after the burn-in. We have no data about the FPGA before that point. Burn 0 values are shown in cyan burn 1 values in m
Figure 8: **Experiment 3 (Cloud Environment): Four sets of 16 routes with different delays are initialized on an AWS F1 FPGA. A random constant burn value \(X\) is conditioned into the routes for 200 hours. The attacker gains control of the FPGA and instantiates TDC sensors to measure the timing delay of routes that previously held sensitive data. The timing difference between the falling and rising transition is plotted for the following 25-hour period—revealing the BTI recovery. This enables an attacker to execute on **Threat Model 2** and recover **Type 2** data (user data).
that previously held logical 1 immediately begin to decrease in relation to the cyan routes that remain at logical 0 the entire time. The purple logical 1 routes are undergoing Burn 1 BTI recovery, which Experiment 1 showed was much more dramatic than Burn 0 recovery.
We do not observe the same magnitude and clarity of divergence of burn-in or recovery on AWS F1 as the ZCU102. This is likely due to more complex cloud operating factors, including non-constant temperature, FPGA age, and other computations simultaneously running on the AWS F1 system.
Despite these differences, the result is consistent with the elasticity we observed on the ZCU102.
We show an attacker can extract Type B Data demonstrating **Threat Model 2** on a cloud FPGA. An attacker generates a _Measure_ design which maps TDCs to the routes which contained **Type B** (User Data) in a targeted design. After gaining access to an AWS F1 FPGA device running the target design that was previously loaded for tens of hours, the attacker recovers previous user data by measuring the timing behavior of the routes over time.
## 7 Related Work
Our attack is a _single-tenant temporal side channel_ - state within the FPGA that is not wiped correctly, or it is impossible to remove between subsequent users [9]. It is common to "wipe" the FPGA device between successive users [36] as a security precaution. **Our approach subverts these efforts** as it measures analog remanence that remains even after wiping. We show that our data recovery techniques work even after performing the wiping done by AWS. It is impossible to mitigate burn-in risk via a logical erasure of the device because burn-in is a fundamental characteristic of the device transistors that reflects previous logical values.
Tian et al. [61] demonstrate a single-tenant temporal covert channel. They use ring oscillators to heat the FPGA (transmitter) and detect temperature (receiver). They can transmit hundreds of bits over a few minutes on cloud FPGA at Texas Advanced Computing Center using Microsoft Catapult hardware. To make their covert channel, the FPGA transmitter and receiver must alternatively obtain and release the same FPGA, which is possible but very difficult in other cloud infrastructures (e.g., AWS). Using temperature as a side channel requires the user to get on the FPGA quickly; temperature effects are short-term, e.g., the cloud FPGAs return to ambient temperatures within a few minutes [61]. Finally, BTI effects are a more pernicious temporal channel. Instead of measuring the tertiary effects of computation or a covert channel, it is a direct measurement of a previous user or proprietary design data. It can last hundreds of hours, as we have shown in our results.
Zick et al. [71] demonstrate a single-tenant temporal side channel on a local FPGA by recovering previous user data stored in LUT SRAMs. Their experiment has a burn-in period of 922 hours at high temperatures to induce burn-in. Then, the FPGA sat powered off for several weeks. Their experiments were performed on a local Xilinx Kintex-7 KC705 development board. Unfortunately, their experimental requirements are incompatible with the cloud FPGA attack model. They use a highly precise, off-chip oscillator to enhance the on-chip TDC sensor timing resolution. This results in femtosecond-level timing precision. Such precision is impossible on cloud FPGA TDC sensors since an attacker cannot use off-chip components. On-chip TDCs operate at approximately 10 ps precision on the UltraScale+, so it is an order of magnitude difference with their sensor. They perform recovery of data stored in FPGA LUTs (SRAM) and specifically target transistors in the output buffers of the SRAM bits. We ruled out the examination of this resource since their burn-in effects are too subtle to measure with cloud FPGA sensors, which is why they required femtosecond precision. We target FPGA programmable routing. We show that our attacks are deployable on cloud FPGAs (AWS F1 instances).
A significant body of prior work uses ring oscillators (RO)-based sensors to measure long-term FPGA BTI effects [4, 47, 58]. RO sensors build a combinatorial loop through a tested component and an inverter. The oscillation frequency through the loop reflects the time taken for the signal to propagate through that tested component, which changes due to BTI effects. While ROs measure BTI effects, they have two significant limitations. First, ROs have a single variable output--the frequency of oscillation - that integrates the propagation speed through the NMOS and PMOS transistors. This is an essential factor as BTI stresses PMOS vs. NMOS transistors differently. Our TDC sensor can separate the differences in BTI stress on PMOS and NMOS. We use this ability to differentiate between BTI degradation. Second, ROs are often not allowed on cloud FPGAs. ROs use combinatorial loops, which violate the design rule checks and can be detected [32, 34]. Cloud FPGA providers can disallow designs that contain self-oscillating circuits, e.g., as is done by AWS. Our TDC-based sensor is more challenging to detect since it uses computational structures that are common and many FPGA designs. It was implemented on an AWS F1 instance. Thus, it passes AWS design rule checks.
Previous works have recovered SRAM user data on recycled ICs [12, 29, 63]. Even though SRAMs are a form of volatile memory, where logical data is lost on power-off, an imprint is left behind and is recoverable. These techniques rely on measuring the statistical power-on state of SRAM bits. They assume a different threat model, e.g., requiring physical access to the chip.
## 8 Mitigations
This paper demonstrated that **Threat Model 1** and **Threat Model 2** are exploitable in cloud systems. A determined attacker could build more precise sensors to measure BTI on shorter routes with shorter burn-in periods. As a result, users should take precautions to manage sensitive data to mitigate burn-in effects, cloud FPGA providers should look to enforce stronger temporal boundaries between users,
and FPGA manufacturers should consider architectural solutions to mitigate BTI.
### _User Mitigations_
The cloud FPGA user should not allow sensitive data to sit unchanged on the FPGA for long periods to avoid burn-in remnants that the following user could discover. When sensitive data must statically persist for long periods, the user should consider methods to mitigate its burn-in effects.
Techniques that periodically change sensitive data would reduce burn-in. For example, the data could be inverted at predetermined periods (e.g., every hour). Or it could be deterministically shuffled at the source and unshuffled at the receiver. Such data transformation approaches reduce the burn-in effects across the route at the expense of increasing the design complexity. Other ideas related to FPGA wear leveling [57] would likely reduce the burn-in effects but need to be verified.
If there are natural breaks in computation, the user could move between different FPGAs in the cloud. A new device should be checked out from the cloud provider, the application moved between FPGAs, and the burn-in would start fresh on the new FPGA. This would increase user design complexity, e.g., by requiring a robust process to stop, move application data, and restart an FPGA instance. This adds risks related to data corruption when moving between FPGAs.
The user should strive to make routes that hold sensitive data as short as possible. The longer the route, the more transistors affected by burn-in, and the larger the burn-in effects. Consequently, as we have shown in our results, shorter routes are a more secure FPGA design pattern.
FPGA physical design tools generally attempt to make routes as short as possible. Hand-placed routes for sensitive information could produce better results. Physical design tools often focus on routes on the critical path, often at the expense of other routes. The ability to specify that the physical design tools minimize sensitive routes would reduce vulnerability to pentimento-style attacks.
The user or design tools could place sensitive routes in a manner that makes them difficult to connect to a BTI sensor. The input to the route under test must be connected to the transition generator. The output of the route under test is connected to a TDC sensor. Placing the inputs or outputs in locations that make these connections challenging would make it more challenging to extract BTI information from that target route.
Verification tools could analyze the design or bitstream for sensitive data residing on long routes. The ability to provide reports about the route lengths of the sensitive information would allow hardware security verification engineers to better assess their data vulnerabilities w.r.t. to a pentimento attack. Providing a more precised measure of protection (e.g., vulnerability metric) enables even stronger hardware security verification.
Key rotation is common in cryptography [10, 16] and could be employed on cloud FPGAs. This is not always possible, especially if data needs to be embedded into the RTL directly, e.g., in random netlist constants as found in the OpenTitan.
Key masking [2, 26, 42] could help reduce the number and lengths of routes that hold a cryptographic key. Masking is specific to cryptographic algorithms and may not be feasible for other types of sensitive data.
The design could use partial reconfiguration to move the sensitive information - its storage and computation units - to different locations of the chip. This would act as a form of wear leveling. It lessens the burn-in effect at any one physical location. Yet, it also spreads the burn-in over more areas, which could potentially make it easier to exploit the information.
A cloud FPGA user could mitigate the BTI remnants by erasing their design and holding on to the instance for some time before relinquishing it back into the user pool. The tenant could invert the values of the sensitive routes to speed up the recovery and thus limit the remaining BTI signal. Or they could perform some other actions (perhaps toggling the routes). This costs the user money commensurate with the time they deemed sufficient to erase BTI effects.
### _Cloud Provider Mitigations_
The primary issue cloud providers could hope to resolve is the rapid reallocation of FPGA devices once a user relinquishes them. The cloud provider could implement launch rate controls, by withholding devices after they are returned, for days, weeks, or longer to mitigate the ability to recover the burn-in. This would push the mitigation onto the cloud provider rather than the cloud user.
The cloud provider can attempt to combat the accelerators of the BTI effect: higher voltage and temperature. Some FPGAs that operate at different voltages and use a lower voltage would reduce the burn-in effects. Similarly, higher temperatures exacerbate burn-in. Temperature can be managed to some extent, but it would be very challenging to control the on-chip temperature to the point where an attacker can no longer observe BTI. Furthermore, cloud providers are already incentivized to control voltage and temperature to reduce FPGA power consumption and aging.
### _FPGA Manufacturer Mitigations_
FPGA manufacturers can attempt to mitigate FPGA BTI effects. BTI mitigations are already commonly considered to increase reliability. It is unlikely that FPGA manufacturers will be able to eliminate BTI, especially at advanced design nodes. BTI effects are more negligible at less advanced process nodes; thus, falling back on older technology would be a potential mitigation. The performance and power benefits of advanced nodes are likely too much to sacrifice for cloud providers and users.
Manufacturers can help reduce BTI through voltage and temperature mitigations; however, this is already a primary directive due to their negative influence on power consumption. Thus, it is unlikely these mitigations will advance at
a faster pace. FPGA manufacturers could consider more advanced dynamic voltage scaling techniques to allow users to mitigate BTI selectively. This adds complexity to the design, which increases costs.
## 9 Conclusion
We find that a remote attacker can recover "FPGA pentimentos" - long-removed secret data belonging to a prior user or proprietary design image on a cloud FPGA. Just as a pentimento of a painting can be exposed via infrared imaging, FPGA pentimentos can be exposed via signal timing sensors instantiated on a remote cloud FPGA. The sensitive data constituting an FPGA pentimento is imprinted to the device through bias temperature instability effects on the underlying transistors. We demonstrate how this slight degradation can be measured using a time-to-digital converter when an adversary programs one into the target cloud FPGA. This technique allows an attacker to ascertain previously safe information, after it is no longer explicitly present, on cloud FPGAs. Notably, it can allow an attacker to (1) extract proprietary details or keys from an encrypted FPGA design image available on the AWS marketplace and (2) recover information from a previous user of a cloud-FPGA. Both threat models are experimentally validated on the AWS F1 platform.
|
2309.06727 | Empirical Bayes Double Shrinkage for Combining Biased and Unbiased
Causal Estimates | Motivated by the proliferation of observational datasets and the need to
integrate non-randomized evidence with randomized controlled trials, causal
inference researchers have recently proposed several new methodologies for
combining biased and unbiased estimators. We contribute to this growing
literature by developing a new class of estimators for the data-combination
problem: double-shrinkage estimators. Double-shrinkers first compute a
data-driven convex combination of the the biased and unbiased estimators, and
then apply a final, Stein-like shrinkage toward zero. Such estimators do not
require hyperparameter tuning, and are targeted at multidimensional causal
estimands, such as vectors of conditional average treatment effects (CATEs). We
derive several workable versions of double-shrinkage estimators and propose a
method for constructing valid Empirical Bayes confidence intervals. We also
demonstrate the utility of our estimators using simulations on data from the
Women's Health Initiative. | Evan T. R. Rosenman, Francesca Dominici, Luke Miratrix | 2023-09-13T05:04:58Z | http://arxiv.org/abs/2309.06727v1 | # Empirical Bayes Double Shrinkage for Combining Biased and Unbiased Causal Estimates
###### Abstract
Motivated by the proliferation of observational datasets and the need to integrate non-randomized evidence with randomized controlled trials, causal inference researchers have recently proposed several new methodologies for combining biased and unbiased estimators. We contribute to this growing literature by developing a new class of estimators for the data-combination problem: double-shrinkage estimators. Double-shrinkers first compute a data-driven convex combination of the the biased and unbiased estimators, and then apply a final, Stein-like shrinkage toward zero. Such estimators do not require hyperparameter tuning, and are targeted at multidimensional causal estimands, such as vectors of conditional average treatment effects (CATEs). We derive several workable versions of double-shrinkage estimators and propose a method for constructing valid Empirical Bayes confidence intervals. We also demonstrate the utility of our estimators using simulations on data from the Women's Health Initiative.
###### Contents
* 1 Introduction
* 1.1 Prior Work
* 1.2 Contributions
* 2 Main Results
* 2.1 Hierarchical Model
* 2.2 Operationalizing the Estimator
* 2.2.1 Moment Matching
* 2.2.2 Empirical Bayes Maximum Likelihood
* 2.2.3 Unbiased Risk Estimate Minimization
* 2.3 Relationships Among Estimators
* 2.4 Confidence Intervals
Simulations Using Data from the Women's Health Initiative * 3.1 Risk Reduction * 3.2 Confidence Interval Coverage
* 4 Discussion
* A Computation of Unbiased Risk Estimate
* B Confidence Interval Construction
* C WHI Simulations: Minimum Coverage Rates
## 1 Introduction
The modern proliferation of observational datasets - in applications such as disease surveillance, voter mobilization, and e-commerce - provides an opportunity to improve causal estimation. These data are typically large, inexpensive to obtain, and representative of target populations. However, treatments are not randomized in observational studies, meaning that treated and control units may differ in important ways. Hence, there is a fundamental challenge to the utility of observational studies: these studies frequently suffer from unmeasured confounding. As a result, even with reasoned statistical adjustments, the causal estimates obtained from observational data are often biased.
By contrast, the virtues of randomized controlled trials (RCTs) are widely known. Experimental data yield unbiased estimates of causal effects under very mild assumptions. However, high-quality randomized trials are expensive to conduct, and tend to have a limited number of participants. They are frequently underpowered for the estimation of causal effects on subgroups of the population. Given these challenges using experimental data alone, one can find a chorus of recent papers (e.g. Shalit, 2020; Mueller et al., 2018) advocating for methodological advancements in combining experimental and observational data to obtain more statistically efficient estimates of causal effects. Many researchers have heeded this call in the past few years (Kallus et al., 2018; Cheng and Cai, 2021; Yang et al., 2020; Colnet et al., 2020).
This papers adds to the growing literature around data-combination approaches by proposing a new class of estimators for combining biased and unbiased estimators: "double shrinkers." These estimators work by first computing data-driven weights to apply to the biased and unbiased estimators, yielding a convex combination of the estimators. Then, double-shrinkers apply a final, Stein-like shrinkage toward zero. The latter step can be viewed as a form of regularization, inducing some additional bias but reducing the variance of the estimator.
We operate under squared error loss, meaning we seek to obtain estimates \(\boldsymbol{\hat{\tau}}\in\mathbb{R}^{K}\) of causal effects \(\boldsymbol{\tau}\) such that
\[\mathcal{L}(\boldsymbol{\hat{\tau}},\boldsymbol{\tau})=\sum_{k=1}^{K}(\hat{ \tau}_{k}-\tau_{k})^{2}\]
is typically small. Under this loss, the "dual shrinkage" property turns out to generate significantly improved estimation performance in some settings. Moreover, con
structing the estimator from an explicit generative model allows for straightforward construction of confidence intervals with robust coverage properties.
### Prior Work
This manuscript builds primarily upon four previous works: Green et al. (2005), Rosenman et al. (2020). Green and Strawderman (1991), and Xie et al. (2012).
Green et al. (2005) considers how to combine biased and unbiased estimators in the Empirical Bayes framework. The authors suppose they have access to two \(K\)-dimensional estimators, \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\), such that \(\mathbf{\hat{\tau}_{u}}\) has mean \(\mathbf{\tau}\) and \(\mathbf{\hat{\tau}_{b}}\) has mean \(\mathbf{\tau}-\mathbf{\xi}\), where \(\mathbf{\xi}\) represents a \(K\)-dimensional bias vector. The estimand is \(\mathbf{\tau}\), so \(\mathbf{\hat{\tau}_{u}}\) is an unbiased estimator while \(\mathbf{\hat{\tau}_{b}}\) is a biased estimator. The unbiased estimator is assumed normally distributed and heteroscedastic, such that \(\text{var}(\mathbf{\hat{\tau}_{u}})=\mathbf{\Sigma}_{u}=\text{diag}(\{\sigma_{rk}^{2} \}_{k=1}^{K})\). No other assumptions are placed on the distribution of \(\mathbf{\hat{\tau}_{b}}\).
The authors derive two estimators heuristically, and suggest them for use in different contexts. The first estimator is intended to perform well under the precision-weighted squared-error loss,
\[\mathcal{L}(\mathbf{\hat{\tau}},\mathbf{\tau})=\sum_{k=1}^{K}\frac{\left(\hat{\tau}_{ k}-\tau_{k}\right)^{2}}{\sigma_{k}^{2}}\,.\]
Under this loss, the estimator
\[\mathbf{\delta}_{1}=\mathbf{\hat{\tau}_{b}}+\left(1-\frac{a}{(\mathbf{\hat{\tau}_{u}}-\bm {\hat{\tau}_{u}})^{\mathsf{T}}\mathbf{\hat{\Sigma}}_{u}^{-1}\left(\mathbf{\hat{\tau}_{u }}-\mathbf{\hat{\tau}_{b}}\right)}\right)(\mathbf{\hat{\tau}_{u}}-\mathbf{\hat{\tau}_{b}})\]
dominates \(\mathbf{\hat{\tau}_{u}}\) (in the decision theoretic sense), as long as \(0<a<2(K-2)\) and \(\mathbf{\hat{\Sigma}}_{u}\) is correctly estimated.
The authors propose a different estimator for the more conventional squared error loss,
\[\mathbf{\delta}_{2}=\mathbf{\hat{\tau}_{b}}+\left(\mathbf{I}_{K}-\frac{a\mathbf{\hat{\Sigma}} _{u}^{-1}}{(\mathbf{\hat{\tau}_{u}}-\mathbf{\hat{\tau}_{b}})^{\mathsf{T}}\mathbf{\hat{ \Sigma}}_{u}^{-2}(\mathbf{\hat{\tau}_{u}}-\mathbf{\hat{\tau}_{b}})}\right)(\mathbf{\hat{ \tau}_{u}}-\mathbf{\hat{\tau}_{b}})\,.\]
This estimator can also be shown to dominate \(\mathbf{\hat{\tau}_{u}}\) if \(0<a<2(K-2)\). The authors argue that the shrinkage parameter \(a\) should be selected as \(K-2\) for both estimators.
Rosenman et al. (2020) adopted the same setting as Green et al. (2005), and built upon its results by introducing an alternative, non-heuristic method for deriving data-combination shrinkage estimators. The paper proposes constructing combinations of biased and unbiased estimators by appealing to an unbiased risk estimate (URE) analogous to the classical Stein's unbiased risk estimate (SURE; Stein, 1981). The proposed process involves two steps: positing a shrinkage structure, and then deriving the values of tuning parameters by minimizing over the URE. The latter idea - minimizing an unbiased estimate of risk to obtain the value of hyperparameters - has substantial history in statistics (Xie et al., 2012). Using this approach, two new estimators are developed in Rosenman et al. (2020): \(\mathbf{\kappa}_{1+}\), which shrinks all components of \(\mathbf{\hat{\tau}_{u}}\) by the same multiplicative factor toward \(\mathbf{\hat{\tau}_{b}}\); and \(\mathbf{\kappa}_{2+}\), which shrinks each component proportionally to the variance of each entry of \(\mathbf{\hat{\tau}_{u}}\). These estimators are competitive with \(\mathbf{\delta}_{1}\) and \(\mathbf{\delta}_{2}\) in simulation and in a real data analysis using data from the Women's Health Initiative.
A relevant precedent to Green et al. (2005) is Green and Strawderman (1991), which considered the same problem but in the case of homoscedastic \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\). The earlier paper proposes a simpler estimator,
\[\mathbf{\delta}=\mathbf{\hat{\tau}_{b}}+\left(1-\frac{(K-2)\sigma^{2}}{||\mathbf{\hat{\tau}_ {b}}-\mathbf{\hat{\tau}_{u}}||^{2}}\right)_{+}(\mathbf{\hat{\tau}_{u}}-\mathbf{\hat{\tau}_ {b}})\,,\]
where \(\sigma^{2}\in\mathbb{R}\) is the variance of each entry in \(\mathbf{\hat{\tau}_{u}}\). \(\mathbf{\delta}\) is not derived explicitly in Green and Strawderman (1991), but the authors mention in passing that their estimator can be constructed from a hierarchical model of the data-generating process, namely
\[p(\mathbf{\tau}) \propto c, \tag{1}\] \[\mathbf{\xi} \sim N(0,\gamma^{2}\mathbf{I}_{K}),\] \[\mathbf{\hat{\tau}_{u}}\mid\mathbf{\tau} \sim N(\mathbf{\tau},\sigma^{2}\mathbf{I}_{k}),\text{ and}\] \[\mathbf{\hat{\tau}_{b}}\mid\mathbf{\tau},\mathbf{\xi} \sim N(\mathbf{\tau}+\mathbf{\xi},\phi^{2}\mathbf{I}_{K}),\]
where the first line represents a noninformative (i.e. improper locally uniform) prior on \(\mathbf{\tau}\).
Lastly, Xie et al. (2012) considers a different setting: that of the classical James-Stein estimator (Stein, 1956), in which the goal is to shrink a multivariate normal mean vector toward a central point. A key complication in Xie et al. (2012) is that the authors do not impose a homoscedasticity assumption on the multivariate normal, unlike Stein in the original work. The literature contains no consensus estimator in the heteroscedastic case. In Xie et al. (2012), as in Green and Strawderman (1991), the authors use a hierarchical model to derive a functional form for a shrinkage estimator. They then propose three different heuristics - based on moment-matching, maximum likelihood estimation, and SURE minimization - to construct usable estimators. In their setting, they find that the estimator derived from SURE minimization often performs best.
Beyond the direct antecedents to this paper, we note that there are several alternate perspectives on estimator construction for the data-combination problem. A growing literature on this problem has developed in causal inference. Several papers propose methods to correct the bias in the observational study estimates using the joint information from the RCT and the observational study. Kallus et al. (2018) proposes a deconfounding technique that relies on estimating a correction term, under the assumption that the confounding bias has a parametric structure that can be modeled and extrapolated to other parts of the covariate space. More recently, Yang et al. (2020) considers a "confounding function" that captures the bias due to unmeasured confounding at each covariate value. They assume the conditional average treatment effect function and the confounding function both have parametric forms, and derive a semiparametric efficient estimator for the model parameters for both functions. Colnet et al. (2020) provides an excellent overview of these and other proposed methods.
Several recent papers have also proposed adaptive schemes for trading off between observational and experimental estimators. Cheng and Cai (2021) proposes an approach that approximates the optimal linear combination of the observational and experimental estimators. Their estimator weights heavily toward the RCT estimator when bias is detected in the observational study, but pools the two data sources when
bias is negligible. Yang et al. (2020a) adopts a testing-based approach, wherein the equality of means between the observational and experimental estimators is tested and the data is pooled only if the test fails to reject the null. Chen et al. (2021) proposes a soft-thresholding estimator for combining the data sources, and also demonstrates that such an estimator achieves the minimax convergence rate for mean squared error of the true causal effects, up to poly-log factors. Lastly, Oberst et al. (2023) considers the problem in the case where the parameter of interest is unidimensional, and propose a simple data-combination estimator with a provably bounded bias.
### Contributions
Recent advances offer practitioners a menu of options for combining data from observational and experimental settings. Yet the choice of estimator for any specific task is not immediately clear. Even in the univariate case, Oberst et al. (2023) show that different estimators do better when the bias in \(\mathbf{\hat{\tau}_{b}}\) is lower or higher - but the magnitude of the bias is typically not known to the researcher a priori. Moreover, some methods require a choice of hyperparameters or significance thresholds (such as those of Cheng and Cai (2021), Chen et al. (2021), and Yang et al. (2020a)) while others require untestable assumptions (such as that of Kallus et al. (2018)). Lastly, not all methods admit confidence intervals or clearly establish the assumptions required for coverage guarantees.
We propose an approach for the case where the unbiased estimator \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\) are \(K\)-dimensional and heteroscedastic. Our work complements the existing literature in several ways. First, we do not require hyperparameter tuning for estimation or inference, nor do we impose any assumptions beyond the normality of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\). Second, our estimator arises from a straightforward derivation using a hierarchical model of the data-generating process. This approach is intuitively appealing; extends prior work from Green and Strawderman (1991); and allows for straightforward construction of Empirical Bayes confidence intervals with robust coverage properties. Third, our estimator is novel in its functional form, utilizing not only a data-driven weighting scheme to trade off between \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\), but also a Stein-like shrinkage toward a central point. This "dual shrinkage" property generates significantly improved performance in simulations.
The remainder of this paper proceeds as follows. Section 2 introduces our notation and assumptions; works through the construction of the different versions of our estimator; and also details the construction of valid Empirical Bayes confidence intervals Section 3 contains a detailed simulation study using data from the Women's Health Initiative, a 1991 study of the effect of hormone therapy on health outcomes for postmenopausal women. We consider both estimation and inference tasks, finding that the MLE-based double shrinker is particularly performant versus competitor estimators. Section 4 concludes.
## 2 Main Results
### Hierarchical Model
We adopt the same notation as in the prior section, denoting the unbiased estimator as \(\mathbf{\hat{\tau}_{u}}\), the biased estimator as \(\mathbf{\hat{\tau}_{b}}\), the true causal effects as \(\mathbf{\tau}\), and the vector of biases in \(\mathbf{\hat{\tau}_{b}}\) as \(\mathbf{\xi}\). All four objects lie in \(\mathbb{R}^{K}\). We assume both \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\) are heteroscedastic. Moreover, in keeping with the traditional approach in Empirical Bayes, we assume their diagonal covariance matrices \(\mathbf{\Sigma}_{u}=\text{diag}(\sigma_{uk}^{2})\in\mathbb{R}^{K\times K}\) and \(\mathbf{\Sigma}_{b}=\text{diag}(\sigma_{bk}^{2})\in\mathbb{R}^{K\times K}\) are known. Our goal is to obtain an estimator of \(\mathbf{\tau}\) that performs well under squared error loss.
Rather than deriving our estimator by positing its functional form and optimizing over Stein's unbiased risk estimate (as in Rosenman et al., 2020), we instead obtain it by proposing a hierarchical model for the data-generating process. We assume the following model for the data,
\[\mathbf{\tau} \sim\mathcal{N}\left(0,\eta^{2}\mathbf{I}_{K}\right), \tag{2}\] \[\mathbf{\xi} \sim\mathcal{N}\left(0,\gamma^{2}\mathbf{I}_{K}\right),\] \[\mathbf{\hat{\tau}_{u}}\mid\mathbf{\tau} \sim\mathcal{N}\left(\mathbf{\tau},\mathbf{\Sigma}_{u}\right),\text{ and}\] \[\mathbf{\hat{\tau}_{b}}\mid\mathbf{\tau},\mathbf{\xi} \sim\mathcal{N}\left(\mathbf{\tau}+\mathbf{\xi},\mathbf{\Sigma}_{b}\right),\]
where \(\eta^{2}\) and \(\gamma^{2}\) are unknown hyperparameters. This proposed data-generating process is similar to the model (1) from Green and Strawderman (1991), but it differs in a few key ways, as described below.
The model described in (2) imposes a normal prior on both \(\mathbf{\tau}\) and \(\mathbf{\xi}\), with both centered at \(0\). Conditional on a draw of these parameters, \(\mathbf{\hat{\tau}_{u}}\) is normally distributed about \(\mathbf{\tau}\) with diagonal covariance matrix \(\mathbf{\Sigma}_{u}\), while \(\mathbf{\hat{\tau}_{b}}\) is assumed normally distributed about \(\mathbf{\tau}+\mathbf{\xi}\) with covariance matrix \(\mathbf{\Sigma}_{b}\).
The sampling distributions of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\) are themselves an assumption. We are supposing that a Central Limit Theorem holds for each estimator; that we have boundedness and sufficient sample size such that the CLT can appropriately describe the sampling distribution; and that our estimation technique is independent within each stratum, such that the covariance matrices of both estimators are diagonal.
We do not put priors on \(\mathbf{\Sigma}_{u}=\text{diag}\left(\sigma_{u1}^{2},\ldots,\sigma_{uK}^{2}\right)\) and \(\mathbf{\Sigma}_{b}=\text{diag}\left(\sigma_{b1}^{2},\ldots,\sigma_{bK}^{2}\right)\), as these are assumed known (though in practice they are estimated from the data). The given model differs from the one in Green and Strawderman (1991) in that we allow for heteroscedasticity of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\), and we use a normal prior rather than an improper uniform prior for \(\mathbf{\tau}\).
Because the priors and likelihoods are normally distributed, we have conjugacy such that the posterior distribution of \(\mathbf{\tau}\) and \(\mathbf{\xi}\) will be jointly normal. We are interested in the posterior mean of \(\mathbf{\tau}\), which evaluates to
\[\mathbb{E}\left(\mathbf{\tau}\mid\mathbf{\hat{\tau}_{u}},\mathbf{\hat{\tau}_ {b}}\right) =\int_{-\infty}^{\infty}\mathbb{E}\left(\mathbf{\tau},\mathbf{\xi}\mid \mathbf{\hat{\tau}_{u}},\mathbf{\hat{\tau}_{b}}\right) \tag{3}\] \[=\left\{\frac{\eta^{2}\left(\hat{\tau}_{uk}\left(\gamma^{2}+ \sigma_{bk}^{2}\right)+\sigma_{uk}^{2}\hat{\tau}_{bk}\right)}{\gamma^{2}\left( \eta^{2}+\sigma_{uk}^{2}\right)+\eta^{2}\left(\sigma_{uk}^{2}+\sigma_{bk}^{2} \right)+\sigma_{uk}^{2}\sigma_{bk}^{2}}\right\}_{k=1}^{K}\]
where \(\hat{\tau}_{uk}\) and \(\hat{\tau}_{bk}\) are the \(k^{th}\) entries of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\), respectively.
This can be reorganized into a more suggestive form. We denote our shrinker as \(\mathbf{\hat{\psi}}(\gamma^{2},\eta^{2})\), a function of \(\gamma^{2}\) and \(\eta^{2}\). The \(k^{th}\) entry of the shrinker can be written as:
\[\psi_{k}(\gamma^{2},\eta^{2})=\underbrace{\left(\frac{\eta^{2}\left(\gamma^{2}+ \sigma_{bk}^{2}+\sigma_{uk}^{2}\right)}{\sigma_{uk}^{2}\left(\gamma^{2}+\sigma _{bk}^{2}\right)+\eta^{2}\left(\gamma^{2}+\sigma_{bk}^{2}+\sigma_{uk}^{2} \right)}\right)}_{a_{k}}\cdot\left(\underbrace{\frac{\left(\gamma^{2}+\sigma_{ bk}^{2}\right)}{\gamma^{2}+\sigma_{bk}^{2}+\sigma_{uk}^{2}}}_{\lambda_{k}} \hat{\tau}_{uk}+\underbrace{\frac{\sigma_{uk}^{2}}{\gamma^{2}+\sigma_{bk}^{2} +\sigma_{uk}^{2}}}_{1-\lambda_{k}}\hat{\tau}_{bk}\right)}_{a_{k}}\hat{\tau}_{ bk} \tag{4}\]
In the form of (4), we can interpret this posterior mean as "doubly shrunken." The first term, denoted \(a_{k}\), is a general shrinkage toward zero. This term is analogous to the shrinkage we observe when using the James-Stein estimator. If we take \(\eta^{2}\) to infinity (i.e., make the prior on \(\mathbf{\tau}\) a flat prior), then this term goes to \(1\).
The second term is a convex combination of \(\hat{\tau}_{uk}\) and \(\hat{\tau}_{bk}\), with weights \(\lambda_{k}\) and \(1-\lambda_{k}\). Observe that the weights are inversely proportional to the expected mean squared error of each estimator. If we take \(\sigma_{uk}^{2}\) to \(0\), then the weight concentrates on \(\hat{\tau}_{uk}\), and the opposite happens if we take \(\gamma^{2}+\sigma_{bk}^{2}\) to \(0\).
### Operationalizing the Estimator
We cannot directly use (4) as an estimator, because \(\gamma\) and \(\eta\) are unknown (indeed, they are just constructed from an imagined prior). Nonetheless, if we can obtain estimates \(\hat{\gamma}^{2}\) and \(\hat{\eta}^{2}\) of these parameters from the data, we can then define the functions \(\mathbf{a}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{K}\) and \(\mathbf{\lambda}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{K}\) according to
\[\mathbf{a}\left(\hat{\gamma}^{2},\hat{\eta}^{2}\right)=\left\{a_{k}\left(\hat{ \gamma}^{2},\hat{\eta}^{2}\right)\right\}_{k=1}^{K}=\left\{\left(\frac{\hat{ \eta}^{2}\left(\hat{\gamma}^{2}+\sigma_{bk}^{2}+\sigma_{uk}^{2}\right)}{ \sigma_{uk}^{2}\left(\hat{\gamma}^{2}+\sigma_{bk}^{2}\right)+\hat{\eta}^{2} \left(\hat{\gamma}^{2}+\sigma_{bk}^{2}+\sigma_{uk}^{2}\right)}\right)\right\} _{k=1}^{K},\]
and
Lastly, we can define a usable shrinkage estimator as
\[\mathbf{\hat{\psi}}(\hat{\gamma}^{2},\hat{\eta}^{2})=\mathbf{a}\left(\hat{\gamma}^{2},\hat{\eta}^{2}\right)\circ\left(\mathbf{\lambda}(\hat{\gamma}^{2},\hat{\eta}^{2} )\circ\hat{\mathbf{\tau}}_{\mathbf{u}}+\left(\mathbf{1}-\mathbf{\lambda}(\hat{\gamma}^{2}, \hat{\eta}^{2})\right)\circ\hat{\mathbf{\tau}}_{\mathbf{b}}\right)\]
where \(\circ\) represents an elementwise (Hadamard) product,
The remaining question is how precisely to estimate \(\hat{\gamma}^{2}\) and \(\hat{\eta}^{2}\). In analogy with Xie et al. (2012), we offer three possible paradigms: moment matching, Empirical Bayes maximum likelihood, and URE minimization. Each of these estimation procedures leads to a different version of our estimator.
#### 2.2.1 Moment Matching
One common approach is to find observable quantities whose expectations are equal to \(\gamma^{2}\) and \(\eta^{2}\) under hierarchical model (2). This moment-matching approach generates two candidate estimators.
First, under model (2), observe that
\[\mathbb{E}\left(||\mathbf{\hat{\tau}_{b}}-\mathbf{\hat{\tau}_{u}}||_{2}^{2} \right) =\operatorname{tr}(\mathbf{\Sigma}_{b})+\operatorname{tr}(\mathbf{\Sigma}_{u })+K\gamma^{2}\,,\] \[\mathbb{E}\left(||\mathbf{\hat{\tau}_{b}}||_{2}^{2}-||\mathbf{\hat{\tau}_{ u}}||_{2}^{2}\right) =\operatorname{tr}(\mathbf{\Sigma}_{b})-\operatorname{tr}(\mathbf{\Sigma}_{u })+K\gamma^{2}\,.\]
Assuming \(\mathbf{\Sigma}_{u}\) and \(\mathbf{\Sigma}_{b}\) are estimated well in practice, we can estimate \(\gamma^{2}\) using either
\[\hat{\gamma}_{\text{mm},1}^{2}=\frac{1}{K}\left(||\mathbf{\hat{\tau}_{u}}-\mathbf{ \hat{\tau}_{b}}||_{2}^{2}-\operatorname{tr}(\mathbf{\Sigma}_{u})-\operatorname{tr }(\mathbf{\Sigma}_{b})\right)_{+},\]
or
\[\hat{\gamma}_{\text{mm},2}^{2}=\frac{1}{K}\left(||\mathbf{\hat{\tau}_{b}}||_{2}^{ 2}-||\mathbf{\hat{\tau}_{u}}||_{2}^{2}+\operatorname{tr}(\mathbf{\Sigma}_{u})- \operatorname{tr}(\mathbf{\Sigma}_{b})\right)_{+},\]
where \(u_{+}=\max(u,0)\) represents the positive-part estimator.
Analogously we observe
\[\mathbb{E}\left(||\mathbf{\hat{\tau}_{u}}||_{2}^{2}\right)=\operatorname{tr}(\mathbf{ \Sigma}_{u})+K\eta^{2}\]
and hence we can use the moment matching estimator
\[\hat{\eta}_{\text{mm}}^{2}=\frac{1}{K}\left(||\mathbf{\hat{\tau}_{u}}||_{2}^{2}- \operatorname{tr}(\mathbf{\Sigma}_{u})\right)_{+}.\]
Plugging in our two candidate moment-matching estimators for \(\gamma^{2}\), and our single candidate moment-matching estimator for \(\eta^{2}\), we arrive at two candidate moment-matching versions of our shrinkage estimator,
\[\mathbf{\hat{\psi}_{\text{mm},1}}\equiv\mathbf{\hat{\psi}}(\hat{\gamma}_{\text{mm},1 }^{2},\hat{\eta}_{\text{mm}}^{2})\quad\text{ and }\quad\mathbf{\hat{\psi}_{\text{mm},2}}\equiv\mathbf{\hat{\psi}}(\hat{\gamma}_{\text{ mm},2}^{2},\hat{\eta}_{\text{mm}}^{2}).\]
#### 2.2.2 Empirical Bayes Maximum Likelihood
An alternative approach is to use maximum likelihood. Under model (2), the marginal distributions of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\) satisfy
\[\mathbf{\hat{\tau}_{u}}\sim\mathcal{N}\left(0,\eta^{2}\mathbf{I}_{K}+\mathbf{\Sigma}_{u} \right)\quad\text{ and }\quad\mathbf{\hat{\tau}_{b}}\sim\mathcal{N}\left(0,\eta^{2}\mathbf{I}_{K}+ \gamma^{2}\mathbf{I}_{K}+\mathbf{\Sigma}_{b}\right)\,.\]
Hence, we can write
\[f(\mathbf{\hat{\tau}_{u}},\mathbf{\hat{\tau}_{b}})\propto\prod_{k}\left(\eta^{2}+ \sigma_{uk}^{2}\right)^{-1/2}e^{-\frac{\hat{\tau}_{uk}^{2}}{2\left(\eta^{2}+ \sigma_{uk}^{2}\right)}}\,\times\prod_{k}\left(\eta^{2}+\gamma^{2}+\sigma_{ bk}^{2}\right)^{-1/2}e^{-\frac{\hat{\tau}_{bk}^{2}}{2\left(\eta^{2}+\gamma^{2}+ \sigma_{bk}^{2}\right)}}\,.\]
Taking the logarithm, we obtain
\[-\frac{1}{2}\sum_{k}\left(\log\left(\eta^{2}+\sigma_{uk}^{2}\right)+\frac{\hat {\tau}_{uk}^{2}}{\left(\eta^{2}+\sigma_{uk}^{2}\right)}\right)+\left(\log \left(\eta^{2}+\gamma^{2}+\sigma_{bk}^{2}\right)+\frac{\hat{\tau}_{bk}^{2}}{ \left(\eta^{2}+\gamma^{2}+\sigma_{bk}^{2}\right)}\right)\,.\]
This expression is concave, so we can obtain estimates \(\hat{\eta}_{\text{mle}}^{2}\) and \(\hat{\gamma}_{\text{mle}}^{2}\) as zeroes of its gradient, i.e. as solutions to the equations
\[-\frac{1}{2}\sum_{k}\left(\frac{1}{\eta^{2}+\sigma_{uk}^{2}}-\frac{\hat{\tau}_ {uk}^{2}}{\left(\eta^{2}+\sigma_{uk}^{2}\right)^{2}}\right)+\left(\frac{1}{ \eta^{2}+\gamma^{2}+\sigma_{bk}^{2}}-\frac{\hat{\tau}_{bk}^{2}}{\left(\eta^{2} +\gamma^{2}+\sigma_{bk}^{2}\right)^{2}}\right)=0\,, \tag{5}\]
\[-\frac{1}{2}\sum_{k}\left(\frac{1}{\eta^{2}+\gamma^{2}+\sigma_{bk}^{2}}-\frac{ \hat{\tau}_{bk}^{2}}{\left(\eta^{2}+\gamma^{2}+\sigma_{bk}^{2}\right)^{2}}- \right)=0\,, \tag{6}\]
where \(\hat{\eta}_{\text{mle}}^{2}=0\) in the case where the system has no positive solution for \(\eta\) and \(\hat{\gamma}_{\text{mle}}^{2}=0\) when the system has no positive solution for \(\gamma\). Plugging in these estimates gives us the second version of our estimator,
\[\boldsymbol{\hat{\psi}_{\text{mle}}}\equiv\boldsymbol{\hat{\psi}}(\hat{\gamma }_{\text{mle}}^{2},\hat{\eta}_{\text{mle}}^{2}).\]
#### 2.2.3 Unbiased Risk Estimate Minimization
A final approach eschews direct estimation of \(\eta^{2}\) and \(\gamma^{2}\), instead choosing their values to minimize an unbiased estimate of the shrinker's statistical risk. Using results from Rosenman et al. (2020), we can compute the risk of a shrinkage estimator \(\boldsymbol{\hat{\psi}}(\hat{\gamma}^{2},\hat{\eta}^{2})\). The full computation is given in Appendix A. The resultant risk value is
\[\mathbb{E}\left(||\boldsymbol{\hat{\psi}}(\hat{\gamma}^{2},\hat {\eta}^{2})-\boldsymbol{\tau}||_{2}^{2}\mid\boldsymbol{\tau},\boldsymbol{\xi} \right)=\operatorname{tr}(\boldsymbol{\Sigma}_{u})+\mathbb{E}\left(|| \boldsymbol{\hat{\psi}}(\hat{\gamma}^{2},\hat{\eta}^{2})-\boldsymbol{\hat{ \tau}_{u}}||_{2}^{2}\mid\boldsymbol{\tau},\boldsymbol{\xi}\right)- \tag{7}\] \[2\sum_{k}\mathbb{E}\Big{(}\sigma_{uk}^{2}\cdot\left(1-a_{k} \left(\hat{\gamma}^{2},\hat{\eta}^{2}\right)\cdot\lambda_{k}\left(\hat{\gamma }^{2},\hat{\eta}^{2}\right)\right)\mid\boldsymbol{\tau},\boldsymbol{\xi}\Big{)}\,.\]
Removing the (conditional) expectations, this yields an unbiased estimate of the statistical risk of shrinker \(\boldsymbol{\hat{\psi}}(\hat{\gamma}^{2},\hat{\eta}^{2})\), which can be computed directly from the data, i.e.
\[\text{URE}(\hat{\gamma}^{2},\hat{\eta}^{2})=\operatorname{tr}(\boldsymbol{ \Sigma}_{u})+||\boldsymbol{\hat{\psi}}(\hat{\gamma}^{2},\hat{\eta}^{2})- \boldsymbol{\hat{\tau}_{u}}||_{2}^{2}-2\sum_{k}\sigma_{uk}^{2}\cdot\left(1-a_ {k}\left(\hat{\gamma}^{2},\hat{\eta}^{2}\right)\cdot\lambda_{k}\left(\hat{ \gamma}^{2},\hat{\eta}^{2}\right)\right). \tag{8}\]
Equation 8 is useful, because it allows us to optimize hyperparameters over an unbiased estimate of the risk for the purposes of estimator design. Such an approach has substantial precedent in the literature (Li et al., 1985; Xie et al., 2012). We obtain
\[\boldsymbol{\hat{\psi}_{\text{ure}}}\equiv\boldsymbol{\hat{\psi}}(\hat{\gamma }_{\text{ure}}^{2},\hat{\eta}_{\text{ure}}^{2}),\qquad\text{where}\qquad(\hat{ \gamma}_{\text{ure}}^{2},\hat{\eta}_{\text{ure}}^{2})=\operatorname*{arg\,min}_ {\gamma^{2}\geq 0,\eta^{2}\geq 0}\text{URE}(\gamma^{2},\eta^{2})\,.\]
### Relationships Among Estimators
Curiously, our setting departs from that of Xie et al. (2012) in an unexpected way. Xie and co-authors found that, when the multivariate Gaussian distribution of interest was homoscedastic, their three estimator versions - moment-matching, maximum likelihood, and SURE-based - coincided. The different paradigms could therefore be seen as alternative approaches to deal with non-constant variance across components.
In our context, the estimators do not all coincide under homoscedasticity. If \(\boldsymbol{\hat{\tau}_{u}}\) and \(\boldsymbol{\hat{\tau}_{b}}\) are each homoscedastic, then it is straightforward to show that \(\boldsymbol{\hat{\psi}_{\text{mm,2}}}\) and \(\boldsymbol{\hat{\psi}_{\text{mle}}}\) coincide. However, if \(\boldsymbol{\hat{\tau}_{u}}\) and \(\boldsymbol{\hat{\tau}_{b}}\) are each homoscedastic, \(\boldsymbol{\hat{\psi}_{\text{mm,1}}}\) and \(\boldsymbol{\hat{\psi}_{\text{ure}}}\) are not equal to each other, nor are they equal to \(\boldsymbol{\hat{\psi}_{\text{mm,2}}}\) and \(\boldsymbol{\hat{\psi}_{\text{mle}}}\).
### Confidence Intervals
Valid confidence interval construction for shrinkage and data-combination estimators is an open area of research (Armstrong et al., 2020; Hoff and Yu, 2019). The results of Chen et al. (2021) indicate that frequentist confidence intervals for estimators combining observational and experimental data sources cannot be shortened, relative to those obtained from experimental data alone, when the magnitude of the confounding bias is unknown. For inference, we thus appeal to a common approach in the shrinkage literature, which is to use a relaxed notion of interval coverage known as "Empirical Bayes (EB) coverage" (Armstrong et al., 2020; Morris, 1983).
Formally, EB coverage is the requirement that coverage holds over repeated resampling of the entire data-generating process, i.e. over both stages of the hierarchical model defined by Expression 2. This is weaker than frequentist coverage, which would hold over repeated sampling of the data \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\) conditional on any value of the true causal effects \(\mathbf{\tau}\) and biases \(\mathbf{\xi}\). As discussed in Armstrong et al. (2020), EB coverage also implies that, for any values of \(\mathbf{\tau}\) and \(\mathbf{\xi}\), the intervals should cover a \(1-\alpha\) proportion of the true values of \(\mathbf{\tau}\) across repeated sampling of the data. However, there may be some individual entries in \(\mathbf{\tau}\) which are covered in less than \(1-\alpha\) fraction of samples of the data.
We seek to obtain an interval construction procedure that provides valid EB coverage rates for all of \(\mathbf{\hat{\psi}_{\text{mm},1}},\mathbf{\hat{\psi}_{\text{mm},2}},\mathbf{\hat{\psi}_{ \text{mle}}}\), and \(\mathbf{\hat{\psi}_{\text{ure}}}\). We pattern our construction exactly on the work of Armstrong et al. (2020), which constructed intervals for Stein-like estimators. Our goal is to design intervals that do not rely on the parametric assumptions about the distributions of \(\mathbf{\tau}\) and \(\mathbf{\xi}\), described in Equations 2, in order to guarantee coverage.
We suppose instead that all we know about \(\mathbf{\tau}\) is that its entries are sampled from a distribution with a second moment \(\eta^{2}\); and all we know about \(\mathbf{\xi}\) is that its entries are sampled from a distribution with a second moment \(\gamma^{2}\). These second moments will be replaced by estimates in practice. In describing the construction procedure, we use the generic notation \(\hat{\eta}^{2}\) and \(\hat{\gamma}^{2}\) to refer to estimates of these hyperparameters. These quantities should be understood to represent, e.g. \(\hat{\eta}^{2}_{\text{mm},1}\) and \(\hat{\gamma}^{2}_{\text{mm}}\) when using the estimator \(\mathbf{\hat{\psi}_{\text{mm},1}}\), \(\hat{\eta}^{2}_{\text{mle}}\) and \(\hat{\gamma}^{2}_{\text{mle}}\) when using \(\mathbf{\hat{\psi}_{\text{mle}}}\), etc.
The full confidence interval derivation can be found in Appendix B. The resultant form for confidence interval for each stratum causal effect estimate \(\psi_{k}\) is given in Definition 1.
**Definition 1** (Robust EB Confidence Intervals).: _The robust EB confidence interval for \(\psi_{k}\), the causal effect estimate for stratum \(k\) obtained from any of our double-shrinkage estimators, is given by_
\[\psi_{k}\pm cva(c_{k})\hat{a}_{k}\sqrt{\left(\hat{\lambda}^{2}_{k}\sigma^{2} _{uk}+(1-\hat{\lambda}_{k})^{2}\sigma^{2}_{bk}\right)}\,,\]
_where_
\[\hat{a}_{k}=\left(1-\frac{\sigma^{2}_{uk}\left(\hat{\gamma}^{2}+\sigma^{2}_{ bk}\right)}{\sigma^{2}_{uk}\left(\hat{\gamma}^{2}+\sigma^{2}_{bk}\right)+\hat{ \eta}^{2}\left(\hat{\gamma}^{2}+\sigma^{2}_{bk}+\sigma^{2}_{uk}\right)}\right),\quad\hat{\lambda}_{k}=\left(\frac{\left(\hat{\gamma}^{2}+\sigma^{2}_{bk} \right)}{\hat{\gamma}^{2}+\sigma^{2}_{bk}+\sigma^{2}_{uk}}\right),\]
_and \(cva(c_{k})\) is a stratum-specific inflation factor whose precise form can be found in Appendix B._
A final complicating detail that estimates of \(\eta^{2}\) and \(\gamma^{2}\) may be \(0\) in small samples, which can yield both the estimator and confidence intervals to concentrate on a single point, \(\{0\}\). Armstrong et al. (2020) considered this problem, and proposed an empirical truncation of the hyperparameter estimates to approximate the Bayesian posterior means under a flat prior - an idea originated in Morris (1983). We apply the analogous truncation procedure in computation of our confidence intervals. For more details, see Appendix B.
## 3 Simulations Using Data from the Women's Health Initiative
As an initial evaluation of our efficacy of our proposed estimators, we evaluate them on a familiar dataset: data from the Women's Health Initiative (WHI). The study is a 1991 study involving postmenopausal women, studying the health effects of hormone therapy. The WHI incorporated both a randomized controlled trial as well as an observational study.
In total, 16,608 women were included in the trial. Half were randomly selected to take 625 mg of estrogen and 2.5 mg of progestin, while the remainder were given a placebo. The observational component of the WHI included 53,054 women clinically comparable to those in the trial. Roughly one third of women in the observational study used estrogen plus progestin. The remaining women were not using hormone therapy (Prentice et al., 2005).
### Risk Reduction
As in Rosenman et al. (2023), we investigate the treatment's effect on coronary heart disease incidence. We draw \(1,000\) bootstrap samples from the RCT component and observational component of the WHI. With the data from each bootstrap sample, we compute each of our double-shrinkage estimators as well as several competitor estimators. The bootstrap is a useful proxy for sampling from a super-population, as the causal estimates computed on the RCT bootstrap samples are normally distributed and centered on the "true" causal estimates computed using the entire RCT sample. Thus, we can estimate the statistical risk of our estimators by computing their average mean squared error in estimating these true causal quantities.
We use the same set of three stratification variables from Rosenman et al. (2023): two clinically relevant variables (cardiovascular disease history, or "CVD", and age), as well as one clinically irrelevant variable ("Langley" scatter, a measure of solar irradiance). For full details on the choice of these variables, see Rosenman et al. (2023).
In Table 1, we provide results of \(1,000\) simulations in which the pseudo-RCTs contain \(1,000\) units. The rows correspond to different stratification schemes, created by stratifying on a subset of our stratification variables. In the second column, we give the number of strata. The following ten columns give the average mean square error of different estimators. For ease of interpretation, we report these average MSE values as a percentage of the average MSE of \(\mathbf{\hat{\tau}_{u}}\), which is a simple difference-in-means estimator applied to each stratum.
Any value less than \(100\%\) indicates a loss reduction. We compare our four proposed
estimators - \(\mathbf{\hat{\psi}_{\text{mm},1}},\mathbf{\hat{\psi}_{\text{mm},2}},\mathbf{\hat{\psi}_{\text{ mile}}}\), and \(\mathbf{\hat{\psi}_{\text{ure}}}\) - against several competitors. We consider \(\mathbf{\delta}_{1}\) and \(\mathbf{\delta}_{2}\) from Green et al. (2005), as well as \(\mathbf{\kappa}_{1+}\) and \(\mathbf{\kappa}_{2+}\) from Rosenman et al. (2023). For reference we also consider \(\mathbf{\hat{\tau}_{b}}\), the observational study estimator. We apply an inverse propensity weighting adjustment to \(\mathbf{\hat{\tau}_{b}}\) using the propensity score generated in Rosenman et al. (2020), but we do not assume unconfoundedness holds in this setting, meaning residual bias remains in \(\mathbf{\hat{\tau}_{b}}\). We also consider \(\mathbf{\hat{\tau}_{w}}\), a "precision-weighted" estimator which computes a convex combination of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\) where each estimator is weighted according to the inverse of its variance. In the given simulation regime - in which the RCT is much smaller than the observational study, and a careful propensity score adjustment has been applied to the observational data - both of these estimators have lower MSE than the estimator computed from the small pseudo-RCTs, \(\mathbf{\hat{\tau}_{u}}\).
Results in Table 1 are highly encouraging. In this data regime, the observational study estimator \(\mathbf{\hat{\tau}_{b}}\) and the precision-weighted estimator \(\mathbf{\hat{\tau}_{w}}\) exhibit significantly lower MSE than the RCT-derived unbiased estimator \(\mathbf{\hat{\tau}_{u}}\). The four competitor shrinkers - \(\mathbf{\kappa}_{1+},\mathbf{\kappa}_{2+},\mathbf{\delta}_{1}\), and \(\mathbf{\delta}_{2}\) - are all able to achieve significantly lower risk than \(\mathbf{\hat{\tau}_{u}}\), but they typically do no better than \(\mathbf{\hat{\tau}_{b}}\) and worse than \(\mathbf{\hat{\tau}_{w}}\). This is, at least in part, due to the fact that these estimators can only take data-driven convex combinations of \(\mathbf{\hat{\tau}_{u}}\) and \(\mathbf{\hat{\tau}_{b}}\), and do not provide any additional stabilization via shrinkage toward zero.
The four "double-shrinkers," \(\mathbf{\hat{\psi}_{\text{mm},1}},\mathbf{\hat{\psi}_{\text{mm},2}},\mathbf{\hat{\psi}_{ \text{mile}}}\), and \(\mathbf{\hat{\psi}_{\text{ure}}}\), by contrast, all consistently outperform the competitor shrinkers \(\mathbf{\kappa}_{1+},\mathbf{\kappa}_{2+},\mathbf{\delta}_{1}\), and \(\mathbf{\delta}_{2}\). Moreover, we see that all four double-shrinkers are able to outperform \(\mathbf{\hat{\tau}_{w}}\) in five of the seven conditions, corresponding to all stratification schemes with five or more subroups. It appears that \(\mathbf{\hat{\psi}_{\text{mile}}}\) performs slightly better among stratifications with few subgroups, while \(\mathbf{\hat{\psi}_{\text{mm},2}}\) is especially performant when there are a larger number of subgroups. The performance
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & & & & & & Loss as a \% of \(\mathbf{\hat{\tau}_{u}}\) Loss & & & & \\ \cline{3-10} Subgroup & \# of & \(\mathbf{\hat{\tau}_{b}}\) & \(\mathbf{\hat{\tau}_{w}}\) & \(\mathbf{\kappa}_{1+}\) & \(\mathbf{\kappa}_{2+}\) & \(\mathbf{\delta}_{1}\) & \(\mathbf{\delta}_{2}\) & \(\mathbf{\hat{\psi}_{\text{mm},1}}\) & \(\mathbf{\hat{\psi}_{\text{mm},2}}\) & \(\mathbf{\hat{\psi}_{\text{mile}}}\) & \(\mathbf{\hat{\psi}_{\text{ure}}}\) \\ \hline CVD & 2 & 9\% & 8\% & 36\% & 36\% & 100\% & 100\% & 21\% & 17\% & 16\% & 32\% \\ Age & 3 & 17\% & 15\% & 37\% & 30\% & 62\% & 73\% & 21\% & 18\% & 16\% & 34\% \\ Langley & 5 & 23\% & 20\% & 28\% & 22\% & 39\% & 52\% & 11\% & 10\% & 9\% & 15\% \\ CVD, Age & 6 & 42\% & 36\% & 39\% & 42\% & 40\% & 83\% & 21\% & 21\% & 21\% & 27\% \\ CVD, Langley & 10 & 35\% & 32\% & 34\% & 36\% & 33\% & 87\% & 17\% & 17\% & 17\% & 19\% \\ Age, Langley & 15 & 21\% & 18\% & 22\% & 21\% & 21\% & 44\% & 8\% & 7\% & 8\% & 10\% \\ CVD, Age, Langley & 30 & 51\% & 46\% & 51\% & 51\% & 51\% & 80\% & 20\% & 19\% & 20\% & 20\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Simulation results for each stratification scheme, with an RCT sample size of \(1,000\). The best-performing estimator is underlined for each stratification scheme.
of the double shrinkers, in aggregate, suggests that these estimators can do quite well in cases when the unbiased estimator has much higher variance than the biased estimator.
Next, we consider a setting in which the RCT sample size is \(8,000\) units. We provide these results in Table 2. The simulation set-up is otherwise identical.
This setting is distinct because, due to the larger RCT sample size, \(\mathbf{\hat{\tau}_{b}}\) has higher MSE than \(\mathbf{\hat{\tau}_{u}}\) in all stratifications involving three or more strata. As a consequence, the precision-weighted estimator \(\mathbf{\hat{\tau}_{w}}\) is also no longer particularly accurate, and is outperformed by the double shrinkers in all settings. The double shrinkers also consistently outperform each of \(\mathbf{\kappa}_{1+},\mathbf{\kappa}_{2+},\delta_{1}\), and \(\delta_{2}\).
The best-performing double shrinker is less clear in this data regime. \(\mathbf{\hat{\psi}_{\text{mle}}}\) does best in four of the seven settings, while \(\mathbf{\hat{\psi}_{\text{mm,1}}}\) does best in the remaining three. There is no clear relationship between the number of subgroups and the best-performing shrinker
We note briefly that \(\mathbf{\hat{\psi}_{\text{ure}}}\) is a surprising laggard among the double shrinkers, rarely achieving the performance of \(\mathbf{\hat{\psi}_{\text{mm,1}}},\mathbf{\hat{\psi}_{\text{mm,2}}}\), and \(\mathbf{\hat{\psi}_{\text{mle}}}\). This is an unexpected result, as the SURE-minimizing estimator in (Xie et al., 2012) was consistently the best estimator. One hypothesis is that the URE is somewhat challenging to minimize precisely over the positive orthant, and it is feasible that the R function optim() - used in our code - can only achieve approximate optima. Another hypothesis is that the high-noise setting in which we simulate may be unattractive for use of URE minimization.
Taken together, these simulation results demonstrate the practical improvement in point estimation that can be achieved using our new class of estimators. We next turn our attention to questions of inference.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & & & & & & Loss as a \% of \(\mathbf{\hat{\tau}_{u}}\) Loss & & & & & \\ \cline{3-11} Subgroup & \# of & \(\mathbf{\hat{\tau}_{b}}\) & \(\mathbf{\hat{\tau}_{w}}\) & \(\mathbf{\kappa}_{1+}\) & \(\mathbf{\kappa}_{2+}\) & \(\mathbf{\delta}_{1}\) & \(\mathbf{\delta}_{2}\) & \(\mathbf{\hat{\psi}_{\text{mm,1}}}\) & \(\mathbf{\hat{\psi}_{\text{mm,2}}}\) & \(\mathbf{\hat{\psi}_{\text{mle}}}\) & \(\mathbf{\hat{\psi}_{\text{ure}}}\) \\ \cline{3-11} Vars & Strata & & & & & & & & & & \\ \hline CVD & 2 & 83\% & 49\% & 69\% & 62\% & 100\% & 100\% & 31\% & 29\% & 25\% & 47\% \\ Age & 3 & 152\% & 76\% & 83\% & 95\% & 84\% & 90\% & 49\% & 48\% & 44\% & 65\% \\ Langley & 5 & 173\% & 90\% & 77\% & 144\% & 77\% & 83\% & 29\% & 28\% & 27\% & 35\% \\ CVD, Age & 6 & 233\% & 102\% & 87\% & 127\% & 80\% & 92\% & 60\% & 62\% & 64\% & 72\% \\ CVD, Langley & 10 & 148\% & 66\% & 68\% & 103\% & 64\% & 95\% & 33\% & 34\% & 34\% & 43\% \\ Age, Langley & 15 & 121\% & 66\% & 62\% & 104\% & 59\% & 81\% & 28\% & 29\% & 28\% & 37\% \\ CVD, Age, Langley & 30 & 157\% & 67\% & 78\% & 147\% & 67\% & 95\% & 38\% & 39\% & 39\% & 41\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulation results for each stratification scheme, with an RCT sample size of \(8,000\). The best-performing estimator is underlined for each stratification scheme.
### Confidence Interval Coverage
We also simulate confidence interval coverage, where the intervals are constructed using the robust Empirical Bayes method described in Section 2.4. Again, we draw \(1,000\) bootstrap samples from the data. In each sample, we construct the robust \(95\%\) intervals for each of the double shrinkers, for each subgroup causal effect, under each of the seven stratifications. We then compute the frequency with which the intervals cover the "true" causal effects computed from the entire RCT population, as well as their average lengths.
We first consider the case when the RCT includes just \(1,000\) units. In Table 3, we report the average coverage rate across the subgroups under each stratification scheme. In Table 4, we provide the average confidence interval length; for ease of interpretation, we report the length as a percentage of the average length of a standard Wald interval computed using the RCT data only (i.e., the default confidence interval that would be used in the absence of the observational data).
In Table 3, we can see that the average coverage rate is consistently above the nominal rate of \(95\%\), indicating that Empirical Bayes coverage indeed holds in practice. In fact, we appear to somewhat overcover, a likely consequence of the robustness property of the intervals defined via Definition 1. As discussed in Appendix B, these intervals are designed to achieve EB coverage against a "least-favorable" distribution of \(\mathbf{\tau}\) and \(\mathbf{\xi}\) which still satisfy a second-moment condition. If our distributions of \(\mathbf{\tau}\) and \(\mathbf{\xi}\) are not the most adversarial distributions possible, we may have some overcoverage.
Nonetheless, in Table 4, we observe that the intervals are consistently shorter than those computed the RCT data only, with length reductions ranging from \(15\) to \(50\%\) and reductions typically larger in settings with more subgroups. Moreover, we see that there is very little variability across the four double shrinkage estimators. All the estimators consistently achieve the desired EB coverage rate, and all achieve similar length reductions.
In Tables 5 and 6, we provide the analogous results for the case when the RCT sample size is \(8,000\). The story is essentially unchanged: Empirical Bayes coverage is achieved at the \(95\%\) level, and the intervals are \(15\) to \(50\%\) shorter on average than standard intervals computed using the RCT data alone.
Targeting the weaker notion of Empirical Bayes coverage means that certain subgroups' causal effects can be undercovered by our procedure, as long as average coverage rates remain at or above the nominal level of \(1-\alpha\). This is discussed in more detail in Appendix C.
## 4 Discussion
Our work contributes to the active and growing literature on designing estimators to trade off between biased and unbiased estimators of causal effects (Oberst et al., 2023; Chen et al., 2021; Yang et al., 2020; Cheng and Cai, 2021). Building on ideas discussed in prior Empirical Bayes papers - namely, those of Green and Strawderman (1991) as well as Xie et al. (2012) - we propose two innovations. First, we obtain the functional form of our estimator by appealing to a hierarchical model and computing a posterior mean. This yields the novel structure of our shrinkage estimator, which empirically estimates a convex combination of the biased and unbiased estimators, and
then also applies a stabilizing shrinkage toward zero. Second, we operationalize our estimator by considering several different paradigms for estimating hyperparameters in the hierarchical model. This yields four different versions of our "double shrinkage" estimator, three of which - \(\mathbf{\hat{\psi}_{\text{mm,1}}},\mathbf{\hat{\psi}_{\text{mm,2}}}\), and \(\mathbf{\hat{\psi}_{\text{mle}}}\) - appear to be consistently performant.
We also propose a method for constructing robust confidence intervals to guarantee coverage under a weaker inferential paradigm known as "Empirical Bayes coverage." This approach allows us to make use of the biased dataset in order to construct shorter confidence intervals for each subgroup causal effect. We demonstrate the utility of these methods using data from the Women's Health Initiative. Simulating from the data, we find that our double shrinkers consistently outperform competitor estimators in terms of mean squared error in estimating subgroup causal effects. Our confidence intervals also empirically achieve their nominal EB coverage rates, while achieving 15-50% average reductions in interval length.
There are many potential areas for further work on this topic. We find very little difference between the performances of \(\mathbf{\hat{\psi}_{\text{mm,1}}},\mathbf{\hat{\psi}_{\text{mm,2}}}\), and \(\mathbf{\hat{\psi}_{\text{mle}}}\) in our simulation study. Further work to identify the relative strengths of each estimator may yield stronger guidelines for when to make use of each estimator.
Relatedly, the surprisingly poor performance of \(\mathbf{\hat{\psi}_{\text{ure}}}\), the estimator built upon an unbiased risk estimate (URE), is one major contrast to the work of Xie et al. (2012). This presents an opportunity for further exploration and potential modification of the estimator. Two extensions discussed in Xie et al. (2012) - estimators that shrink toward a fixed point other than zero; and semiparametric shrinkers whose parameters are optimized over the URE - represent potential future steps.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{4}{c}{Coverage Rate} \\ \cline{3-6} Subgroup & \# of & \(\mathbf{\hat{\psi}_{\text{mm,1}}}\) & \(\mathbf{\hat{\psi}_{\text{mm,2}}}\) & \(\mathbf{\hat{\psi}_{\text{mle}}}\) & \(\mathbf{\hat{\psi}_{\text{ure}}}\) \\ \hline CVD & 2 & 98\% & 99\% & 99\% & 98\% \\ Age & 3 & 100\% & 100\% & 100\% & 99\% \\ Langley & 5 & 100\% & 100\% & 100\% & 100\% \\ CVD, Age & 6 & 100\% & 100\% & 100\% & 100\% \\ CVD, Langley & 10 & 99\% & 99\% & 99\% & 99\% \\ Age, Langley & 15 & 100\% & 100\% & 100\% & 100\% \\ CVD, Age, Langley & 30 & 100\% & 100\% & 100\% & 100\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Robust confidence interval average coverage rate for each stratification scheme, with an RCT sample size of \(1,000\). The reported values are the average coverage levels, where averages are computed across the subgroups and over the \(1,000\) bootstrap samples.
The double-shrinkage paradigm can also be extended to at least three more complex cases. First, we might consider settings involving multiple observational studies and multiple experiments, in which case we would want to construct estimators to compute weights for each of \(\mathbf{\hat{\tau}_{u1}},\dots,\mathbf{\hat{\tau}_{uu_{u}}},\mathbf{\hat{\tau}_{b1}},\dots, \mathbf{\hat{\tau}_{n_{b}}}\), and then shrink toward a fixed point. Second, we can consider the case beyond stratification, where CATEs are estimated by continuous functions of the covariates, \(\mathbf{\hat{\tau}_{u}}(x)\) and \(\mathbf{\hat{\tau}_{b}}(x)\), which need to be combined and shrunken appropriately. Lastly, we can consider settings involving continuous treatments, such that we combine biased and unbiased estimators of dose-response curves rather than treatment effect estimates.
|
2309.07439 | DePT: Decoupled Prompt Tuning | This work breaks through the Base-New Tradeoff (BNT)dilemma in prompt tuning,
i.e., the better the tuned model generalizes to the base (or target) task, the
worse it generalizes to new tasks, and vice versa. Specifically, through an
in-depth analysis of the learned features of the base and new tasks, we observe
that the BNT stems from a channel bias issue, i.e., the vast majority of
feature channels are occupied by base-specific knowledge, resulting in the
collapse of taskshared knowledge important to new tasks. To address this, we
propose the Decoupled Prompt Tuning (DePT) framework, which decouples
base-specific knowledge from feature channels into an isolated feature space
during prompt tuning, so as to maximally preserve task-shared knowledge in the
original feature space for achieving better zero-shot generalization on new
tasks. Importantly, our DePT is orthogonal to existing prompt tuning methods,
hence it can improve all of them. Extensive experiments on 11 datasets show the
strong flexibility and effectiveness of DePT. Our code and pretrained models
are available at https://github.com/Koorye/DePT. | Ji Zhang, Shihan Wu, Lianli Gao, Heng Tao Shen, Jingkuan Song | 2023-09-14T05:45:40Z | http://arxiv.org/abs/2309.07439v2 | # DePT: Decoupled Prompt Tuning
###### Abstract
This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the better the tuned model generalizes to the base (or target) task, the worse it generalizes to new tasks, and vice versa. Specifically, through an in-depth analysis of the learned features of the base and new tasks, we observe that the BNT stems from a channel bias issue - the vast majority of feature channels are occupied by base-specific knowledge, resulting in the collapse of task-shared knowledge important to new tasks. To address this, we propose the **D**ecoupled **P**rompt **T**uning (**DePT**) framework, which decouples base-specific knowledge from feature channels into an isolated feature space during prompt tuning, so as to maximally preserve task-shared knowledge in the original feature space for achieving better zero-shot generalization on new tasks. Importantly, our DePT is orthogonal to existing prompt tuning methods, hence it can improve all of them. Extensive experiments on 11 datasets show the strong flexibility and effectiveness of DePT. Code: [https://github.com/Koorye/DePT](https://github.com/Koorye/DePT).
## 1 Introduction
Recent few years have witnessed remarkable progress in large vision-language pre-trained models (VLPMs). One of the striking successes has been achieved by the contrastive language-image pretraining (CLIP) [29] model, which formulates the learning objective as a contrastive loss to learn an alignment between images and their textual descriptions in a common feature space. Despite the ability on capturing open-set visual concepts, the zero-shot generalization performance of VLPMs is greatly reduced when there is a severe _category shift_, _distribution shift_ or _domain shift_ between the upstream training data and the downstream tasks.
Inspired by the success of prompt engineering in NLP, _prompt tuning_ (or _context optimization_[44]) has emerged as a parameter-efficient learning paradigm to transfer knowledge in VLPMs to downstream tasks. Relying on a handful of training examples, prompt tuning learns a task-specific prompt (i.e., a set of trainable vectors) for each base (a.k.a. target) task, while keeping the weights of VLPMs frozen. Although the advantages are remarkable, existing methods usually fail to escape the Base-New Tradeoff (BNT) dilemma, i.e., the better the tuned/adapted model generalizes to the base task, the worse it generalizes to new tasks, and vice versa. Recently, many efforts [37, 43, 45] have been paid to alleviate the performance degradation of the tuned model on new tasks by applying anti-overfitting strategies during prompt tuning. However, the BNT problem is still far from being resolved and its underlying mechanisms are poorly understood over the past years.
In this work, we bridge the gap by proposing **D**ecoupled **P**rompt **T**uning (**DePT**), a first framework tackling the BNT problem in prompt tuning from a feature decoupling perspective. Concretely, through an in-depth analysis on the feature channels of the base and new tasks learned by the standard Image Text Matching (ITM) head, we observe that the BNT stems from a _channel bias_ issue: the vast majority of feature channels are occupied by _base-specific_ knowl
Figure 1: Performance of prompt tuning methods _w/_ or _w/o_ our DePT framework on base (seen) and new (unseen) tasks. DePT is orthogonal to existing prompt tuning methods, and it can improve the performance on both base and new tasks for all of them. The results are the average on 11 datasets in Table 2.
edge (i.e., task-specific knowledge of the base task), resulting in the collapse of _task-shared_ knowledge important to new tasks (Section 2.2). Inspired by this, we propose to conquer the BNT problem by simultaneously preserving base-specific and task-shared knowledge in feature channels during prompt tuning. To this end, a Channel Adjusted Transfer (CAT) head is devised to capture base-specific knowledge in an isolated feature space, thus facilitating the preservation of task-shared knowledge in the original feature space (Section 2.3). The CAT head is naturally orthogonal to the standard ITM head, hence they can complement each other to circumvent the BNT problem in prompt tuning. Specifically, by performing prompt tuning with the two heads, we establish remarkable zero-shot prediction performance on new tasks through the ITM head, without compromising the results on the base task obtained by the CAT head. Besides, by simply fusing base-specific and task-shard knowledge through the two heads at inference, we boost the performance on the base task significantly (Section 3.2).
**Flexibility and Effectiveness.** Our DePT framework is orthogonal to existing prompt tuning methods, hence it can be flexibly used to overcome the BNT problem for all of them. We evaluate DePT using a broad spectrum of baseline methods, including the _single-modal_ prompt tuning methods CoOp [44], CoCoOp [43] and KgCoOp [37], and the _multi-model_ prompt tuning method MaPLe [19]. Experimental results on 11 datasets show that DePT consistently improves the performance of those methods, regardless of whether there is a _category shift_ (Table 2), _distribution shift_ (Table 3) or _domain shift_ (Table 4) between base and new tasks, demonstrating the strong flexibility and effectiveness of DePT (Section 3.3). Notably, unlike most previous methods that improve the generalization capability of pretrained models either on base or new tasks, DePT enhances the performance of all those baselines on both base and new tasks - on the four strong baselines, DePT achieves an absolute gain of **1.31%\(\sim\)2.69%** (resp. **0.71%\(\sim\)2.05%**) on base (resp. new) tasks, averaged over 11 datasets (Figure 1).
**Contributions.** Our main contributions are threefold. **1)** We provide an insightful view to analyze the BNT problem in prompt tuning, and for the first time reveal that the BNT stems from the channel bias issue. **2)** We propose the DePT framework to tackle the BNT problem from a feature decoupling perspective, and DePT is orthogonal to existing prompt tuning methods. **3)** We perform experiments on 11 diverse datasets and show that DePT consistently enhances the performance of a broad spectrum of baseline methods1.
Footnote 1: We note that our DePT also has the potential to improve _adapter_ based task adaptation methods, and we consider it as future work.
## 2 Methodology
In this section, we first provide an insightful view to investigate the BNT problem in prompt tuning, then we elaborate on our proposed DePT framework. Before that, we introduce some preliminary concepts.
### Preliminaries
**Contrastive Language-Image Pre-training (CLIP) [33].** CLIP targets learning an alignment between image and text features produced by an image encoder and a text encoder, respectively. After seeing 400 million image-text association pairs and performing a contrastive learning paradigm in a common feature space, CLIP captures diverse open-set visual concepts that can readily be generalized to downstream applications. For example, we can achieve zero-shot classification by formulating the classification task as an image-text matching problem. Concretely, we first craft a prompt
Figure 2: Illustration of our DePT framework (in CoOp [44] style). Unlike previous methods (_right_) that use the same Image Text Matching (**ITM**) head for training/inference on the base task and zero-shot generalization on new tasks, our DePT (_left_) employs a Channel Adjusted Transfer (**CAT**) head to capture _base-specific_ knowledge in an isolated feature space, so as to maximally preserve _task-shared_ knowledge in the original feature space for achieving better zero-shot generalization on new tasks. At inference, we further boost the performance on the base task by simply fusing base-specific and task-shard knowledge through the two heads. \(\copyright\) denotes the concatenation operation.
(e.g., "a photo of a") to obtain the text features of all inner-task classes, by feeding the class-extended prompt (i.e., "a photo of a [CLASS]") to the text encoder. Then, we use the image encoder to obtain the image feature of an input example, and predict the class of the example by comparing the cosine distances between the image feature and the text features of classes.
**Prompt Tuning with the Image-Text Matching Head.** Instead of using a hand-crafted prompt (e.g., "a photo of a"), prompt tuning aims to learn a _task-specific_ prompt using a handful of training data from the base (or target) task. Let \([\mathbf{v}]_{1}[\mathbf{v}]_{2}...[\mathbf{v}]_{l}\) denote \(l\) trainable vectors, we forward the class-extended prompt \(\mathbf{c}_{i}=[\mathbf{v}]_{1}[\mathbf{v}]_{2}...[\mathbf{v}]_{l}\)[CLASS] to the text encoder \(g(\cdot)\) to obtain the text feature of the \(i\)-th class: \(g(\mathbf{c}_{i})\). Let \(\mathbf{f}\) denote the the image feature of an example \(\mathbf{x}\) obtained by the image encoder, the task-specific prompt can be optimized using a parameter-free Image-Text Matching (ITM) head, which formulates the learning objective as:
\[\mathcal{L}_{\texttt{ITM}}=-\sum_{i}\mathbf{y}_{i}\log\mathcal{P}_{\texttt{ITM}}( \mathbf{c}_{i}|\mathbf{x}), \tag{1}\]
where \(\mathbf{y}\) is the one-hot label,
\[\mathcal{P}_{\texttt{ITM}}(\mathbf{c}_{i}|\mathbf{x})=\frac{\exp(<g(\mathbf{c}_{i}),\mathbf{f }>/\tau)}{\sum_{i=1}^{M}\exp(<g(\mathbf{c}_{i}),\mathbf{f}>/\tau)}, \tag{2}\]
\(<\cdot>\) denotes cosine similarity, \(M\) is the number of classes, and \(\tau\) is the temperature learned by CLIP. During training, the gradients calculated in the ITM head can be back-propagated all the way through the text encoder \(g(\cdot)\) to optimize the trainable vectors in the prompt.
### A Closer Look at the BNT Problem
Due to the BNT problem, adapting the pretrained model to the base task \(\mathcal{T}_{\mathrm{base}}\) will decrease the generalization of the model on the new task \(\mathcal{T}_{\mathrm{new}}\), and vice versa. In this part, provide an insightful view to analyze the BNT problem.
**1** Deriving an Oracle Model on \(\mathcal{T}_{\mathrm{base}}\) and \(\mathcal{T}_{\mathrm{new}}\).** We start the investigation of the BNT problem by deriving an _oracle_ model on \(\mathcal{T}_{\mathrm{base}}\) and \(\mathcal{T}_{\mathrm{new}}\). Specifically, we adapt the pretrained model to both \(\mathcal{T}_{\mathrm{base}}\) and \(\mathcal{T}_{\mathrm{new}}\) by jointly training the model on the data of the two tasks during prompt tuning. The derived oracle model therefore can be seen as an approximation of a _BNT-free_ model, because it does not overfit to either \(\mathcal{T}_{\mathrm{base}}\) or \(\mathcal{T}_{\mathrm{new}}\). Here, we use the word "oracle", because the model is derived by leveraging the data of the new task, which is not available in prompt tuning.
**2 Calculating Channel Importance for \(\mathcal{T}_{\mathrm{base}}\) or \(\mathcal{T}_{\mathrm{new}}\).** Denote \(\mathbf{f}_{j}\) and \(\mathbf{e}_{*}\in\{\mathbf{e}_{i}=g(\mathbf{c}_{i})\}_{i=1}^{M}\) the \(d\)-dimensional image and text features of the example \(\mathbf{x}_{j}\) in the learned feature space, respectively. We define the Channel Importance (**CI**) of the \(r\)-th (\(r=1,...,d\)) feature channel for the task \(\mathcal{T}_{\mathrm{base}}\) or \(\mathcal{T}_{\mathrm{new}}\) as follows:
\[\mathbf{CI}^{(r)}=\frac{1}{N}\sum_{j=1}^{N}\frac{\mathrm{ReLU}(\bar{\mathbf{e}}_{* }^{(r)}\bar{\mathbf{f}}_{j}^{(r)})}{1/M\sum_{i=1}^{M}\mathrm{ReLU}(\bar{\mathbf{e}}_{i }^{(r)}\bar{\mathbf{f}}_{j}^{(r)})}, \tag{3}\]
where \(\bar{}=\cdot/||\cdot||_{2}\), \(N\) is the number of examples in the task. \(\mathrm{ReLU}\)[1] is used to avoid the denominator being equal to 0. The derived Eq. (3) has an intuitive explanation: a feature channel is of greater importance if it can better distinguish the classes in the task, i.e., the image features are close to the ground-truth text features and far away from the text features of other classes at this channel.
**Analysis.** Based on **1** and **2**, we wonder what are the differences between the model learned by standard prompt tuning and the derived oracle model w.r.t. the obtained CI distributions of \(\mathcal{T}_{\mathrm{base}}\) and \(\mathcal{T}_{\mathrm{new}}\). To this end, we take CoOp [44] as the baseline scheme to establish the oracle model using the training data from \(\mathcal{T}_{\mathrm{base}}\cup\mathcal{T}_{\mathrm{new}}\) (see **Sup. Mat.(A)** for details). In Figure 3, we plot the calculated CI distributions of the testing data of \(\mathcal{T}_{\mathrm{base}}\) and \(\mathcal{T}_{\mathrm{new}}\) for CoOp and the oracle model on the datasets FGVCAircraft [25] and EuroSAT [11]. As seen in **(a)(c)**, the CI distributions of base and new tasks obtained by Oracle show greater consistency compared to that by CoOp, which is further confirmed by the results in **(b)(d)**, where the computed values of "CI-Base : CI-New" are close to (resp. larger than) **1.0** in most
Figure 3: Channel Importance (**CI**) distributions of base and new tasks learned by the Oracle model and CoOp [44] w/ or w/o our DePT on the datasets FGVCAircraft [25] and EuroSAT [11]. In (a)(c), the indexes of channels in the \(x\)-axis are reordered based on the CI of the base task, a point indicates a channel. In (b)(d), the frequency distributions of CI-Base : CI-New are presented, where CI-Base and CI-New are the CI of base and new tasks, respectively; “H” denotes the Harmonic mean [43] of base-task and new-task accuracies.
cases for Oracle (resp. CoOp). In **(b)(d)**, we also report the Harmonic mean (H) [43] of base-task and new-task accuracies to evaluate the tradeoff. As observed, the oracle model outperforms CoOp considerably, which reveals that most feature channels produced by the oracle model contain _task-shared_ knowledge that is valuable for the generalization on both base and new tasks. What's more, from the results of CoOp in the figure, it is obvious that the achieved CI values of new tasks are significantly lower than that of base tasks at the vast majority of channels. This indicates that most feature channels are occupied by _base-specific_ knowledge after prompt tuning, resulting the collapse of task-shared knowledge in the feature channels - we refer this as a _channel bias_ issue in this work. Inspired by the above observations, we raise the following question:
_Relying solely on the training data of the base task, can we preserve both base-specific and task-shared knowledge in feature channels to overcome the BNT problem in prompt tuning?_
### Decoupled Prompt Tuning
In this work, we answer the above question by proposing Decoupled Prompt Tuning (DePT), a first framework mitigating the BNT problem in prompt tuning from a feature decoupling perspective. An overview of the DePT framework is presented in Figure 2.
**A Plug-and-play Channel Adjusted Transfer Head.** Due to the channel bias issue, striving for base-specific knowledge during prompt tuning will inevitably trigger the catastrophic forgetting of task-shared knowledge in the learned feature channels. To overcome this, our DePT is proposed to decouple base-specific knowledge from feature channels into an isolated feature space, thus facilitating the preservation of task-shared knowledge in the original feature space. To this end, we develop a Channel Adjusted Transfer (CAT) head, which is naturally orthogonal to the standard ITM head (in Section 2.1). Denote \(\mathcal{S}_{\text{ing}}=\{\mathbf{f}_{j}\}_{j=1}^{J}\) and \(\mathcal{S}_{\text{text}}=\{\mathbf{e}_{j}\}_{j=1}^{J}\) the sets of image and text features for a batch of training examples respectively, and \(\mathbf{f}_{j}\), \(\mathbf{e}_{j}\in\mathbb{R}^{d}\). First, the CAT head leverages a channel-wise Transformation (cwT) layer to transform both \(\mathcal{S}_{\text{ing}}\) and \(\mathcal{S}_{\text{text}}\) to a new common feature space. Formally, \(\mathcal{S}_{\text{ing}}^{\prime}=\{\mathbf{f}_{j}^{\prime}\}_{j=1}^{J}\), and
\[\mathbf{f}_{j}^{\prime}=\mathbf{\gamma}\odot\mathbf{f}_{j}+\mathbf{\beta},\quad j=1,...,J, \tag{4}\]
where \(\mathbf{\gamma}\), \(\mathbf{\beta}\in\mathbb{R}^{d}\) are trainable scaling and shift vectors. Denote \(\mathcal{S}_{\text{text}}^{\prime}=\{\mathbf{e}_{j}^{\prime}\}_{j=1}^{J}\) similar to \(\mathcal{S}_{\text{ing}}^{\prime}=\{\mathbf{f}_{j}^{\prime}\}_{j=1}^{J}\). Next, the CAT head employs a linear classifier to promote the mining of base-specific knowledge using \(\mathcal{S}_{\text{ing}}^{\prime}\) and \(\mathcal{S}_{\text{text}}^{\prime}\). Denote \(\mathcal{S}_{\cup}=\mathcal{S}_{\text{ing}}^{\prime}\cup\mathcal{S}_{\text{ text}}^{\prime}=\{\mathbf{s}_{j}\}_{j=1}^{2J}\) and \(\mathcal{Y}_{\cup}=\{\mathbf{y}_{j}\}_{j=1}^{2J}\), where \(\mathbf{y}_{j}\in\mathbb{R}^{M}\) is the one-hot label for \(\mathbf{s}_{j}\), and \(M\) is the number of classes of the task. For each pair of (\(\mathbf{s}\), \(\mathbf{y}\)), the CAT head targets minimizing the following cross-entropy loss:
\[\mathcal{L}_{\texttt{CAT}}=-\sum_{i}\mathbf{y}_{i}\log\mathcal{P}_{\texttt{CAT}} (\mathbf{c}_{i}|\mathbf{x}), \tag{5}\]
where \(\mathbf{y}\) denotes the one-hot label,
\[\mathcal{P}_{\texttt{CAT}}(\mathbf{c}_{i}|\mathbf{x})=\sigma(\mathbf{W}\cdot\mathbf{s})[i], \tag{6}\]
\(\mathbf{W}\in\mathbb{R}^{M\times d}\) denotes the projection matrix for classification, \(\sigma\) denotes the softmax operation. During training, the gradients calculated by \(\mathcal{L}_{\texttt{CAT}}\) are back-propagated to update the weights in the CAT head (i.e., \(\mathbf{\gamma}\), \(\mathbf{\beta}\), \(\mathbf{W}\)) as well as the trainable prompt (i.e., \([\mathbf{v}_{1}]_{1}[\mathbf{v}_{2}...[\mathbf{v}_{l}]_{l}\)). Ablation studies in Section 3.2 show that using two independent cwT layers is more effective than using a shared cwT layer to transform \(\mathcal{S}_{\text{ing}}\) and \(\mathcal{S}_{\text{text}}\) to the new feature space.
**Prompt Tuning with Dual Heads.** Instead of solely using the designed CAT head to facilitate the preservation of task-shared knowledge during prompt tuning, our DePT also retains the standard ITM head to learn an alignment of positive image-text pairs in the original feature space, which is of great importance for establishing better zero-shot generalization on new tasks (as proven in Section 3.2). Thus, the overall learning objective of DePT is expressed as:
\[\mathcal{L}=\lambda\mathcal{L}_{\texttt{CAT}}+(1-\lambda)\mathcal{L}_{\texttt {ITM}}, \tag{7}\]
where \(\lambda\) is a balance weight controlling the relative importance of the two loss.
**Test-time Knowledge Fusion for the Base Task.** At inference, the standard ITM head is used to achieve zero-shot generalization/prediction on new tasks in the original feature space. For the base task, our CAT head directly takes the image feature of a testing example as input to predict the in-distribution class label with the linear classifier. Notably, we can further boost the performance on the base task by simply fusing base-specific knowledge in the CAT head as well as task-shard knowledge in the ITM head at inference. According to Eq. (2) and Eq. (6), the prediction probability of the in-distribution testing example \(\mathbf{x}\) belonging to the \(i\)-th class can be computed as:
\[\mathrm{p}(\mathbf{c}_{i}|\mathbf{x})=\lambda\mathcal{P}_{\texttt{CAT}}(\mathbf{c}_{i}| \mathbf{x})+(1-\lambda)\mathcal{P}_{\texttt{ITM}}(\mathbf{c}_{i}|\mathbf{x}), \tag{8}\]
where the balance weight \(\lambda\) in Eq. (7) is directly used to control the contributions of the two heads for simplification. Pytorch-like pseudocode for the implementation of DePT is presented in **Sup. Mat.(B)**.
## 3 Experiments
In this section, we first present ablation studies to analyze the impacts of different factors on DePT. Next, we validate the flexibility and effectiveness of DePT by applying it to several baseline schemes. We start with an introduction of experimental setup below.
### Experimental Setup
**Baselines.** We apply our DePT to a broad spectrum of baseline approaches, including the _single-modal_ prompt tuning methods CoOp [44], CoCoOp [43], KgCoOp [37] and the _multi-modal_ prompt tuning method MaPLe [19]. In particular, by inserting trainable prompts in both the image encoder and text encoder of CLIP, MaPLe yielded state-of-the-art performance. More details of the four baseline methods are available in **Sup. Mat.(C)**.
**Datasets.** We conduct experiments on several datasets from diverse sources. Concretely, for the settings of _base-to-new generalization_ and _cross-dataset generalization_, we use **11** datasets: ImgNet [6], Caltech [7], OxfordPets [27], StanfordCars [21], Flowers [26], Food101 [2], FGVCAircraft [25], EuroSAT [11], UCF101 [32], DTD [5], and SUN397 [36]; for the _domain generalization_ setting, we use ImgNet as the source domain (i.e. the base task), and its four variants ImgNet-V2 [31], ImgNet-Sketch [35], ImgNet-A [8] and ImgNet-R [12] as target domains (i.e. new tasks).
**Implementation Details.** Our implementations is based on the open-source repository of MaPLe [19]2. For each baseline method, we use the same experimental setup (e.g., feature backbone, prompt lengths and learning rate) as it used in the original implementation. For DePT, the value of \(\lambda\) in Eq. (7)/(8) is set to 0.7; and the learning rate for updating the parameters in the devised CAT head is set to \(6.5\times\delta\), where \(\delta\) is the adopted learning rate for prompt tuning by each baseline. For fair comparison, all baselines w/ or w/ our DePT are trained for 10 epochs in our experiments. The above hyperparameters are fixed across all datasets. Unless stated otherwise, the base task is constructed as a many-way 16-shot task. We report base-task and new-task accuracies as well as their harmonic-mean (H) [43] averaged over 3 runs to compare the performance of different methods. All experiments are performed based on NVIDIA V100.
Footnote 2: [https://github.com/mauzirkhattak/multimodal-prompt-learning](https://github.com/mauzirkhattak/multimodal-prompt-learning)
### Ablation Studies
Here, we first conduct ablative analysis for DePT in Table 1. Then, we investigate the impact of the balance weight \(\lambda\) on DePT in Figure 4 (**Left**). Next, we scrutinize the performance of DePT on different training epochs in Figure 4 (**Right**). Finally, we validate the effectiveness of DePT under different shots (i.e., numbers of training examples per class in base tasks) in Figure 5. We perform experiments using the baseline CoOp [44] in the base-to-new generalization setting, results averaged on the **11** datasets are reported.
**Effectiveness of the Revised Components in DePT.** Our DePT contains two key components, including a plug-and-play CAT head for capturing base-specific knowledge in an isolated feature space, as well as a test-time knowledge fusion strategy for exploring both base-specific and task-shard knowledge to improve the performance on the base task. We conduct component-wise analysis on the two components by progressively adding one of them to the baseline method CoOp [44] in Table 1, where the results are averaged over 11 datasets. From 1 and 2, we observe that integrating our CAT head with the standard ITM head for prompt tuning improves both base-task and new-task accuracies of the baseline method, achieving a clear enhancement of the harmonic-mean (by **1.45**%). Notably, 2 outperforms 1 by up to **2.05**% in terms of new-task accuracy, which demonstrates the effectiveness of our CAT head in facilitating the preservation of task-shared knowledge during prompt tuning. Besides, we also compare the CAT head with its three variants. Concretely, we replace the two independent cwT layers (one for each modality) with a shared cwT layer in _v1_, we replace the linear classifier with an ITM classifier in _v2_, and we only feed image features to the liner classifier in _v3_ (more details are in **Sup. Mat.(D)**). As shown, all the three variants underperform our designed CAT head. What is noteworthy is that directly appending a standard ITM classifier in the cwT-transformed feature space also considerably improves the performance of the baseline on new tasks (see _v2_), showing the effectiveness of the CAT head for decoupling base-specific and task-shared knowledge during prompt tuning. Besides, we see that using only image features in the CAT head damages the performance on the base task (see _v3_). This is possibly due to that relying on a limited number of examples for model optimization, the parameters in the CAT head may overfit to the training data of the base task when the gradients of \(\mathcal{L}_{\text{CAT}}\) can not be back-propagated to the text encoder to optimize the parameters of the prompt. What's more, by simply fusing base
\begin{table}
\begin{tabular}{c|c|c c|c|c|c c} \multirow{2}{*}{Setting} & \multirow{2}{*}{ITM Head} & \multicolumn{2}{c|}{CAT Head} & \multicolumn{2}{c|}{Test-time fusion} & \multicolumn{2}{c}{Average accuracy over 11 datasets (\%)} \\ \cline{3-8} & & cwT+LC & cwT+ITM & for the _Base_ task & Base & New & H \\ \hline \multicolumn{8}{l}{()ITM only (**Baseline**)} & ✓ & \(\times\) & \(\times\) & \(\times\) & 81.50 & 69.77 & 75.18 \\ \multicolumn{8}{l}{2} & ITM+CAT (**CAT**=c**vT+LC**)} & ✓ & ✓ & \(\times\) & \(\times\) & 82.14 (+0.64) & **71.82 (+2.05)** & 76.63 (+1.45) \\ \multicolumn{8}{l}{_v1_. Use a shared cwT in CAT} & ✓ & ✓ & \(\times\) & \(\times\) & 82.24 (+0.74) & 70.85 (+1.08) & 76.12 (+0.94) \\ \multicolumn{8}{l}{_v2_. Use an ITM classifier in CAT} & ✓ & \(\times\) & ✓ & \(\times\) & 82.16 (+0.66) & 71.31 (+1.54) & 76.35 (+1.17) \\ \multicolumn{8}{l}{_v3_. Only use image features in CAT} & ✓ & ✓ & \(\times\) & \(\times\) & 81.11 (-0.39) & 70.93 (+1.16) & 75.68 (+0.50) \\ \multicolumn{8}{l}{3} & ITM+CAT+**Fusion (**Our DePT**)**} & ✓ & ✓ & ✓ & **83.66 (+2.16)** & **71.82 (+2.05)** & **77.29 (+2.11)** \\ \end{tabular}
\end{table}
Table 1: Ablation study for the designed components of DePT. The baseline method is CoOp [44], and the average accuracy on 11 datasets are reported. The metric “H” indicates the Harmonic-mean [43] of base-task and new-task accuracies. “LC”: Linear Classifier.
specific knowledge in the CAT head and task-shard knowledge in the ITM head at inference, the performance on the base task can be enhanced considerably, achieving an absolute gain of **2.16**% in accuracy compared to the baseline method, as shown in 3.
**Impact of the Balance Weight \(\lambda\) on DePT.** In the proposed DePT, we employ the balance weight \(\lambda\) to control the relative importance of the standard ITM head and our devised CAT head in Eq. (7)/(8). It is necessary to investigate the impact of \(\lambda\) on the performance of DePT. To this end, we set \(\lambda\) to the values of \(\{0.0,0.1,0.2,...,1.0\}\), and report the average testing results on the 11 datasets in Figure 4 (**Left**). Overall, the performance of DePT gradually increases as the \(\lambda\) value grows from 0.0 to 0.7, after which the performance of DePT gradually decreases and reaches the lowest value when \(\lambda\)=1.0. In particular, when \(\lambda\)=0.7 DePT establishes the best performance on both base and new tasks. What is noteworthy is that when \(\lambda\)=1.0, i.e., only our CAT head is used for training, the performance of DePT on new tasks sharply decreases, which suggests that retaining the ITM head to learn an alignment of positive image-text features in the original feature space is of great importance for achieving better zero-shot prediction performance on new tasks.
**Performance of DePT at Different Training Epochs.** In Figure 4 (**Right**), we report the obtained results of the baseline method w/ or w/o our DePT at different training epochs. As can be observed, our DePT consistently improves the baseline method after epoch 2, in terms of base-task, new-task, and harmonic-mean (H) accuracies. One possible reason for the failure case at epoch 1 is that the weights in the CAL head (i.e., \(\boldsymbol{\gamma}\), \(\boldsymbol{\beta}\), \(\boldsymbol{W}\)) are initialized randomly, thus it is difficult for the CAL head to fully capture base-specific knowledge with only one training epoch. We also see the baseline method fails to address the BNT problem during prompt tuning - overall, the accuracy of the baseline on new tasks decreases as it's performance on base tasks increases from epoch 1 to epoch 10. It is obvious that DePT helps the baseline mitigate the BNT problem effectively. Overall, the performance of the baseline method as well as our DePT is saturated at epoch 10.
**Robustness of DePT under Different Shots.** All the aforementioned results are obtained on many-way 16-shot base tasks - for every base task, 16 training examples are sampled from each class for prompt tuning. It is interesting to further scrutinize the robustness of our DePT under different shots. To achieve this, we set the shots to \(\{4,8,16\}\), and report the average testing results of the baseline method w/ or w/o our DePT on the 11 datasets in Figure 5. As can be observed, our DePT consistently improves the baseline method across all 4-shot, 8-shot and 16-shot settings, in terms of base-task, new-task, and harmonic-mean (H) accuracies, showing the robustness of DePT under few shots. In the following section, we follow [19, 43, 37, 44] to evaluate methods in the 16-shot setting. **Sup. Mat.(E)** also presents the performance of DePT on the four baseline approaches for 4-shot and 8-shot settings, where DePT still maintains the advantages as in the 16-shot setting.
### Experimental Results
In this part, we apply our DePT to the four baseline approaches, and demonstrate the flexibility and effectiveness of DePT in the following settings: **i)** base-to-new generalization (in Table 2), **ii)** cross-dataset generalization (in Table 3), and **iii)** domain generalization (in Table 4).
**Base-to-New Generalization.** The base-to-new generalization setting evaluates whether the models learned on base tasks can generalize to new tasks with unseen classes, i.e., there is a _category shift_ between base and new tasks. For each dataset, we first construct a base task and a new task by equally dividing the dataset into two sets of classes, then we perform prompt tuning on the base task and test the learned model on both the base and new tasks. Table 2 shows the base-to-new generalization performance of the four baselines w/ or w/o DePT on 11 datasets. From the average results, we observe a tradeoff between base-task and new-task accuracies for most of the baseline methods, e.g., CoCoOp outperforms CoOp on new tasks but underperforms CoOp on base tasks. Notably, DePT consistently improves the performance of all baselines without any performance tradeoffs on base and new tasks. Specifically, DePT improves each baseline in terms of all base-task, new-task and harmonic-mean accuracies. From the results on 11 datasets, we also observe some failure cases, e.g., on the Oxford-Pets dataset, DePT fails to bring clear performance gains on most baseline methods. Possible reasons are as following. **1)** The optimal hyperparameters of DePT for different
Figure 4: **Left**: Impact of the balance weight \(\lambda\) in Eq. (7)/(8) on DePT. **Right**: Performance of DePT at different training epochs.
Figure 5: Robustness of DePT under different shots (i.e., numbers of training samples per class in each base task).
datasets and baselines are quite different, while we fix them across all datasets and baselines. **2)** When the _category shift_ between downstream tasks and the upstream data for model (i.e. CLIP) pretraining is too small, the advantages of our DePT as well as prompt tuning for task adaptation become less significant.
**Cross-Dataset Generalization.** The cross-dataset generalization setting evaluates whether the model learned on the source dataset can generalize to unseen target datasets, i.e., there is a _distribution shift_ between base and new tasks. In this experiment, we follow the baselines to regard ImgNet as the source dataset and other 10 datasets as target datasets. Table 3 presents the performance of the four baselines w/ or w/o our DePT on the 11 datasets. As can be seen, our DePT consistently improves the accuracy on the source dataset for all baselines, without compromising the performance on 10 target datasets in most cases. Notably, on average our DePT consistently enhances the performance of all baselines on both the source and target datasets, suggesting DePT is robust to the distribution shift. Moreover, we see that MaPLe establishes the best performance among the four baseline methods in Table 2, but its performance is inferior to CoCoOp and KgCoOp in Table 3. This is probably due to the
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c||}{Avg over 11 datasets} & \multicolumn{3}{c|}{ImageNet} & \multicolumn{3}{c}{Caltech101} & \multicolumn{3}{c}{OxfordPets} \\ \cline{2-13} & Base & New & H & Base & New & H & Base & New & H & Base & New & H \\ \hline CoOp [44] & 81.50 & 69.77 & 75.18 & 76.57 & 69.97 & 73.12 & 98.17 & **94.83** & **96.47** & **95.57** & 97.53 & **96.54** \\
**+DePT** & **83.66** & **71.82** & **77.29** & **77.13** & **70.10** & **73.45** & **98.33** & 94.33 & 96.29 & 94.70 & **97.63** & 96.14 \\ \hline CoCoOp [43] & 81.18 & 72.18 & 76.42 & 75.90 & **70.73** & 73.23 & 97.70 & 93.20 & 95.40 & **94.93** & **97.90** & **96.39** \\
**+DePT** & **83.80** & **72.89** & **77.97** & **76.87** & 70.47 & **73.53** & **98.37** & **93.87** & **96.06** & 94.03 & 97.20 & 95.59 \\ \hline KgCoOp [37] & 80.93 & 73.88 & 77.25 & 76.17 & **70.53** & 73.24 & 97.87 & 94.03 & 95.91 & **95.47** & **97.80** & **96.62** \\
**+DePT** & **83.62** & **75.04** & **79.10** & **77.03** & 70.13 & **73.42** & **98.30** & **94.60** & **96.41** & 94.33 & 97.23 & 95.76 \\ \hline MaPLe [19] & 83.54 & 73.76 & 78.35 & 77.23 & 69.63 & 73.24 & 98.30 & 93.70 & 95.94 & **95.17** & 97.77 & **96.45** \\
**+DePT** & **84.85** & **74.82** & **79.52** & **77.87** & **70.23** & **73.85** & **98.53** & **95.03** & **96.75** & 95.03 & **97.83** & 96.41 \\ \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{StanfordCars} & \multicolumn{3}{c|}{Flowers102} & \multicolumn{3}{c|}{Food101} & \multicolumn{3}{c}{FGCVAEircraft} \\ \cline{2-13} & Base & New & H & Base & New & H & Base & New & H & Base & New & H \\ \hline CoOp [44] & 74.30 & 72.10 & 73.18 & 97.07 & **74.33** & **84.19** & 90.43 & 90.97 & 90.70 & 31.70 & 17.30 & 22.38 \\
**+DePT** & **79.67** & **72.40** & **75.86** & **98.20** & 72.00 & 83.08 & **90.43** & **91.33** & **90.88** & **42.53** & **22.53** & **29.46** \\ \hline CoCoOp [43] & 70.77 & 72.50 & 71.62 & 95.03 & 69.07 & 80.00 & **90.57** & 91.20 & **90.88** & 35.63 & **32.70** & 34.10 \\
**+DePT** & **79.87** & **73.33** & **76.46** & **98.33** & **72.57** & **83.51** & 90.30 & **91.30** & 90.80 & **43.07** & 31.30 & **36.25** \\ \hline KgCoOp [37] & 71.13 & 74.67 & 72.86 & 95.90 & 74.83 & 84.07 & 90.47 & **91.60** & 91.03 & 35.10 & **35.20** & 35.15 \\
**+DePT** & **79.13** & **75.47** & **77.26** & **98.00** & **76.37** & **85.84** & **90.50** & **91.60** & **91.05** & **43.20** & 34.83 & **38.57** \\ \hline MaPLe [19] & 76.30 & **72.53** & 74.37 & 97.23 & 72.07 & 82.78 & 90.30 & **91.53** & 90.91 & 40.57 & **36.47** & **38.31** \\
**+DePT** & **80.93** & 71.73 & **76.06** & **98.03** & **73.17** & **83.79** & **90.33** & **91.53** & **90.93** & **44.53** & 32.80 & 37.78 \\ \hline \multicolumn{13}{c|}{SUN397} & DTD & \multicolumn{3}{c|}{FuroSAT} & \multicolumn{3}{c|}{UCF101} \\ \cline{2-13} & Base & New & H & Base & New & H & Base & New & H & Base & New & H \\ \hline CoOp [44] & 81.13 & **76.07** & 78.52 & 79.33 & 49.70 & 61.11 & **89.35** & 57.30 & 69.82 & 83.87 & 69.80 & 76.19 \\
**+DePT** & **82.37** & 75.07 & **78.55** & **83.20** & **56.13** & **67.04** & 88.27 & **66.27** & **75.70** & **85.43** & **72.17** & **78.24** \\ \hline CoCoOp [43] & 79.50 & 76.27 & 77.85 & 77.37 & 52.97 & 62.88 & 87.97 & 63.63 & 73.85 & 82.33 & 72.40 & 77.05 \\
**+DePT** & **82.20** & **76.73** & **79.37** & **82.77** & **55.40** & **66.37** & **90.27** & **66.87** & **76.82** & **85.70** & **72.80** & **78.73** \\ \hline KgCoOp [37] & 80.40 & 77.30 & 78.82 & 78.27 & 57.93 & 66.58 & 85.77 & 63.40 & 72.91 & 83.73 & 75.40 & 79.35 \\
**+DePT** & **82.33** & **77.80** & **80.00** & **82.20** & **59.13** & **68.78** & **89.03** & **71.07** & **79.04** & **85.80** & **77.23** & **81.29** \\ \hline MaPLe [19] & 81.93 & **76.50** & 79.12 & 81.93 & 58.20 & 68.06 & **94.67** & 66.73 & 78.28 & 85.30 & 76.23 & 80.51 \\
**+DePT** & **82.90** & 76.40 & **79.52** & **83.87** & **59.93** & **69.91** & 94.43 & **76.23** & **84.36** & **86.87** & **78.10** &
task-shared knowledge learned by MaPLe is not generalizable enough under distribution shift.
**Domain Generalization.** The domain generalization setting evaluates whether the model learned on the source domain can generalize to unseen domains, i.e., there is a _domain shift_ between base and new tasks. Following the baselines, we regard the ImgNet dataset as the source domain and other four ImgNet variants as target domains. Table 4 presents the domain generalization performance of the four baselines w/ or w/o our DePT. As seen, DePT still maintains the advantages as in previous experiments. Specifically, our DePT consistently improves the accuracy on the source domain without compromising the performance on the target domains in most cases for all baselines, which reveals that the models learned with DePT are more domain-generalizable. Moreover, we see that the performance gains achieved by DePT in Table 4 are not as significant as that achieved in Tables 2 and 3. Possible reasons are twofold. **1)** The domain generalization setting is more challenging compared to the base-to-new/cross-dataset generalization setting, as shown in prior works [37, 43, 45]. **2)** It is difficult for DePT to simultaneously capture task-shard and domain-agnostic knowledge without accessing the data of target domains during prompt tuning.
## 4 Related Work
**Vision-Language Pre-training.** By establishing a connection between images and natural language from countless image-text pairs, large vision-language pre-trained models (VLPMs) have shown strong zero-shot generalization on various downstream tasks. Generally, VLPMs leverage three types of pre-text tasks for modeling the semantic correspondence between the vision and language modalities, including **1)** image-text matching [17, 20], **2)** contrastive learning [16, 22], and **3)** masked vision/language prediction [20, 23]. In this work, we mainly focus on VLPMs establishing image-text alignment with contrastive learning, motivated by their excellent generalization ability to downstream tasks. For example, after seeing 400 million text-image pairs, CLIP [29] learns an alignment between visual and textual features output by a image encoder and a text encoder respectively. Beyond recognition [19, 41, 43], CLIP also show great potential for other downstream applications, such as such as image manipulation [28, 33], video-text retrieval [24, 4], and dense prediction [30, 42].
**Task Adaptation on VLPMs.** The remarkable success of VLPMs have brought new light but also pose a new question: how to efficiently adapt the knowledge from VLPMs to different downstream tasks? The most direct solution is _full-finetuning_, which fixes the architecture of VLPMs and tunes all the parameters on the target task. While the results are impressive, this line of work becomes prohibitively expensive with the ever-increasing size of parameters of VLPMs. To remedy this, _partial-finetuning_ has been proposed to update only a small number of extra parameters (a.k.a. _adapters_) while keeping most pre-trained parameters frozen. Representative approaches are Adapters [13], CLIP-Adapter [9], LoRA [14], BitFit [38] and Diff-pruning [10].
**Prompt Tuning.** Inspired from the field of NLP, a rich line of recent works adapts VLPMs to downstream tasks by learning task-specific prompts in an end-to-end manner [3, 45, 34]. Since only a handful of labeled examples are available during training, prompt tuning can be regarded as few-shot learning task [39, 40]. In particular, CoOp [44] performs task adaptation by optimizing a set of prompt vectors at the language branch of CLIP. While simple and effective, CoOp tends to achieve poor generalization on new tasks after overfitting to the base (or target) task. To overcome this issue, CoCoOp [43] learns a lightweight meta-net to generate an input-conditional token for each input image. By reducing the discrepancy between the hand-crafted prompt and the trainable prompt tokens, KgCoOp [43] significantly improves the generalization of the adapted models on new tasks. Similarly, ProGrad [45] mitigates the overfitting issue by regularizing each tuning step that is not to conflict with the general knowledge of the hand-crafted prompt. Unlike the aforementioned methods that mainly focus on developing efficient textual prompts, a rich line of works also explores visual prompts for task adaptation [15, 18]. In the recent work [19], authors propose the first multi-model prompt tuning method MaPLe [19]. By adding trainable prompts at both the language and text branches of CLIP, MaPLe yields remarkable performance on both the base task and new tasks. In this work, we propose DePT to tackle the base-new tradeoff problem in prompt tuning. More importantly, DePT is orthogonal to existing prompt tuning methods, hence it can improve all of them.
## 5 Conclusions
This work proposes Decoupled Prompt Tuning (DePT), a novel framework tackling the Base-New Tradeoff (BNT) problem in prompt tuning from a feature decoupling perspective. First, we provide an insightful view to analyze the BNT problem, and for the first time reveal that the BNT stems from the channel bias issue. Second, we present the DePT framework for tackling the BNT problem, and DePT is orthogonal to existing prompt tuning methods. Finally, we apply our DePT to a broad spectrum of baseline methods, and the obtained experimental results on 11 datasets validate the flexibility and effectiveness of DePT. We hope this work can bring some inspiration to prompt tuning and other related fields. To facilitate future research, we have made our code and pretrained models publicly available at: [https://github.com/Koorye/DePT](https://github.com/Koorye/DePT). |
2309.04130 | Gravitational wave memory for a class of static and spherically
symmetric spacetimes | This article aims at comparing gravitational wave memory effect in a
Schwarzschild spacetime with that of other compact objects with static and
spherically symmetric spacetime, with the purpose of proposing a procedure for
differentiating between various compact object geometries. We do this by
considering the relative evolution of two nearby test geodesics with in
different backgrounds in the presence and absence of a gravitational wave pulse
and comparing them. Memory effect due to a gravitational wave would ensure that
there is a permanent effect on each spacetime and the corresponding geodesic
evolution, being metric dependent, would display distinct results in each case.
For a complete picture, we have considered both displacement and velocity
memory effect in each geometry. | Soumya Bhattacharya, Shramana Ghosh | 2023-09-08T05:08:52Z | http://arxiv.org/abs/2309.04130v1 | # Gravitational wave memory for a class of static and spherically symmetric spacetimes
###### Abstract
This article aims at comparing gravitational wave memory effect in a Schwarzschild spacetime with that of other compact objects with static and spherically symmetric spacetime, with the purpose of proposing a procedure for differentiating between various compact object geometries. We do this by considering the relative evolution of two nearby test geodesics with in different backgrounds in the presence and absence of a gravitational wave pulse and comparing them. Memory effect due to a gravitational wave would ensure that there is a permanent effect on each spacetime and the corresponding geodesic evolution, being metric dependent, would display distinct results in each case. For a complete picture, we have considered both displacement and velocity memory effect in each geometry.
## I Introduction
Einstein's theory of General Relativity has been the most successful theory of gravity till date. However, it fails to explain with certainty regions of extreme gravity, for example: in the vicinity of singularities and the starting point of the universe itself. With the rise of observational gravitational wave (GW) data from detectors like LIGO [7; 8] as well as the observations of the shadow of supermassive compact central objects e.g., the M87* and the SgrA* by the Event Horizon Telescope (EHT) [9; 10; 11; 12; 13; 14; 15], we now have the means necessary to propose and possibly check alternative theories of gravity by looking for higher dimensions or changes in the known structure or behaviour of black hole spacetime. Such predictions of black hole like compact objects have lead to new studies in Exotic Compact Objects (ECOs) [16; 17; 18; 19; 20; 21; 22; 23; 24; 25] arising from theories of quantum fluctuations and/or dark matter. These objects behave as black holes for solar system tests of gravity but they may display distinct features when probed using strong field tests of gravity, such as gravitational waves. Although black holes are a widely studied exact solution of Einstein's field equations and we have a huge inventory of data that points to their existence, we still cannot say for sure because of the lack of experimental/observational evidence of the event horizon which is a defining feature of any black hole. Hence it is interesting to study to what extent the current or upcoming experiments or detectors can observationally establish the existence of black holes given that there are so many possible black hole mimickers. The first image released of the supermassive black hole at the centre of M87*, an elliptical galaxy did not just boost research in the realm of black hole physics but also gave rise to a very fundamental question,i.e. in the absence of any proof of an event horizon is it really a black hole? Hence, although black hole research is a prominent field of study today, given the ever increasing volume of gravitational wave data, this particular question arises because all that we can verify is the existence of a photon sphere. However, the proof of photon sphere alone is not enough for the existence of a black hole. In the darkness beyond a photon sphere, many postulate that other exotic compact objects may exist, although their stability studies are not at par with those of a black hole.
There are two ways in which this question can be resolved - either we attempt to prove the existence of the horizon or we find some procedure by which we can differentiate between various compact object geometries. In this work, we try to pursue the latter path and present a comparative study between various ECO geometries. We take gravitational wave _memory effect_ as our measurable phenomenon and consider static and spherically symmetric solutions of various models of wormholes and other theories of gravity for comparison with a simple Schwarzschild black hole. As these regions of extreme gravity are perfect laboratories for studying higher dimensional theories of gravity hence we consider those models as well. However, we must warn that current gravitational wave detectors like LIGO do not have the sensitivity required to detect such a minute difference caused due to memory effect. To demonstrate this we try to find the order of strain sensitivity required to detect memory effect with the current strain sensitivity of LIGO. But we do expect next generation of detectors to be able to detect gravitational wave memory effect which will enable us to
realise this study in the experimental front. The improvement in the detection prospects for future ground based GW detectors and also the launch of the space based-detector LISA in the near future, may provide us an opportunity to observe GW _memory effect_[26]. This effect refers to the lasting change in the relative distance between test particles when a GW passes through the ambient spacetime [27; 28]. The memory effect encompasses both the strong-field, as well as non-linear aspects of general relativity, which is yet to be observed.
Keeping in mind the above importance we study memory effect for various spherically symmetric spacetimes of GR and theories beyond GR. We analyse the memory effect here by studying the geodesic evolution of two test particles both in the absence and in the presence of a gravitational wave and making a comparative study. We believe this study will shed some important light on our understanding of memory effect as well as some broad aspects like the strong gravity regime of GR and theories beyond GR.
This paper is organized as follows: First we introduce some basic features of static, spherically symmetric geometry in section II. We then study the gravitational wave memory for various static and spherically symmetric black hole geometries in section III. In section IV, gravitational wave memory have been studied for various static, spherically symmetric wormhole solutions which are possible candidates of black hole mimickers. Then we make a comparative study of gravitational wave memory in section V. Finally, concluding remarks have been given in section VI.
II Geodesic evolution in static and spherically symmetric spacetime in the presence and absence of gravitational waves
In this section, we briefly discuss the geodesic equations for any static and spherically symmetric spacetime in presence and absence of gravitational waves. The general equations we get here will be used in later sections to describe memory effect for various static, spherically symmetric backgrounds. So first let us write down the line element that describes any static and spherically symmetric spacetime
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{g(r)}+r^{2}d\Omega_{2}^{2} \tag{1}\]
If \(f(r)=0\) at some value of \(r=r_{H}\) then the geometry described by eqn 1 represents a black hole. However if there is no horizon, or singularity present in the solution, and the solution is asymptotically flat then the solution represents a geometry featuring two asymptotic regions connected by a bridge or a wormhole solution. We now consider the geodesic equations for this metric. We know for arbitrarily general spacetime, described by spacetime coordinates \(x^{a}\), the geodesic equation for a free-falling object in this spacetime can be constructed as follows
\[\frac{d^{2}x^{a}}{d\tau^{2}}+\Gamma^{a}_{\ bc}\frac{dx^{b}}{d\tau}\frac{dx^{c }}{d\tau}=0 \tag{2}\]
Where \(\Gamma^{a}_{\ bc}\) are the affine connections corresponding to the arbitrary spacetime geometry and \(\tau\) is the affine parameter. So the geodesic equations corresponding to the line element 1 look like the following (considering equatorial plane \(\theta=\pi/2\)):
\[\ddot{t}+\frac{f^{\prime}(r)}{f(r)}\ \dot{r}\dot{t}=0 \tag{3}\] \[\ddot{r}-\frac{g^{\prime}(r)}{2g(r)}\ \dot{r}^{2}+\frac{f^{ \prime}(r)g(r)}{2}\ \dot{t}^{2}-rg(r)\ \dot{\phi}^{2}=0\] (4) \[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi}=0 \tag{5}\]
The convention we follow is: dot represents derivative with respect to proper time \(\tau\) and dash represents derivative with respect to the associated coordinate, for example \(f^{\prime}(r)=df/dr\) and \(\dot{r}=dr/d\tau\). To solve these geodesic equations, we need to specify initial conditions which will be determined by the velocity normalisation condition.
\[g_{ab}u^{a}u^{b}=-1 \tag{6}\]
which for our case reduces to
\[-f(r)\dot{t}^{2}+\frac{1}{g(r)}\dot{r}^{2}+r^{2}\dot{\phi}^{2}=-1 \tag{7}\]
Now, for our gravitational wave we take the pulse profile
\[H(t)=A\ \text{sech}^{2}(t-t_{0}) \tag{8}\]
If we consider the cross-components of the GW to be zero for simplicity then we can write the above line element in 1 in transverse-traceless (TT) gauge as follows
\[ds^{2} = -f(r)dt^{2}+\frac{dr^{2}}{g(r)}+\left(r^{2}+rH(t)\right)d\theta^{2}+ \left(r^{2}-rH(t)\right)\sin^{2}\theta d\phi^{2} \tag{9}\]
Corresponding to this, the geodesic equations would be
\[\ddot{t}-\frac{rH^{\prime}(t)}{2f(r)}\ \dot{\phi}^{2}+\frac{f^{ \prime}(r)}{f(r)}\ \dot{r}\dot{t}=0 \tag{10}\] \[\ddot{r}-\frac{g^{\prime}(r)}{2g(r)}\ \dot{r}^{2}+\frac{f^{ \prime}(r)g(r)}{2}\ \dot{t}^{2}+\left(\frac{H(t)-2r}{2}\right)g(r)\ \dot{\phi}^{2}=0\] (11) \[\ddot{\phi}+\left(\frac{2r-H(t)}{r^{2}-rH(t)}\right)\ \dot{r}\dot{\phi}-\left(\frac{H^{ \prime}(t)}{r-H(t)}\right)\ \dot{t}\dot{\phi}=0 \tag{12}\]
We will now make use of these equations to demonstrate memory effect in specific spacetime geometries. For standardisation purposes, we have taken \(M=1\) in all our calculations and we confine our calculations in the equatorial plane i.e. \(\theta=\pi/2\) for simplicity but without losing any generality. Another important point to be noted here is that the parameter \(\tau\) we will use for our work is an approximate affine parameter, i.e. it remains affine only till a certain range in which the coordinate \(t\) in the presence of a gravitational wave shows linear behaviour. Since we are considering only the range of \(\tau\) where the gravitational wave impacts our test particle geodesic hence this approximation is fair.
## III Memory effect in static and spherically symmetric black hole solutions
In this section we consider some static spherically symmetric black hole (BH) solutions starting from Schwarzschild solution as well as some other BH solutions from theories beyond GR and explore the GW memory effects for these black hole backgrounds.
### Memory Effect in Schwarschild spacetime
The Schwarzschild solution is a static, spherically symmetric solution of the vacuum Einstein equations. This solution represents the black hole solution as well as the exterior geometry of any spherically symmetric gravitational source. The uniqueness of this solution is guaranteed by Birkhoff's theorem which states that any spherically symmetric solution of the vacuum field equations must be static and asymptotically flat and hence must be described by the line element 1, for which \(f(r)\) and \(g(r)\) will have the following form
\[f(r)=g(r)=\left(1-\frac{2}{r}\right) \tag{13}\]
Let us demonstrate Memory effect in the simplest case of a Schwarzschild black hole background. We follow the idea of memory effect from the work of Braginskij [39] which talks about how memory effect can be interpreted simply as a Newtonian force acting between two particles when a gravitational wave passes through them and thus integrating the force equation using the appropriate conditions shows us that both position and velocity of a particle with respect to another particle considered to be at the origin will change. Hence here we will consider two test particles in the presence and absence of a gravitational wave. In both cases, we will take the appropriate metric and solve the geodesic equations numerically to determine the trajectory of the particles. What we will emphasize on is the difference in the co-ordinates of the two particles. When a gravitational wave passes, it causes a permanent change in the co-moving distance between two particles, this is known as displacement memory effect. Hence, by tracking the relative distance between two particles as they evolve in time we expect to see that the relative co-ordinate separation between the two particles would be different in the presence of a gravitational wave as compared to that in the absence of a gravitational wave. Let us first solve the geodesic equations in the absence of a gravitational wave. We use the velocity normalisation condition 6 to determine the initial conditions. As we have three coordinates here (\(t,\ r,\ \phi\)), we will require six initial conditions to determine the geodesic solution. We can choose any five of those initial conditions and equation 6 will determine the remaining initial condition such that we get a time-like geodesic curve. The geodesic equations (3 - 5) will take the following form
\[\ddot{t}+\frac{2}{r^{2}-2r}\ \dot{r}\dot{t}=0 \tag{14}\]
\[\ddot{r}-\frac{\dot{r}^{2}}{r^{2}-2r}+\frac{1}{r^{2}}\left(1-\frac{2}{r}\right)\ \dot{t}^{2}-(r-2)\ \dot{\phi}^{2}=0 \tag{15}\]
\[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi}=0 \tag{16}\]
Now, consider a gravitational wave that looks like 8. We could have also taken any other pulse profile, for example, a Gaussian profile. In this study we have taken \(A=10\) and \(t_{0}=9\) for our GW. We can write the Schwarzschild line element in presence of gravitational waves as 9 with \(f(r)\) and \(g(r)\) shown in 13. The geodesic equations (10-12) will take the following form
\[\ddot{t}+\frac{2}{r^{2}-2r}\ \dot{r}\dot{t}+\frac{r^{2}H^{\prime}(t)}{2(r-2)}\ \dot{\phi}^{2}=0 \tag{17}\]
\[\ddot{r}-\frac{\dot{r}^{2}}{(r^{2}-2r)}+\frac{1}{r^{2}}\left(1-\frac{2}{r} \right)\ \dot{t}^{2}+\left(\frac{H(t)-2r}{2}\right)\left(1-\frac{2}{r}\right)\ \dot{\phi}^{2}=0 \tag{18}\]
\[\ddot{\phi}+\left(\frac{2r-H(t)}{r^{2}-rH(t)}\right)\ \dot{r}\dot{\phi}+ \left(\frac{H^{\prime}(t)}{H(t)-r}\right)\ \dot{t}\dot{\phi}=0 \tag{19}\]
We now compare the relative differences in \(t,\ r\) and velocity, where velocity is defined as \(v=d(\Delta r)/d\tau\), between these two test particles in the following manner
\[\Delta x_{\rm withoutGW}=\Delta x_{\rm geodesic2}-\Delta x_{\rm geodesic1} \tag{20}\]
\[\Delta x_{\rm withGW}=\Delta x_{\rm geodesic2}-\Delta x_{\rm geodesic1} \tag{21}\]
Here \(x\) denotes any of the \(t,\ r\) or \(v\) (\(=d(\Delta r)/d\tau\)) coordinates. We then compare these co-ordinate differences by plotting them together to show what effect a gravitational wave would have on the relative geodesic evolution of two particles in this particular background. For example,we compare \(\Delta t_{\rm without\ GW}\) and \(\Delta t_{\rm with\ GW}\) depicted in red and blue respectively to demonstrate memory effect and continue the same process with the other two co-ordinate differences.
Consider the plot for \(v=dr/d\tau\) against the proper time \(\tau\). Here, we do see memory effect manifested as deviation in the velocity but as we asymptotically approach larger \(\tau\) values, we see this deviation disappear which might be a consequence of the fact that \(\tau\) is not exactly an affine parameter for larger values.
### Braneworld Black holes
Here we consider brane localised black holes in the Randall-Sundrum braneworld scenario which may have reflective boundary conditions arising due to quantum corrections near the horizon. As is the standard braneworld scenario, all matter exists on the four dimensional brane and only gravity can propagate through the five dimensional bulk [30; 31; 32]. Thus the effective gravitational field equations can be derived using the Gauss-Codazzi formalism [33; 34; 35; 36; 37] and can then be solved for a static and spherically symmetric spacetime whose metric looks similar to that of a Reissner-Nordstrom black hole with its electric charge replaced by a Tidal charge \(Q\). This tidal charge is essentially a manifestation of the presence of higher dimensions and thus observational indications of this tidal charge can open a window into the study of such higher dimensional theories. Although the electric charge in Reissner-Nordstrom black hole can take only positive electric charge, here the tidal charge parameter can take both positive and negative values. The Braneworld black hole metric given in [29] looks like 1 with \(f(r)\) and \(g(r)\) will have the following form:
\[f(r)=g(r)=1-\frac{2}{r}-\frac{Q}{r^{2}} \tag{22}\]
The overall sign of the \(Q/r^{2}\) term in the expression for \(f(r)\) determines whether it mimics a Reissner-Nordstom black hole (if the sign is positive) or if it indicates higher dimensional black hole(if the sign is negative). This term originates from the projection of the bulk Weyl tensor which is the correction term in the effective gravitational field equations in braneworld scenario [29]. The tidal charge determines how much distance the horizon penetrates into the bulk. In particular, as the tidal charge parameter increases, the extent of the horizon in the bulk spacetime decreases, i.e., the black hole becomes more and more localized.
Figure 1: Memory effect in Schwarzschild spacetime
For the given metric, in the absence of a gravitational wave, the geodesic equations are:
\[\ddot{t}+\frac{2}{r}\left(\frac{r+Q}{r^{2}-2r-Q}\right)\ \dot{r}t =0 \tag{23}\] \[\ddot{r}+\frac{(r+Q)(r^{2}-2r-Q)}{r^{5}}\ \dot{t}^{2}-\frac{1}{r} \left(\frac{r+Q}{r^{2}-2r-Q}\right)\ \dot{r}^{2}-r\left(1-\frac{2}{r}-\frac{Q}{r^{2}}\right)\ \dot{\phi}^{2} =0\] (24) \[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi} =0 \tag{25}\]
In presence of a gravitational wave, equations (10-12) will take the following form
\[\ddot{t}+\frac{2}{r}\left(\frac{r+Q}{r^{2}-2r-Q}\right)\ \dot{r} \dot{t}-\frac{r^{3}H^{\prime}(t)}{2(r^{2}-2r-Q)}\dot{\phi}^{2} =0 \tag{26}\] \[\ddot{r}+\frac{\left(-Q+r^{2}-2r\right)\dot{\phi}^{2}(h(t)-2r)}{2 r^{2}}+\frac{(Q+r)\dot{r}^{2}}{r\left(Q-r^{2}+2r\right)}+\frac{\left(Q+r\right) \left(-Q+r^{2}-2r\right)\dot{t}^{2}}{r^{5}} =0\] (27) \[\ddot{\phi}+\left(\frac{2r\dot{r}-\dot{r}H(t)-rH^{\prime}(t) \dot{t}}{r^{2}-rH(t)}\right)\ \dot{\phi} =0 \tag{28}\]
Here, we have a charge parameter named \(Q\) which can have both positive and negative values but for our study we consider only positive values.
Figure 2: Memory effect in Braneworld Black Hole with \(Q=+0.1\)
Maeda Dadhich solution
A static and spherically symmetric black hole solution in Einstein-Gauss-Bonnet theory of gravity in \(n(>6)\) dimensional Kaluza-Klein spacetime was given by Hideki Maeda and Naresh Dadhich [1]. The line element for this solution [2] is of the form 1, with \(f(r)\) and \(g(r)\) are given by,
\[f(r)=g(r)=1-\frac{2G}{r}+\frac{4G^{2}\tilde{q}}{r^{2}} \tag{29}\]
We briefly mention this metric here because it is similar in form to the braneworld black hole solution, the third term in \(f(r)\) here is similar to the charge term in the braneworld black hole metric and hence would display identical memory effect in the presence of a gravitational wave.
### Charged Dilaton Black Holes
Here we consider the static charged black hole solution in string theory which is valid for curvature below the Planck scale and is labelled by its mass, charge and asymptotic value of the scalar field called the dilaton field \(\phi\). The 4-dimensional low energy Lagrangian that gives rise to this solution [38] is
\[S=\int d^{4}x\sqrt{-g}[-R+2(\nabla\phi)^{2}+e^{-2\phi}F^{2}] \tag{30}\]
where \(F_{\mu\nu}\) is the Maxwell field associated with a U(1) subgroup of Spin(32)/\(Z_{2}\) and the remaining gauge fields and anti-symmetric tensor field have been set to zero. When we extremise this Lagrangian and try to obtain a static and spherically symmetric solution with asymptotic flatness to the corresponding field equations, we get
\[ds^{2}=-\left(1-\frac{2}{r}\right)dt^{2}+\frac{dr^{2}}{1-\frac{2}{r}}+(r^{2}+ 2Dr)d\Omega_{2}^{2} \tag{31}\]
where
\[D=-\frac{Q^{2}e^{2\phi_{0}}}{M} \tag{32}\]
Here \(\phi_{0}\) is the asymptotic value of the charged dilaton field and \(Q\) is magnetic charge. Thus, \(D\) is a constant here and we have studied memory effect for different values of \(D\). The geodesic equations in the absence of gravitational waves are:
\[\ddot{t}+\frac{2}{r(r-2)}\ \dot{r}\dot{t} =0 \tag{33}\] \[\ddot{r}-\frac{\dot{r}^{2}}{r(r-2)}+\frac{1}{r^{2}}\left(1-\frac{ 2}{r}\right)\ \dot{t}^{2}-(r+D)\left(1-\frac{2}{r}\right)\ \dot{\phi}^{2} =0\] (34) \[\ddot{\phi}+2\left(\frac{r+D}{r^{2}+2Dr}\right)\ \dot{r}\dot{\phi} =0 \tag{35}\]
In the presence of gravitational waves the metric looks like:
\[ds^{2}=-\left(1-\frac{2}{r}\right)dt^{2}+\frac{dr^{2}}{\left(1-\frac{2}{r} \right)}+\left(r^{2}+2Dr+rH(t)\right)d\theta^{2}+\left(r^{2}+2Dr-rH(t)\right) d\phi^{2} \tag{36}\]
And the corresponding geodesic equations look like
\[\ddot{t}+\frac{2}{r(r-2)}\ \dot{r}\dot{t}-\left(\frac{r^{2}H^{\prime}(t)} {2r-4}\right)\ \dot{\phi}^{2} =0 \tag{37}\] \[\ddot{r}-\frac{\dot{r}^{2}}{r(r-2)}+\frac{\dot{t}^{2}}{r^{2}} \left(1-\frac{2}{r}\right)-\left(\frac{2r+2D-H(t)}{2}\right)\left(1-\frac{2}{ r}\right)\ \phi^{2} =0\] (38) \[\ddot{\phi}+\left(\frac{2r\dot{r}+2D\ddot{r}-H(t)\dot{r}-H^{ \prime}(t)\dot{t}r}{r^{2}+2Dr-rH(t)}\right)\ \dot{\phi} =0 \tag{39}\]
The displacement and velocity memory effect has been depicted in figure 3.
A comparison for different values of the parameter \(D\) is shown in figure 4.
Figure 4: Comparison between different values of the parameter \(D\) for Charged Dilaton Black Hole solution
Figure 3: Memory effect in Charged Dilaton Black Hole solution with \(D=0.01\)
### Boulware-Deser black hole solution
The spherically symmetric static solution of Einstein-Gauss-Bonnet theory was obtained by Boulware and Deser in [3] and a simpler form of the metric is given in [4]
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\psi^{2}+r^{2}\sin^{2}\psi d\theta^ {2}+r^{2}\sin^{2}\psi\sin^{2}\theta d\phi^{2} \tag{40}\]
where
\[f(r)=1+\frac{r^{2}}{4\alpha}\left(1+\sigma\sqrt{1+\frac{16\alpha M}{r^{4}}+ \frac{4\alpha\Lambda}{3}}\right) \tag{41}\]
Here, \(\sigma^{2}=1\) and \(\Lambda\) is the cosmological constant. This is the most general spherically symmetric solution to the Einstein-Gauss-Bonnet theory, on the condition that the metric is smooth everywhere. For \(\alpha>0\) and \(\sigma=-1\), this solution represents a black hole whose horizon is located at \(r_{+}=\sqrt{2(M-\alpha)}\), given that \(\Lambda=0\). However, for \(\alpha>0,\ M>0\) and \(\sigma=+1\), this solution has a naked singularity at \(r=0\). In this paper, we study the former case of a black hole solution. In that case, \(f(r)\) takes the form (putting \(M=1\)),
\[f(r)=1+\frac{r^{2}}{4\alpha}\left(1-\sqrt{1+\frac{16\alpha}{r^{4}}}\right) \tag{42}\]
Let us consider that \(\theta\) is fixed at \(\pi/2\). Our metric then becomes
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\psi^{2}+r^{2}\sin^{2}\psi d\phi^ {2} \tag{43}\]
In the absence of a gravitational wave, the geodesic equations look like:
\[\ddot{t}+\frac{r\left(2-2\sqrt{\frac{16\alpha}{r^{4}}+1}\right)}{ \sqrt{\frac{16\alpha}{r^{4}}+1}\left(r^{2}\left(\sqrt{\frac{16\alpha}{r^{4}}+1 }-1\right)-4\alpha\right)}\ \dot{r}\dot{t}=0 \tag{44}\] \[\ddot{r}-\frac{\dot{r}^{2}}{2}\frac{r\left(2-2\sqrt{\frac{16 \alpha}{r^{4}}+1}\right)}{\sqrt{\frac{16\alpha}{r^{4}}+1}\left(r^{2}\left( \sqrt{\frac{16\alpha}{r^{4}}+1}-1\right)-4\alpha\right)}+\frac{r\left(\sqrt{ \frac{16\alpha}{r^{4}}+1}-1\right)\left(r^{2}\left(\sqrt{\frac{16\alpha}{r^{4} }+1}-1\right)-4\alpha\right)}{16\alpha^{2}\sqrt{\frac{16\alpha}{r^{4}}+1}}\ \dot{t}^{2}-\] \[r\left(1+\frac{r^{2}}{4\alpha}\left(1-\sqrt{1+\frac{16\alpha M}{ r^{4}}}\right)\right)\left(\dot{\psi}^{2}+\sin^{2}\psi\ \dot{\phi}^{2}\right)=0\] (45) \[\ddot{\psi}+\frac{2}{r}\ \dot{\psi}\dot{r}-\sin\psi\cos\psi\ \dot{ \phi}^{2}=0\] (46) \[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi}+\frac{2\cos\psi}{\sin\psi} \ \dot{\phi}\dot{\psi}=0 \tag{47}\]
Now in the presence of a gravitational wave, the metric is given by,
\[ds^{2}=-\left(1+\frac{r^{2}}{4\alpha}\left(1-\sqrt{1+\frac{16\alpha}{r^{4}}} \right)\right)dt^{2}+\frac{dr^{2}}{1+\frac{r^{2}}{4\alpha}\left(1-\sqrt{1+ \frac{16\alpha}{r^{4}}}\right)}+\left(r^{2}+rH(t)\right)\psi^{2}+\left(r^{2}- rH(t)\right)\sin^{2}\psi d\phi^{2} \tag{48}\]
The geodesic equations then look like
\[\ddot{t}+\frac{r\left(2-2\sqrt{\frac{16\alpha}{r^{4}}+1}\right)}{ \sqrt{\frac{16\alpha}{r^{4}}+1}\left(r^{2}\left(\sqrt{\frac{16\alpha}{r^{4}}+1}- 1\right)-4\alpha\right)}\;\dot{r}\dot{t}+\frac{rH^{\prime}(t)}{2}\frac{4\alpha} {r^{2}\left(\sqrt{\frac{16\alpha}{r^{4}}+1}-1\right)-4\alpha}\left(\dot{\psi} ^{2}-\sin^{2}\psi\;\dot{\phi}^{2}\right)=0 \tag{49}\] \[\ddot{r}-\frac{\dot{r}^{2}}{2}\frac{r\left(2-2\sqrt{\frac{16 \alpha}{r^{4}}+1}\right)}{\sqrt{\frac{16\alpha}{r^{4}}+1}\left(r^{2}\left( \sqrt{\frac{16\alpha}{r^{4}}+1}-1\right)-4\alpha\right)}+\frac{r\left(\sqrt{ \frac{16\alpha}{r^{4}}+1}-1\right)\left(r^{2}\left(\sqrt{\frac{16\alpha}{r^{4} }+1}-1\right)-4\alpha\right)}{16\alpha^{2}\sqrt{\frac{16\alpha}{r^{4}}+1}}\; \dot{t}^{2}\] \[-\left(1+\frac{r^{2}}{4\alpha}\left(1-\sqrt{1+\frac{16\alpha M}{r ^{4}}}\right)\right)\left(\left(2r+H(t)\right)\frac{\dot{\psi}^{2}}{2}+\sin^{ 2}\psi\left(2r-H(t)\right)\frac{\dot{\phi}^{2}}{2}\right)=0\] (50) \[\ddot{\psi}+\left(\frac{2r\dot{r}+\dot{r}H(t)+rH^{\prime}(t)\dot{ t}}{r^{2}+rH(t)}\right)\dot{\psi}-\sin\psi\cos\psi\left(\frac{r^{2}-rH(t)}{r^{2}+rH(t)} \right)\dot{\phi}^{2}=0\] (51) \[\ddot{\phi}+2\frac{\cos\psi}{\sin\psi}\;\dot{\phi}\dot{\psi}+ \left(\frac{2r\dot{r}-\dot{r}H(t)-rH^{\prime}(t)\dot{t}}{r^{2}-rH(t)}\right) \dot{\phi}=0 \tag{52}\]
The displacement and velocity memory effect for the Boulware-Deser solution is depicted in figure 5. From these figures one can see that the deviation of \(\Delta r\) with \(\tau\) is extremely small and from the general behaviour in previous plots we can predict that the corresponding velocity variations would be even smaller in scale. Hence in the velocity memory plot in figure (d)d, the memory effect is not quite visible because the deviation is much smaller than the vertical scale of the plot. However, when we compared the exact numerical values, we did notice some deviation as can be seen from the magnified region as shown in the inset of figure (d)d.
Figure 5: Memory effect in Boulware-Deser solution with \(\alpha=0.01\)
Memory effect in static and spherically symmetric wormhole solutions
Wormholes are solutions of Einstein's field equations that are characterised by the absence of an event horizon and the presence of a throat connecting two distant regions in spacetime. These are usually unstable structures that require exotic matter to sustain and hence are till-date hypothetical in nature. However they still serve as good toy models to study regions of extreme gravity. Here we consider some static and spherically symmetric wormhole solutions and explore the GW memory by analysing the geodesic evolution as has been done for Schwarzschild case.
### Damour Solodukhin Wormhole
Let us consider the simplest spherically symmetric wormhole solution which was given by Damour and Solodukhin [40] whose metric is given by,
\[ds^{2}=-\Big{(}f(r)+\lambda^{2}\Big{)}dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_ {2}^{2} \tag{53}\]
here, \(f(r)=1-2/r\). This is a wormhole solution as is evident from the fact that there is no event horizon because the null horizon that we get from \(g^{rr}\) does not coincide with the killing horizon. However, it becomes an event horizon when \(\lambda=0\) in which case we get back the Schwarzschild metric. For non-zero values of \(\lambda\) we get the usual wormhole structure, i.e. absence of an event horizon and a throat region at \(r=2M\). This is an example of a Lorentzian wormhole [41]. The Damour-Solodukhin metric also exhibits bizarre features, for example, the \(G_{tt}\) component vanishes which implies that matter with vanishing energy density is required to sustain such a structure [6]. Let us consider some substitutions in this metric as \(t\) here does not correspond to asymptotic observer, therefore, performing \(t\to 1/\sqrt{1+\lambda^{2}}\) and \(M\to M(1+\lambda^{2})\). The metric now becomes,
\[ds^{2}=-\left(1-\frac{2M}{r}\right)dt^{2}+\frac{dr^{2}}{1-\frac{2M(1+\lambda^ {2})}{r}}+r^{2}d\Omega_{2}^{2} \tag{54}\]
The equations of motion in the absence of a gravitational wave would look like,
\[\ddot{t}+\frac{2}{r(r-2)}\ \dot{r}\dot{t} =0 \tag{55}\] \[\ddot{r}-\frac{\dot{r}^{2}}{r}\left(\frac{1+\lambda^{2}}{r-2(1+ \lambda^{2})}\right)+\left(1-\frac{2(1+\lambda^{2})}{r}\right)\left(\frac{\dot {t}^{2}}{r^{2}}-r\dot{\phi}^{2}\right) =0\] (56) \[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi} =0 \tag{57}\]
Using the above equations of motion, we obtained two solutions for two geodesics, each with same initial conditions for \(\phi(0),\ \dot{\phi}(0),\ \dot{r}(0)\) and \(\dot{t}(0)\) but differing in the initial values \(r(0)\) and \(t(0)\). We then compute the \(\Delta r\), \(\Delta t\) and \(\Delta v\) values which are the difference between the respective coordinates in each geodesic solution and plot them. Now, to see the effect of a passing gravitational wave in this spacetime, we must modify the above metric. We can do this by keeping in mind that gravitational waves are described in TT gauge and thus (considering zero cross-polarisation) we modify the \(g_{\theta\theta}\) and \(g_{\phi\phi}\) components.
In the presence of gravitational wave, the metric becomes,
\[ds^{2}=-\left(1-\frac{2M}{r}\right)dt^{2}+\frac{dr^{2}}{1-\frac{2M(1+\lambda^ {2})}{r}}+\Big{(}r^{2}+rH(t)\Big{)}d\theta^{2}+\Big{(}r^{2}-rH(t)\Big{)}\sin^ {2}\theta d\phi^{2} \tag{58}\]
And the corresponding geodesic equations look like:
\[\ddot{t}+\frac{2}{r(r-2)}\ \dot{r}\dot{t}-\frac{r^{2}H^{\prime}(t)}{2(r-2 )}\ \dot{\phi}^{2} =0 \tag{59}\] \[\ddot{r}-\frac{\dot{r}^{2}}{r}\left(\frac{1+\lambda^{2}}{r-2(1+ \lambda^{2})}\right)+\left(1-\frac{2(1+\lambda^{2})}{r}\right)\left(\frac{\dot {t}^{2}}{r^{2}}+\frac{(H(t)-2r)}{2}\dot{\phi}^{2}\right) =0\] (60) \[\ddot{\phi}+\left(\frac{2r\dot{r}-\dot{r}H(t)-rH^{\prime}(t)\dot{ t}}{r^{2}-rH(t)}\right)\dot{\phi} =0 \tag{61}\]
The displacement and velocity memory effects for the Damour-Solodukhin wormhole solution have been shown in the figure 6.
Since the Damour-Solodukhin metric depends on the wormhole hair \(\lambda\), we would like to see the how the memory effects depend on the wormhole hair \(\lambda\). We depict the effect in figure 7 for two different values \(\lambda=0.01\) and \(\lambda=0.1\).
Figure 6: Memory effect in Damour-Solodukhin wormhole with \(\lambda=0.01\)
Figure 7: Comparison between different values of \(\lambda\) for Damour-Solodukhin wormhole
### Wormhole solution in Kalb-Ramond Theory
The Einstein-Kalb-Ramond theory is essentially a scalar coupled theory which involves a term \(H_{\mu\nu\lambda}\), which is the source term for the gauge field, that is antisymmetric in three indices and hence is interpreted as the torsion factor that arises in covariant derivative of a tensor when the indices of the Christoffel symbol are antisymmetric. The action [42] for this gauge-invariant theory is
\[S=\int d^{4}x\sqrt{-g}\left(\frac{R(g)}{\kappa}-\frac{1}{12}H_{\mu\nu\lambda}H ^{\mu\nu\lambda}\right) \tag{62}\]
\(R(g)\) is the Ricci scalar curvature and \(\kappa\sim(\text{Planck mass})^{-2}\) is the gravitational coupling constant. Here we consider the Einstein-Kalb-Ramond theory in 4-dimensions where the simplest static and spherically symmetric solution in this theory looks like:
\[ds^{2}=-e^{\nu(r,t)}dt^{2}+e^{\lambda(r,t)}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{ 2}\theta d\phi^{2} \tag{63}\]
Again it is evident from the metric why this is a wormhole solution and not a black hole solution because there is no event horizon. Using the following substitutions gives us a static and spherically symmetric wormhole for a real Kalb-Ramond field [42],
\[e^{\nu}=1,\hskip 56.905512pte^{-\lambda}=1-\frac{b}{r^{2}} \tag{64}\]
where \(b\) is a positive constant and captures the information of the Kalb Ramond field.
The metric then becomes
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\frac{b}{r^{2}}}+r^{2}d\Omega_{2}^{2} \tag{65}\]
The corresponding geodesic equations are:
\[\ddot{t} =0 \tag{66}\] \[\ddot{r}-\frac{\dot{r}^{2}}{r(r^{2}-b)}-\left(r-\frac{b}{r} \right) \dot{\phi}^{2} =0\] (67) \[\ddot{\phi}+\frac{2}{r} \dot{r}\dot{\phi} =0 \tag{68}\]
Now, in the presence of a gravitational wave of the form 8, this metric would look like :
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\frac{b}{r^{2}}}+\Big{(}r^{2}+rH(t)\Big{)}d \theta^{2}+\Big{(}r^{2}-rH(t)\Big{)}\sin^{2}\theta d\phi^{2} \tag{69}\]
And the corresponding geodesic equations are
\[\ddot{t}-\frac{rH^{\prime}(t)}{2}\ \dot{\phi}^{2} =0 \tag{70}\] \[\ddot{r}+\frac{b\dot{r}^{2}}{r(r^{2}-b)}-\left(\frac{2r-H(t)}{2} \right)\left(1-\frac{b}{r^{2}}\right)\ \dot{\phi}^{2} =0\] (71) \[\ddot{\phi}+\left(\frac{2r\dot{r}-H(t)\dot{r}-r\dot{t}H^{\prime}( t)}{r^{2}-rH(t)}\right)\ \dot{\phi} =0 \tag{72}\]
The displacement and velocity memory effects for the Kalb-Ramond wormhole are depicted in figure 8.
### Wormhole solution in Kalb-Ramond Theory
The Einstein-Kalb-Ramond theory is essentially a scalar coupled theory which involves a term \(H_{\mu\nu\lambda}\), which is the source term for the gauge field, that is antisymmetric in three indices and hence is interpreted as the torsion factor that arises in covariant derivative of a tensor when the indices of the Christoffel symbol are antisymmetric. The action [42] for this gauge-invariant theory is
\[S=\int d^{4}x\sqrt{-g}\left(\frac{R(g)}{\kappa}-\frac{1}{12}H_{\mu\nu\lambda}H ^{\mu\nu\lambda}\right) \tag{63}\]
\(R(g)\) is the Ricci scalar curvature and \(\kappa\sim(\text{Planck mass})^{-2}\) is the gravitational coupling constant. Here we consider the Einstein-Kalb-Ramond theory in 4-dimensions where the simplest static and spherically symmetric solution in this theory looks like:
\[ds^{2}=-e^{\nu(r,t)}dt^{2}+e^{\lambda(r,t)}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{ 2}\theta d\phi^{2} \tag{64}\]
Again it is evident from the metric why this is a wormhole solution and not a black hole solution because there is no event horizon. Using the following substitutions gives us a static and spherically symmetric wormhole for a real Kalb-Ramond field [42],
\[e^{\nu}=1,\hskip 56.905512pte^{-\lambda}=1-\frac{b}{r^{2}} \tag{65}\]
where \(b\) is a positive constant and captures the information of the Kalb Ramond field.
The metric then becomes
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\frac{b}{r^{2}}}+r^{2}d\Omega_{2}^{2} \tag{66}\]
The corresponding geodesic equations are:
\[\ddot{t} =0 \tag{67}\] \[\ddot{r}-\frac{\dot{r}^{2}}{r(r^{2}-b)}-\left(r-\frac{b}{r} \right) \dot{\phi}^{2} =0\] (68) \[\ddot{\phi}+\frac{2}{r} \dot{r}\dot{\phi} =0 \tag{69}\]
Now, in the presence of a gravitational wave of the form 8, this metric would look like :
\[ds^{2}=-dt^{2}+\frac{dr^{2}}{1-\frac{b}{r^{2}}}+\Big{(}r^{2}+rH(t)\Big{)}d \theta^{2}+\Big{(}r^{2}-rH(t)\Big{)}\sin^{2}\theta d\phi^{2} \tag{70}\]
And the corresponding geodesic equations are
\[\ddot{t}-\frac{rH^{\prime}(t)}{2}\ \dot{\phi}^{2} =0 \tag{71}\] \[\ddot{r}+\frac{b\dot{r}^{2}}{r(r^{2}-b)}-\left(\frac{2r-H(t)}{2} \right)\left(1-\frac{b}{r^{2}}\right)\ \dot{\phi}^{2} =0\] (72) \[\ddot{\phi}+\left(\frac{2r\dot{r}-H(t)\dot{r}-r\dot{t}H^{\prime}( t)}{r^{2}-rH(t)}\right)\ \dot{\phi} =0 \tag{73}\]
The displacement and velocity memory effects for the Kalb-Ramond wormhole are depicted in figure 8.
The metric 65 depends on the wormhole hair \(b\). We show in figure 9 how the memory effect depend on the wormhole hair \(b\) for two different values of \(b\) from \(O(10^{-1})\) to \(O(1)\) and thus notice a difference.
We again try to demonstrate memory effect in a static and spherically symmetric wormhole solution of the Kalb-Ramond theory but this time we consider a more general expression given in [43], which is of the form 1, with \(f(r)\)
Figure 8: Memory effect in Static and Spherically symmetric solution of Kalb Ramond field with \(b=0.1\)
Figure 9: Comparison between different values of the parameter \(b\) in Static and Spherically symmetric solution of Kalb-Ramond field
and \(g(r)\) are given by,
\[f(r) =1-\frac{2}{r}-\frac{b}{3r^{3}} \tag{73}\] \[g(r) =1-\frac{2}{r}-\frac{b}{r^{2}} \tag{74}\]
This equation helps us set the initial conditions for the geodesic evolution. For the given metric, in the absence of a gravitational wave, the geodesic equations are:
\[\ddot{t}+\frac{3}{r}\left(\frac{2r^{2}+b}{3r^{3}-6r^{2}-b}\right) \ \dot{r}\dot{t} =0 \tag{75}\] \[\ddot{r}-\left(\frac{r+b}{r^{2}-2r-b}\right)\frac{\dot{r}^{2}}{r}+ \frac{(2r^{2}+b)(r^{2}-2r-b)}{2r^{6}}\ \dot{t}^{2}-\left(r-2-\frac{b}{r}\right)\ \dot{\phi}^{2} =0\] (76) \[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi} =0 \tag{77}\]
In presence of a gravitational wave of the form 8 in TT gauge, the metric becomes
\[ds^{2}=-\left(1-\frac{2}{r}-\frac{b}{3r^{3}}\right)dt^{2}+\frac{dr^{2}}{1- \frac{2}{r}-\frac{b}{r^{2}}}+\Big{(}r^{2}+rH(t)\Big{)}d\theta^{2}+\Big{(}r^{2 }-rH(t)\Big{)}\sin^{2}\theta d\phi^{2} \tag{78}\]
The corresponding equations of motion are
\[\ddot{t}-\frac{3r^{4}H^{\prime}(t)}{2(3r^{3}-6r^{2}-b)}\ \dot{\phi}^{2}+\frac{3}{r}\left(\frac{2r^{2}+b}{3r^{3}-6r^{2}-b}\right)\ \dot{r}\dot{t} =0 \tag{79}\] \[\ddot{r}-\left(\frac{r+b}{r^{2}-2r-b}\right)\frac{\dot{r}^{2}}{r} +\frac{(2r^{2}+b)(r^{2}-2r-b)}{2r^{6}}\ \dot{t}^{2}+\left(\frac{H(t)-2r}{2}\right)\left(1- \frac{2}{r}-\frac{b}{r^{2}}\right)\ \dot{\phi}^{2} =0\] (80) \[\ddot{\phi}+\left(\frac{2r\dot{r}-\dot{r}H(t)-rH^{\prime}(t)\dot{ t}}{r^{2}-rH(t)}\right)\ \dot{\phi} =0 \tag{81}\]
The displacement and velocity memory effects for this general form of Kalb-Ramond solution are depicted in figure
The dependence of the memory effect on the wormhole hair \(b\) is shown in figure 11.
### Braneworld wormholes
The braneworld theory [5] is a higher dimensional theory of spacetime which says that all matter in our universe exists on a four dimensional brane. The length between two such branes may be dynamic in nature and is filled with the
Figure 11: Comparison between different choices for the parameter \(b\) in static and spherically symmetric solution of the Kalb-Ramond theory
Figure 10: Memory effect in static and spherically symmetric solution in Kalb-Ramond theory with \(b=0.1\)
five dimensional bulk. The advantage of working with wormholes in the braneworld scenario is that most models of wormholes require exotic matter to sustain such structures which gives rise to questions regarding their stability and existence but in this case we can avoid dealing with exotic matter because of the presence of a higher dimension. The exotic matter is an attribute of the five dimensional bulk and since we live in four dimensions, we can work around it. A Braneworld Wormhole connects spacetime regions on the same brane. A detailed analysis of GW memory in this background has been provided in [25] using Bondi-Sachs coordinates. Here we write down the metric as follows
\[ds^{2}=-\left(\alpha+\lambda\sqrt{1-\frac{2}{r}}\right)^{2}dt^{2}+\frac{dr^{2} }{1-\frac{2}{r}}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2} \tag{82}\]
where \(\alpha\) and \(\lambda\) are taken to be real and positive to avoid the formation of a naked singularity. However, the above metric is not asymptotically flat. Hence we redefine the time coordinate as \(t\to t/(\alpha+t)\) and write the metric in terms of a new parameter \(p=\alpha/\lambda\)
\[ds^{2}=-\left(\frac{p+\sqrt{1-\frac{2}{r}}}{p+1}\right)^{2}dt^{2}+\frac{dr^{2} }{1-\frac{2}{r}}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2} \tag{83}\]
Again note that this is not a black hole but a wormhole solution because for non-zero values of the parameter \(p\), the \(r=2M\) surface is a null surface but is not the killing horizon for the killing vector \(\xi_{t}^{\mu}=\left(\partial/\partial t\right)^{\mu}\) and hence there is no event horizon. However, the \(p=0\) limit gives back the original Schwarzschild black hole metric. The corresponding geodesic equations are:
\[\ddot{t}+\frac{2\dot{r}t}{r\left(p\sqrt{1-\frac{2}{r}}r+r-2\right)}=0 \tag{84}\] \[\ddot{r}+\frac{\sqrt{1-\frac{2}{r}}\left(p+\sqrt{1-\frac{2}{r}} \right)\dot{t}^{2}}{(p+1)^{4}r^{2}}+\frac{\dot{r}^{2}}{2r-r^{2}}-(r-2)\dot{ \phi}^{2}=0\] (85) \[\ddot{\phi}+\frac{2}{r}\ \dot{r}\dot{\phi}=0 \tag{86}\]
Whereas in the presence of a gravitational wave in TT gauge with a pulse profile, the metric looks like
\[ds^{2}=-\left(\frac{p+\sqrt{1-\frac{2}{r}}}{p+1}\right)^{2}dt^{2}+\frac{dr^{2} }{1-\frac{2}{r}}+\Big{(}r^{2}+rH(t)\Big{)}d\theta^{2}+\Big{(}r^{2}-rH(t) \Big{)}\sin^{2}\theta d\phi^{2} \tag{87}\]
And the corresponding geodesic equations are:
\[\ddot{t}+\frac{2\ \dot{r}\dot{t}}{r\left(p\sqrt{1-\frac{2}{r}}r+r-2 \right)}-\frac{rH^{\prime}(t)(p+1)^{2}}{2\left(p+\sqrt{1-\frac{2}{r}}\right)^ {2}}\ \dot{\phi}^{2}=0 \tag{88}\] \[\ddot{r}+\frac{r-2}{r^{3}(1+p)^{2}}\left(1+\frac{p}{\sqrt{1-\frac {2}{r}}}\right)\ \dot{t}^{2}-\frac{\dot{r}^{2}}{r(r-2)}+\left(\frac{H(t)-r}{2}\right)\left(1- \frac{2}{r}\right)\ \dot{\phi}^{2}=0\] (89) \[\ddot{\phi}+\left(\frac{2r\dot{r}-\dot{r}H(t)-rH^{\prime}(t)t}{r ^{2}-rH(t)}\right)\dot{\phi}=0 \tag{90}\]
The displacement and velocity memory effects are shown in figure 12.
Dependence of the memory effects on the wormhole hair \(p\) are depicted in figure 13 for two different values of \(p\).
## V Comparison of memory effect
We have studied memory effect for various static and spherically symmetric geometries, some of which represent black holes and others are wormholes. We now combine all our results in a single plot which shows that the memory effect
Figure 12: Memory effect in Braneworld Wormhole with \(p=0.01\)
Figure 13: Comparison between different values of the metric parameter \(p\) in Braneworld Wormhole
obtained in difference geometries are quite distinct from each other. In the following plot (figure 14) we have taken specific values of the parameters but a more extensive study with a variety of parameter values can be performed in comparison to a Schwarzschild black hole to standardise the differences between those spacetime geometries. In figures (a)a and (b)b we can clearly see that memory effect manifests differently for different spacetime geometries. Figure (c)c represents a comparative study of velocity memory effect for various static, spherically symmetric spacetimes.
## VI Conclusion
With the current advancements in Gravitational Wave research and the promise of highly sensitive upcoming detectors, we now have the opportunity of studying systems and phenomena which were out of our reach before. Memory effect is one such occurrence whose detection is becoming more realistic with the improving technology of gravitational wave detectors. We hence propose Memory effect as a criterion for differentiating between various compact object geometries.
In this paper we discuss memory effect in different static and spherically symmetric solutions of Einstein gravity as well as of theories beyond General Relativity. We, at first, have briefly discussed the geometries of various wormhole and black hole spacetimes that those spherically symmetric solutions represent. Then we have analysed the displacement and velocity memory effects by studying neighbouring geodesics in each of these backgrounds in the presence of a localised GW pulse. We have shown explicitly, how the geodesic separations evolve before and after the passage of the pulse. This clearly establishes the existence of both displacement and velocity memory effect. We then compare the result with that of Schwarzschild background.
In all our computations we have taken parameter values at least an order of magnitude lower than \(1\) as we have set \(M=1\). If we keep the memory effect in Schwarzschild metric as a benchmark, we can see that all the other geometries
Figure 14: Comparison of the effect of a Gravitational Wave in various spacetime geometries
lie only on one side of the Schwarzschild plot and we expect that they should not cross the Schwarzschild benchmark curve for any positive value of the parameter (keeping in mind the standard of \(M=1\) and the corresponding range of values that parameters can take under the purview of Solar system tests of GR). (Note: We have not shown Boulware-Deser in figures 14a and 14c as the scale in which memory effect is manifested in this spacetime is outside the scale of all other metrics and hence it could not be included in this plot which has been set to a particular vertical scale).
One caveat regarding the use of gravitational memory effect as a measuring tool is that it can only differentiate between different exterior geometries, for example, it cannot be used to differentiate between a black hole and a compact object having identical static and spherically symmetric geometry in the exterior. Therefore this method can only differentiate between compact objects if they give rise to different background geometry. Most of the spacetime geometries that we have considered here are dependent on certain parameters representing black hole/wormhole hairs. We have shown through our analysis how the GW memory depends on these hairs for a wide range of values for the parameters (but small values as we fix \(M=1\) everywhere).
Current ground based gravitational wave detectors, like LIGO, have a strain sensitivity of about \(10^{-20}\). LIGO is insensitive to the memory from most sources because the detector response timescale is generally much shorter (of the order of few milliseconds) than the rise-time for the memory signals [28]. We hope that future detectors like LISA would be the perfect setup for seeing memory effect due to the fact that it will have higher strain sensitivity (of the order of \(10^{-23}\)) in the low-frequency band where typical memory sources are stronger [45; 28]. LISA has a longer detector response time scale (of the order of few years) and hence has a higher chance of data accumulation [46; 44]. Since it is a space-based system, it will naturally be in free-fall throughout its course. From an experimental point of view, memory effect is important because it permits a measurement to be made, not during a short burst of gravitational radiation, but over a much longer time, during which the particles can still be assumed to be free.
Although we have considered static and spherically symmetric spacetime geometries, observational data indicates that most astrophysical systems in our universe undergo rotation. Hence a possible future goal would be to study gravitational memory effect in rotating compact objects. It would be interesting to see how the memory effect, for example, in a rotating black hole would differ from a stationary black hole as well as any other black hole or wormhole model. Also we can study the memory effect using symmetries at null infinity using the Bondi-Sachs formalism and explore how the variation in the Bondi mass aspect, related to the memory effect, depend explicitly on different black hole and wormhole backgrounds used here, following the formalism used in [25]. We hope we can address these issues in near future.
## VII Acknowledgement
We acknowledge Sumanta Chakraborty for initiating this project. We also thank him for various insightful comments and discussions during different stages of this project. SG acknowledges IACS (Indian Association for the Cultivation of Science) for providing financial assistance through the Master's Fellowship. SB acknowledge DAE for providing a post-doctoral fellowship through RRF scheme (grant no: \(1003/(6)/2021/\text{RRF}/\text{R\&D}-\text{II}/4031,\text{dated}:20/03/2021\)).
|
2309.12752 | Insights from exact social contagion dynamics on networks with
higher-order structures | Recently there has been an increasing interest in studying dynamical
processes on networks exhibiting higher-order structures, such as simplicial
complexes, where the dynamics acts above and beyond dyadic interactions. Using
simulations or heuristically derived epidemic spreading models it was shown
that new phenomena can emerge, such as bi-stability/multistability. Here, we
show that such new emerging phenomena do not require complex contact patterns,
such as community structures, but naturally result from the higher-order
contagion mechanisms. We show this by deriving an exact higher-order SIS model
and its limiting mean-field equivalent for fully connected simplicial
complexes. Going beyond previous results, we also give the global bifurcation
picture for networks with 3- and 4-body interactions, with the latter allowing
for two non-trivial stable endemic steady states. Differently from previous
approaches, we are able to study systems featuring interactions of arbitrary
order. In addition, we characterise the contributions from higher-order
infections to the endemic equilibrium as perturbations of the pairwise
baseline, finding that these diminish as the pairwise rate of infection
increases. Our approach represents a first step towards a principled
understanding of higher-order contagion processes beyond triads and opens up
further directions for analytical investigations. | István Z. Kiss, Iacopo Iacopini, Péter L. Simon, Nicos Georgiou | 2023-09-22T09:53:35Z | http://arxiv.org/abs/2309.12752v1 | # Insights from exact social contagion dynamics
###### Abstract
Recently there has been an increasing interest in studying dynamical processes on networks exhibiting higher-order structures, such as simplicial complexes, where the dynamics acts above and beyond dyadic interactions. Using simulations or heuristically derived epidemic spreading models it was shown that new phenomena can emerge, such as bi-stability/multistability. Here, we show that such new emerging phenomena do not require complex contact patterns, such as community structures, but naturally result from the higher-order contagion mechanisms. We show this by deriving an exact higher-order SIS model and its limiting mean-field equivalent for fully connected simplicial complexes. Going beyond previous results, we also give the global bifurcation picture for networks with 3- and 4-body interactions, with the latter allowing for two non-trivial stable endemic steady states. Differently from previous approaches, we are able to study systems featuring interactions of arbitrary order. In addition, we characterise the contributions from higher-order infections to the endemic equilibrium as perturbations of the pairwise baseline, finding that these diminish as the pairwise rate of infection increases. Our approach represents a first step towards a principled understanding of higher-order contagion processes beyond triads and opens up further directions for analytical investigations.
## I Introduction
Complex networks provide a powerful representation for the backbone of complex systems by describing their structural connections in terms of nodes, i.e., individuals, that interact through links [1; 2; 3; 4]. Recently there has been an increasing interest in non-pairwise approaches to networked populations [5; 6; 7]. In fact, many real-world systems are composed by elements that interact in groups of different sizes, stretching the order of these fundamental interactions beyond the dyads. This is particularly true for social systems, where most social interactions involve more than two people at the time --while links, by definition, can connect only pairs of individuals [8]. Higher-order network approaches can instead be used to explicitly encode the many-body social interactions that mediate the daily communication of human and non-human animal societies [9; 10; 11]. Hypergraphs and simplicial complexes, differently from networks, allow for interactions between any number of units, and can therefore provide a more accurate "higher-order" description for those systems featuring group interactions [12; 13].
Modelling efforts in this directions have already shown that landmark dynamical processes on networks can behave very differently when we let nodes interact in groups composed by more than two individuals at the time [14; 15; 16; 17; 18]. Among these, extensions of adoption processes beyond pairwise interactions led to the appearance of new phenomenologies, such as critical mass effects and bi-stability in social contagion and norm emergence, whose presence depends on both dynamical and structural quantities [19]. For example, groups can amplify small initial opinion biases accelerating consensus formation in voter models [20]. In higher-order spreading processes it has been shown that the inclusion of peer pressure coming from group interactions can change the nature of the phase transition from an epidemic-free to an endemic state, allowing for their co-existence [21; 22; 23]. Structural features, as heterogeneity, can suppress the onset of bi-stability [24; 25], or even lead to multi-stability and intermittency when the system presents community structure and groups follow a critical-mass dynamics [26]. Group size plays also an important role. A disease spreading through higher-order mechanisms tends to be concentrated and sustained by the largest groups [27]. By contrast, when groups modulate the diffusion of social conventions starting from a seed of committed agents, a non-monotonic dependence on the group size has been found [28].
Despite the interesting insights obtained via the inclusion of these higher-order mechanisms, rigorous theoretical studies in this directions have been very limited, and most analytical treatments either rely on heuristic methods for their derivation or on approximations that are true only under certain conditions; this is the case for homogeneous and
heterogeneous mean-field models [21; 24; 29], pair-based and triadic approximations [30; 31], microscopic Markov-chain approaches [22; 32], and approximate master equations [25; 27; 33].
Here, we devise a principled analytical formulation of the simplicial contagion model [21] on fully connected higher-order structures and investigate its emerging dynamics. While most previous studies focused on the dynamics mediated by 2-body (1-simplices) and 3-body (2-simplices), we study the model in details up to 4-body interactions, i.e., when the infection spreads in complete 3-complexes. We then show, starting from the resulting Kolmogorov forward equations, how one can derive mean-field models for complete simplicial 2- and 3-complexes, and we prove their exactness for a number of nodes that tends to infinity. Going even further, we conjecturing a mean-field equation for simplicial contagion on a complete structure up to an arbitrary order \(M\), a complete simplicial \(M\)-complex. In addition, we show how one can map the additional infection pressure brought by higher-order interactions as perturbations of the classic system where infections are simply mediated by traditional pairwise links (1-simplices). Finally, through an extensive bifurcation analysis of the contagion dynamics on a system up to 4-body interactions, we show that, in addition to the bi-stability already found when adding 3-body contributions to a spreading dynamics, the phenomenology at higher-orders is further enriched and multi-stability can emerge --where more than one non-trivial stationary state can co-exist.
The paper is structured as follows. In section II, we formulate the exact Susceptible-Infected-Susceptible (SIS) contagion dynamics on fully connected structures. We formulate the Kolmogorov forward equations for a simplicial contagion on a complete simplicial 2- and 3-complex, derive the associate mean-field models, and prove that they are exact in the termodinamic limit. We close the section with a conjecture on the most general case of a complete simplicial \(M\)-complex. In section III, we present a full bifurcation analysis of the resulting mean-field models for complete simplicial 2- and 3-complexes. For the case up to 2-complexes, we study the full phase diagram and the stability of the solutions as a function of the infection parameters. The system presents two distinct bifurcation scenarios, either a traditional transcritical bifurcation, or a fold bifurcation leading to bi-stability. We then repeat the analysis for contagion dynamics running on complete simplicial 3-complexes, finding a much richer scenario where the system can display the transcritical behaviour, bi-stability, and fold bifurcation with the fold placed both before or after the transcritical point (multistability). Finally, in section IV we conclude by interpreting our findings and give a brief overview of the possible next challenges.
## II From Exact Stochastic to Limiting Mean-Field Models
In this section we lay down the basic formulation for an exact simplicial SIS-like dynamics on a fully connected simplicial complex composed by \(N\) interconnected nodes.
We represent the individuals of a social system as a simplicial complex, that is a collection of \(k\)-simplices [12; 13; 34]. Each \(k\)-simplex (where \(k\) denotes the order) represents a group interaction among \(k+1\) nodes (size \(k+1\)). Under this framework, nodes are called 0-simplices, pairwise links are 1-simplices, etc. In addition, in order for a simplicial complex to be valid it requires downward closure, that is that all the sub-sets of it simplices need to be also part of the complex [35; 36]. For example, if the complex contains the 2-simplex \([i,j,k]\), it must also contain the lower-order combinations \([i,j]\), \([j,k]\), \([i,k]\), \([i]\), \([j]\) and \([k]\). In the social context, it means that whenever a group of individuals is having, for example, a conversation, all the possible sub-groups contained are also assumed to be interacting.
We then consider a contagion dynamics that follows the simplicial contagion model introduced in Ref. [21], according to which an infection can spread in a population at different rates through group interactions of different sizes. More precisely, in addition to the traditional infection along links, or pairwise contagion, we allow for contagion events through simplices (groups of nodes) of arbitrary order --as allowed by the size of the complex. Nodes can belong, as per a traditional SIS model, to two compartments: susceptible nodes (\(S\)) that can acquire an infection upon a contact with an infectious node; infectious nodes (\(I\)) that can pass the infectious to susceptible ones in the neighbourhood. While spontaneous recovery transitions from \(I\) to \(S\) are not affected by the higher-order structure, contagion events controlling the transitions from \(S\) to \(I\) can happen at different orders. In particular, a susceptible node receives a stimulus from every "active", or "contagious" simplex it is part of. A simplex is considered to be infectious if all the nodes composing it, except a single susceptible one, are infectious as well. This is clear in the examples shown in Fig. 1, where the higher-order contagion dynamics is explained for the case of a \(S\) node \(i\) belonging to a 3-simplex under three different scenarios of (rows). Given the nested structure imposed by the simplicial complex, the maximal infection pressure is reached when all the other nodes in the simplex are infectious.
We now derive the exact forward Kolmogorov equations for the case of a simplicial contagion running exclusively via pairwise (1-simplices) and three-body interactions (2-simplices), and we show how a mean-field limit can be derived starting from them. Thereafter, we rigorously show that this model is exact in the limit of \(N\to\infty\). Using the same approach, we then extend the derived SIS model up to four-body interactions (3-simplices), deriving again the associated mean-field formalism (the proof of its exactness is not given as it naturally follows the same idea as for the
previous case up to 2-simplices). Finally, we conjecture the mean-field limit for the most general formulation of the model, that is when infection is mediated by higher-order interactions over arbitrary \(k\)-simplicies, \(k\) being bounded only be network size, i.e. \(1\leq k\leq N-1\). Table 1 provides a recap of the notation that will be used from now on to denote infection rates associated to simplices of different order.
### Higher-order \(Sis\) epidemics on a complete simplicial 2-complex
Let us start from the easiest case of simplicial contagion where there are only two possible mechanisms of transmission: via pairs (1-simplices) and via "triangles" (2-complexes). Given the simplicial SIS model defined above, let us now focus on the possible transitions between the two classes of individuals. If a node is infected (i.e. it is at state \(I\)), then it switches to state \(S\) at a recovery rate \(\gamma>0\). When the number of infected nodes in the simplicial complex is \(k\), one of them will switch to \(S\) with a rate
\[c_{k}=\gamma k.\] (II.1)
If a node is in state \(S\), then it switches to state \(I\) at a rate proportional to the number of infected nodes \(I\) it is linked to--via simplices of different order; in particular, if the \(S\) node is linked to \(k\) infected ones via 1-simplices, then it switches to \(I\) at a rate \(\tau k\), \(\tau>0\). In this case, the number of infected nodes goes up by 1 with a rate of
\[a_{k}=\tau k(N-k).\] (II.2)
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & group size & infectivity & rescaled infectivity \\ \hline
1-simplex & 2 & \(r_{1}=\tau\) & \(s_{1}=\lambda\) \\
2-simplex & 3 & \(r_{2}=\beta\) & \(s_{2}=\mu\) \\
3-simplex & 4 & \(r_{3}=\delta\) & \(s_{3}=\theta\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \(k\)-simplex & \(k+1\) & \(r_{k}\) & \(s_{k}\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters associated to the infection coming from simplices of different order.
Figure 1: Schematic illustration of the different possible channels of infection of the simplicial contagion model. The susceptible (S) node \(i\) participates to the depicted 3-simplex (left column); being part of a simplicial complex, by definition all the sub-faces of the simplex are also present. The infection pressure that \(i\) receives grows with the number of infectious nodes (I) within the considered simplex. Top row: one node is infectious, thus \(i\) can be infected by a single 1-simplex (link) with \(\tau>0\). Middle row: two nodes are infectious, thus \(i\) can be infected either by the two 2-simplices or by the “infectious” 2-simplex (triangle) with \(\beta>0\). Bottom row: all three nodes are infectious, thus \(i\) can be infected either by the three 1-simplices, by the three 2-simplices that they compose, or by the 3-simplex (tetrahedron) with \(\delta>0\).
In addition, three-body interactions provide extra infection pressure to susceptible individual \(S\) (peer pressure). Since the simplicial complex is complete, any distinct pair of infected neighbours of the selected node would act as a an infectious 2-simplex that could also infect the susceptible node. Assuming that there are \(k\) infected neighbours, there is an infection event for that node at an extra rate
\[b_{k}^{\triangle}=\beta\binom{k}{2},\quad\beta>0.\] (II.3)
Since we are on the fully connected structure, the \(k\) infected nodes are neighbours to all the susceptible ones. As such, the number of \(I\) goes up due to the effect of the 2-simplices with a rate
\[\beta_{k}^{\triangle}=\beta\binom{k}{2}(N-k),\quad\beta>0.\] (II.4)
After calling \(I_{t}^{N}\) the number of infected nodes at time \(t\), let us define
\[p_{k}(t)=\mathbb{P}\{I_{t}^{N}=k\},0\leq k\leq N.\] (II.5)
From the symmetry of the structure, due to its full connectivity, the epidemics (i.e. the number of infectious nodes at any given \(t\)) can be mapped to a branching process whose Kolmogorov equations are given by
\[\frac{d}{dt}p_{k}(t)=\begin{cases}(a_{k-1}+\beta_{k-1}^{\triangle})p_{k-1}(t) -(a_{k}+c_{k}+\beta_{k}^{\triangle})p_{k}(t)+c_{k+1}p_{k+1}(t),&3\leq k\leq N- 1,\\ a_{k-1}p_{k-1}(t)-(a_{k}+c_{k}+\beta_{k}^{\triangle})p_{k}(t)+c_{k+1}p_{k+1}(t),&k=2,\\ -(a_{k}+c_{k})p_{k}(t)+c_{k+1}p_{k+1}(t),&k=1,\\ -c_{k}p_{k}(t)+c_{k+1}p_{k+1}(t),&k=0,\\ (a_{k-1}+\beta_{k-1}^{\triangle})p_{k-1}(t)-(a_{k}+c_{k})p_{k}(t),&k=N.\end{cases}\] (II.6)
Note that the Eq. (II.6) can be reduced to
\[\frac{d}{dt}p_{k}(t)=(a_{k-1}+\beta_{k-1}^{\triangle})p_{k-1}(t)-(a_{k}+c_{k} +\beta_{k}^{\triangle})p_{k}(t)+c_{k+1}p_{k+1}(t),\quad 0\leq k\leq N\] (II.7)
subject to the boundary conditions
\[a_{0}=a_{-1}=0,\quad\beta_{0}^{\triangle}=\beta_{1}^{\triangle}=\beta_{-1}^{ \triangle}=0,\quad c_{N+1}=0.\] (II.8)
The expected number of infected nodes \(m^{N}(t)=\mathbb{E}(I_{t}^{N})=\sum_{k=0}^{N}kp_{k}(t)\) at time \(t\), after an index shift shown in Appendix A, satisfies the equation
\[\frac{d}{dt}m^{N}(t)=\sum_{k=0}^{N}k[a_{k-1}p_{k-1}(t)-(a_{k}+c_{k})p_{k}(t)+c _{k+1}p_{k+1}(t)]+\beta\sum_{\ell=2}^{N-1}(N-\ell)\binom{\ell}{2}p_{\ell}(t).\] (II.9)
We can always find a stochastic process \(\eta_{t}^{N}\) so that
\[\frac{I_{t}^{N}}{N}=\frac{1}{N}m^{N}(t)+\eta_{t}^{N}.\] (II.10)
Equation (II.10) can be rearranged as
\[\frac{I_{t}^{N}-m^{N}(t)}{N}=\eta_{t}^{N},\] (II.11)
which defines \(\eta_{t}^{N}\) as the error between the density of infected individuals and their mean. If one assumes the existence of a hydrodynamic limit for \(I_{t}^{N}/N\) so that it is concentrated around its mean, it would imply that we can take the limit as \(N\to\infty\) in Eq. (II.10), giving that, in the context of weak convergence, the randomness vanishes in the limit, i.e.
\[\lim_{N\to\infty}\frac{I_{t}^{N}}{N}=\lim_{N\to\infty}\frac{1}{N}m^{N}(t)=m_ {I}(t),\qquad\lim_{N\to\infty}\eta_{t}^{N}=0.\] (II.12)
For \(\eta^{N}_{t}\) to converge unscaled to \(0\), it must be that \(\text{Var}(\eta^{N}_{t})\to 0\) as \(N\to\infty\), so that it can converge to a constant. Since \(\eta^{N}_{t}\) is uniformly bounded, the weak convergence to \(0\) implies the convergence of expectations and can be upgraded to an \(\mathcal{L}^{3}\) convergence, so
\[\lim_{N\to\infty}\mathbb{E}(\eta^{N}_{t})=\lim_{N\to\infty}\mathbb{E}((\eta^{N }_{t})^{p})=0.\] (II.13)
Moreover, the limiting \(m_{I}(t)\) will come out as the scaling limit of (II.9) and it is a density driven process. The coefficients in that equation also need to scale with \(N\), otherwise the large \(N\)-limit will be trivially \(0\) or \(\infty\). The correct scalings for \(\tau\) and \(\beta\) are already hidden in Equations (II.2) and (II.4); they need to scale proportionally to the number of simplices they correspond to. The infectivity parameter \(\tau\) acts on links, and the simplicial complex contains \(O(N^{2})\) of them; similarly, \(\beta\) acts on \(2\)-simplices, and these are \(O(N^{3})\). To this end, we introduce two positive parameters \(\lambda\) and \(\mu\) so that
\[\tau=\lambda N^{-1},\qquad\beta=\mu N^{-2}.\] (II.14)
The extra \(N\) power needed to balance \(\tau\) and \(\beta\) with the number of corresponding simplices is coming from the division with \(N\) in Equation (II.10).
Theorem II.1 (Hydrodynamic limit for \(2\)-simplices infections in fully connected simplicial \(2\)-complexes).: _Fix a time horizon \(T>0\) and assume that the following uniform pointwise limit_
\[\lim_{N\to\infty}\sup_{t\leq T}\Big{\|}\frac{1}{N}m^{N}(t)-m_{I}(t)\Big{\|}_{ \infty}=0\] (II.15)
_exists as a deterministic function on \([0,1]\). Furthermore, assume the \(\eta^{N}_{t}\), defined by (II.10), satisfies_
\[\lim_{N\to\infty}\sup_{t\leq T}\mathbb{E}(\eta^{N}_{t})=0.\] (II.16)
_Finally, let \(\gamma\) denote the recovery rate and assume (II.14)._
_Then we have the weak convergence_
\[\lim_{N\to\infty}\frac{1}{N}I^{N}_{t}=m_{I}(t),\] (II.17)
_and \(m_{I}(t)\) solves the ODE_
\[\frac{d}{dt}m_{I}(t)=(\lambda-\gamma)m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda \Big{)}m_{I}(t)^{2}-\frac{\mu}{2}m_{I}(t)^{3}.\] (II.18)
The detailed proof of Theorem II.1 is given in Appendix A where the mathematical reason for the \(\beta,\tau\) scalings becomes apparent. In Figure 2 (left) we numerically show that the agreement between the Kolmogorov equations and their mean-field limit is excellent. Furthermore, it is evident that the added peer pressure brought by the \(2\)-simplices elevates the endemic equilibrium, but the same increase in the value of the transmission rate across \(2\)-simplices as for the pairwise rate of transmission leads to a less marked effect on the values of the endemic equilibrium. This suggests that pairwise transmission remains the main driver of the epidemics, especially for high values of the \(1\)-simplices infectivity.
### Higher-order \(Sis\) epidemics on a complete simplicial \(3\)-complex
We now follow the steps of the proof of Theorem II.1, but extending the previous case study up of one order. This means that the simplicial complex includes also \(4\)-body interactions (\(3\)-simplices) that also act as possible channels of infection for susceptible nodes. The proof goes along the same lines, but it also highlights how the scaling of the rates needs to inversely scale with the number of simplices composing the complex. We want to keep the infection pressure from \(1\)- and \(2\)-simplices from before (though it can be set to \(0\) if necessary) and also add an extra pressure \(\delta\) due to transmission mediated by \(3\)-simplices \(I-I-I-S\).
As before, if a node is infected (i.e. it is at state \(I\)), then it switches to state \(S\) at a recovery rate \(\gamma>0\). When the number of infected is \(k\), one of them will switch to \(S\) with a rate
\[c_{k}=\gamma k.\] (II.19)
The proof of Theorem II.1 is given in Appendix B.
The proof of Theorem II.
If a node is at state \(S\) then it switches to state \(I\) at rate proportional to the number of infected nodes \(I\) it is linked to; in particular if \(S\) is linked to \(k\) infected, then it switches to \(I\) at a rate \(\tau k\), \(\tau>0\). Then the number of infected goes up by 1 with a rate of
\[a_{k}=\tau k(N-k).\] (II.20)
The 2-simplex pressure still remains at
\[\beta_{k}^{\triangle}=\beta\binom{k}{2}(N-k),\quad\beta>0,k\geq 2.\] (II.21)
Finally, for the infection pressure coming from the 3-simplices there are precisely \(\binom{k}{3}\) ways to select 3 infected nodes and \(N-k\) ways to select a susceptible node, so, after introducing the new parameter \(\delta\) for the 3-simplices, we define
\[\delta_{k}^{\square}=\delta\binom{k}{3}(N-k),\quad\delta>0,k\geq 3.\] (II.22)
Note that this pressure starts playing a role when \(k\geq 3\), otherwise it is zero. Then the Kolmogorov equations are readily available, and we can compactly write them as
\[\frac{d}{dt}p_{k}(t)=(\alpha_{k-1}+\beta_{k-1}^{\triangle}+\delta_{k-1}^{ \square})p_{k-1}(t)-(\alpha_{k}+c_{k}+\beta_{k}^{\triangle}+\delta_{k}^{ \square})p_{k}(t)+c_{k+1}p_{k+1}(t).\] (II.23)
Following the same methodology as for Theorem II.1, we formulate a the limiting mean-field equation as given below.
**Theorem II.2**.: _Fix a time horizon \(T>0\) and assume that the following uniform pointwise limit_
\[\lim_{N\to\infty}\sup_{t\leq T}\Big{\|}\frac{1}{N}m^{N}(t)-m_{I}(t)\Big{\|}_{ \infty}=0\] (II.24)
_exists as a deterministic function on \([0,1]\). Furthermore, assume that \(\eta_{t}^{N}\), defined by Equation (II.10), satisfies_
\[\lim_{N\to\infty}\sup_{t\leq T}\mathbb{E}(\eta_{t}^{N})=0.\] (II.25)
_Let \(\gamma\) denote the recovery rate. Define, as for the previous case of Equation (II.14) and adding another positive parameter \(\theta\), the scaled pressures_
\[\tau=\lambda N^{-1},\quad\beta=\mu N^{-2},\quad\delta=\theta N^{-3}\] (II.26)
_corresponding respectively to infections coming from 1-,2- and 3- simplices. Then_
\[\lim_{N\to\infty}\frac{d}{dt}\frac{m_{N}(t)}{N}=\frac{d}{dt}m_{I}(t)=(\lambda -\gamma)m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda\Big{)}m_{I}(t)^{2}+\left(\frac {\theta}{6}-\frac{\mu}{2}\right)m_{I}(t)^{3}-\frac{\theta}{6}(m_{I}(t))^{4}.\] (II.27)
Figure 2: Comparison between the expected fraction of infected nodes based on solving the forward Kolmogorov equations (continuous lines) and the solution of the mean-field equation (markers). The case of 1- and 2-simplices (Kolmogorov equations (II.7) and mean-field equation (II.18)) and 1-, 2- and 3-simplices (Kolmogorov equations (II.23) and mean-field equation (A.6)) are shown in the left and right panel, respectively. The following parameter combinations: leftmost panel \((\lambda,\mu)\)=\((1.5,3)\), \((3,1.5)\), \((3,3)\) corresponding to increasing level of prevalence. The same values apply for the rightmost panel, with a fixed \(\theta=5\) for all three cases. For this numerical tests we considered \(N=2500\) nodes, and the epidemic is started with a seed \(500\) infected nodes.
A sketch of the proof is given in Appendix A. The finer technical details of the proof of this theorem are not included, as the arguments follow closely those of Theorem II.1.
In Figure 2 (right) we show the outcome of numerical tests comparing the expected number/fraction of infectious nodes based on the Kolmogorov equations versus that resulting from solving one single ordinary differential equation, the mean-field limit. As before, the agreement between the two is excellent. The relative increase in the endemic levels in a one-to-one comparison between the left and rightmost panels reveals that the added infectious pressure brought my higher-order simplices (3-simplices) contributes less than the lower order ones (2-simplices). Furthermore, the effect of the 3-simplices becomes more/less significant when the pairwise infection rate decreases/increases.
### Higher-order \(Sis\) epidemics on a complete simplicial \(M\)-complex
In the previous subsections we chose to present the proof of Theorem II.1 for a simplicial contagion up to order 2 (triangles) only for purposes and clarity. As the proof for order 3 demonstrate, the scaling performed can be generalised to higher-order simplices by building on previously obtained scaling limits. This means that the ingredients of the previous two proofs are sufficient to prove a general hydrodynamic result via induction, where extra infection pressure can come from any complete simplicial i-complex \(1\leq i\leq M\) for some fixed \(M\). The formal proof is left to the reader, as it is a repetition of the previous steps and induction. The base case of the induction when \(M=2\) is the proof of Theorem II.1.
**Theorem II.3**: _Fix a time horizon \(T>0\) and assume that the following uniform pointwise limit_
\[\lim_{N\to\infty}\sup_{t\leq T}\Big{\|}\frac{1}{N}m^{N}(t)-m_{I}(t)\Big{\|}_ {\infty}=0,\] (II.28)
_exists as a deterministic function on \([0,1]\). Furthermore assume \(\eta_{t}^{N}\), defined by (II.10), satisfies_
\[\lim_{N\to\infty}\sup_{t\leq T}\mathbb{E}(\eta_{t}^{N})=0\] (II.29)
_Let \(\gamma\) denote the recovery rate. Define \(r_{i}\) as the infection pressure on one single susceptible node in a generic active simplex of size (\(i+1\)), with all the other \(i\) nodes being infected, \(1\leq i\leq M\) for some \(M\) finite. Furthermore assume that each \(r_{i}\) scales according to the order of appearance of the simplex i.e._
\[r_{i}=s_{i}N^{-i},\quad k\geq 1\] (II.30)
_for some \(s_{i}\geq 0\)._
_Then_
\[\lim_{N\to\infty}\frac{d}{dt}\frac{m_{N}(t)}{N}=\frac{d}{dt}m_{I}(t)=\left(s_{ 1}-\gamma\right)m_{I}(t)+\sum_{i=1}^{M-1}\left(\frac{s_{i+1}}{(i+1)!}-\frac{s _{i}}{i!}\right)(m_{I}(t))^{i+1}-\frac{s_{M}}{M!}(m_{I}(t))^{M+1}.\] (II.31)
Note that the above also works for the pairwise infection (\(M=1\)), since in that case the sum is interpreted as empty and only the first and last term survives in the equation above. As expected, we have \(dm_{I}(t)/dt=\left(s_{1}-\gamma\right)m_{I}(t)-s_{1}m_{I}(t)^{2}\).
## III Bifurcation analysis of the resulting mean-field models
We now explore the phase diagrams associated to the mean-field models derived in the previous sections, for both structures.
### The complete simplicial 2-complex
From Theorem II.1, the limiting equation is
\[\frac{d}{dt}m_{I}(t)=(\lambda-\gamma)m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda \Big{)}m_{I}(t)^{2}-\frac{\mu}{2}m_{I}(t)^{3}.\] (III.1)
The steady state \(m_{I}=x\) is determined by this equation when \(\dfrac{d}{dt}m_{I}(t)=0\) is substituted, i.e. \(x\) is the solution of
\[0=(\lambda-\gamma)x+\Big{(}\dfrac{\mu}{2}-\lambda\Big{)}x^{2}-\dfrac{\mu}{2}x^{ 3}=x\Big{(}(\lambda-\gamma)+\Big{(}\dfrac{\mu}{2}-\lambda\Big{)}x-\dfrac{\mu}{ 2}x^{2}\Big{)}.\] (III.2)
Assuming that the 2-simplices are actually contributing to the epidemic, that is \(\mu\neq 0\), we have up to three solutions
\[x=0,\quad x=\dfrac{\frac{\mu}{2}-\lambda\pm\sqrt{\Big{(}\frac{\mu}{2}-\lambda \Big{)}^{2}+2\mu(\lambda-\gamma)}}{\mu}.\] (III.3)
The existence of an epidemic-free state (trivial solution) does not depend on any parameter and it is always accessible. For the other solution, we can easily compute from Eq. (III.3) the bifurcation diagram in the \((\lambda,\mu)\) parameter plane by using the following elementary facts about the solutions of a quadratic equation in the form \(0=c+bx-ax^{2}\). Denoting the discriminant by \(D=b^{2}+4ac\) and the position of the maximum by \(m=\frac{b}{2a}\), the following cases can be distinguished:
1. If \(D<0\), then there are no real solution;
2. If \(D>0\) and \(c>0\), then there is a positive and a negative solution;
3. If \(D>0\), \(c<0\) and \(m>0\), then there are two positive solutions;
4. If \(D>0\), \(c<0\) and \(m<0\), then there are two negative solutions.
In our case, when \(a=\mu/2\), \(b=\mu/2-\lambda\), and \(c=\lambda-\gamma\), the discriminant curve, where \(D=0\) (in the positive quadrant of the parameter plane) takes the form \(\lambda=-\mu/2+\sqrt{2\mu\gamma}\) and \(m>0\) is equivalent to \(\mu>2\lambda\).
Applying the simple rules above to our case leads to the bifurcation diagram shown in Figure 3 (left), whose labelled regions are divided as follows:
1. If \(\lambda<-\mu/2+\sqrt{2\mu\gamma}\), there is no non-trivial solution (domain D);
2. If \(\lambda>-\mu/2+\sqrt{2\mu\gamma}\), and \(\lambda>\gamma\), there is a positive and a negative solution (domain B);
3. If \(\lambda>-\mu/2+\sqrt{2\mu\gamma}\), \(\lambda<\gamma\) and \(\mu>2\lambda\), there are two positive solutions (domain C);
4. If \(\lambda>-\mu/2+\sqrt{2\mu\gamma}\), \(\lambda<\gamma\) and \(\mu<2\lambda\), there are two negative solutions (domain A).
Figure 3: Bifurcation diagrams for number and nature of the steady states of the differential equation in Eq. (III.1) associated to a simplicial contagion dynamics on a complete simplicial 2-complex. Without loss of generality, we set \(\gamma=1\); non-dimensionalising time would make \(\gamma\) superfluous. (Left) Different steady state configurations as a function of the two rescaled infectivity parameters \(\lambda\) (1-simplices) and \(\mu\) (2-simplices): (A) zero and two negative solutions, (B) zero, one negative and one positive solution (the \(\mu=2\lambda\) line splits this region into two further regions, above this line the magnitude of the positive solution is greater than that of the negative one, and otherwise below), (C) zero and two positive solutions, and (D) the zero solution only. (Right) Typical plot of the steady states as a function of the standard 1-simplices infectivity for two fixed values of \(\mu=7,5\) and 2 (from top to bottom). The cross sections corresponding to these values are plotted with horizontal dashed-lines in the leftmost panel.
The stability of these steady states can be obtained from the fact that in the cubic case, with a negative coefficient in the cubic term, the middle steady state is unstable and the other two states are stable. That is, for parameter pairs in domains A and D, \(x=0\) is the only stable equilibrium. For parameter pairs in domain B, \(x=0\) is unstable and there is a globally stable positive equilibrium. Finally, for parameter pairs in domain C, \(x=0\) is stable and there is another stable positive equilibrium. Their basin of attraction is separated by the third, unstable steady state (also positive).
The global bifurcation picture can be sampled from the plane by "cutting" it at different values of \(\mu\). The result is shown in Figure 3 (right), where the prevalence at the steady state, denoted by \(m_{I}^{SS}\), are plotted against the rescaled rate of infection via 1-simplices \(\lambda\) for different values of the one for 2-simplices \(\mu\). We note that two distinct bifurcation scenarios are possible; namely a simple transcritical bifurcation and a fold bifurcation leading to bi-stability. It is worth noting that bi-stability only appears for large values of \(\mu\), i.e. a certain amount of infection mediated by 2-simplices is needed in order to sustain a region of bi-stability. We can make this more precise. In Figure 3 (left) we notice that one requirement on \(\mu\) for the system to display bi-stability, be in region C, when \(\lambda\) is varied, is to have \(\mu>2\gamma\); this is the critical \(\mu_{c}=2\gamma\) where the parabola achieves its maximum when viewed in the \((\mu,\lambda)\) plane. However, the conditions for the system to go through region C are \(\lambda<\mu/2\), \(\lambda<\gamma\), and \(\lambda>-\mu/2+\sqrt{2\mu\gamma}\). At this point one needs to check that, if \(\mu>2\gamma\) holds, then one can always find \(\lambda>0\) such that the other three conditions are also satisfied. This will then guarantee that the system will go through region C when \(\lambda\) is varied. One of the conditions is trivial, we can always choose a \(\lambda>0\) such that \(\lambda<\gamma\). However, \(\lambda\) also needs to be such that \(\lambda<\mu/2\). This leads to requiring that \(\lambda<\min\left\{\gamma,\mu/2\right\}\). By the original assumption, \(\gamma<\mu/2\), which leads to \(\lambda<\gamma\). This leaves us to check that there are \(\lambda\) values satisfying simultaneously the following inequaltiy:
\[-\frac{\mu}{2}+\sqrt{2\mu\gamma}<\lambda<\gamma.\] (III.4)
Such values of \(\lambda\) exist if and only if \(-\frac{\mu}{2}+\sqrt{2\mu\gamma}<\gamma\), which is equivalent to \(\sqrt{2\mu\gamma}<\gamma+\frac{\mu}{2}\). However, this is the direct consequence of the inequality between the geometric and arithmetic mean, that is,
\[\sqrt{2\gamma\mu}<\frac{2\gamma+\mu}{2}=\gamma+\mu.\] (III.5)
Setting \(\mu=0\) (i.e. no extra infection pressure beyond pairs) in Equation (II.17) leads to
\[x=0,\quad x=1-\frac{\gamma}{\lambda},\] (III.6)
which are the steady states for the classical model with pairwise infection only.
We can use perturbation theory methods to expand the solutions of the quadratic Equation (III.1), that is the steady states, as a function of \(\mu\). This effectively means that we perceive the additional infection across complete simplicial 2-complexes as a perturbation of the classic system with infections across links only. This leads to
\[x\simeq 1-\frac{\gamma}{\lambda}+\frac{\gamma(\lambda-\gamma)}{2\lambda^{3}}\mu.\] (III.7)
The zeroth-order approximation corresponds to the classical SIS model that accounts for pairwise interactions only, that is \(1-\frac{\gamma}{\lambda}\). As expected, the infection across complete simplicial 2-complexes increases the value of the endemic steady state. More precisely, from Equation (III.7), the contribution of the simplicial contagion is of \(O(1/\lambda^{2})\). Note that the expansion above only works for the case of \(\lambda>\gamma\), which is the standard epidemic threshold for systems composed by pairwise interactions only. Once 2-simplices are added, the observed behaviour is usually interpreted in terms of tipping point dynamics, where depending on the initial condition the epidemic either dies out or it reaches a stable endemic equilibrium [37].
### The complete simplicial 3-complex
From Theorem II.2, the limiting equation is
\[\frac{d}{dt}m_{I}(t)=(\lambda-\gamma)m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda \Big{)}m_{I}(t)^{2}+\left(\frac{\theta}{6}-\frac{\mu}{2}\right)m_{I}(t)^{3}- \frac{\theta}{6}m_{I}(t)^{4}.\] (III.8)
The steady state \(m_{I}=x\) is determined by this equation when \(\frac{d}{dt}m_{I}(t)=0\) is substituted, i.e. \(x\) is the solution of
\[0=(\lambda-\gamma)x+\Big{(}\frac{\mu}{2}-\lambda\Big{)}x^{2}+\left(\frac{ \theta}{6}-\frac{\mu}{2}\right)x^{3}-\frac{\theta}{6}x^{4}=x\Big{[}\lambda- \gamma+\Big{(}\frac{\mu}{2}-\lambda\Big{)}\,x+\left(\frac{\theta}{6}-\frac{ \mu}{2}\right)x^{2}-\frac{\theta}{6}x^{3}\Big{]}.\] (III.9)
As for the previous case, we will now determine the bifurcation diagram in the \((\lambda,\mu)\) parameter plane, while the other two parameters, \(\gamma\) and \(\theta\), will be fixed. The number of steady states is determined, in this cubic case as well, by the discriminant curve. We will exploit the advantages of the parametric representation method that parametrises the discriminant curve by the steady state value \(x\) (see Ref. [38] for more details).
In order to apply the parametric representation method, the cubic term in Equation (III.9) is written in the form
\[f_{0}(x)+\lambda f_{1}(x)+\mu f_{2}(x)=0,\] (III.10)
where
\[f_{0}(x)=\frac{\theta}{6}(x^{2}-x^{3})-\gamma,\qquad f_{1}(x)=1-x,\qquad f_{2 }(x)=\frac{1}{2}(x-x^{2}).\] (III.11)
The discriminant consist of those parameter pairs, where the function has a double root, that is its derivative is also zero at the root, i.e.
\[f_{0}^{\prime}(x)+\lambda f_{1}^{\prime}(x)+\mu f_{2}^{\prime}(x)=0\] (III.12)
holds as well. The main idea of the parametric representation method is to solve system given by Equations (III.10)-(III.12) for \(\lambda\) and \(\mu\) in terms of \(x\), leading to the parametric expression of the discriminant curve. This is especially useful, when the parameters are involved linearly, as in our case. Then the solution of Equations (III.10)-(III.12) can be easily given as
\[\lambda=\frac{f_{0}^{\prime}(x)f_{2}(x)-f_{0}(x)f_{2}^{\prime}(x)}{f_{1}(x)f_ {2}^{\prime}(x)-f_{1}^{\prime}(x)f_{2}(x)}\,\qquad\mu=\frac{f_{0}(x)f_{1}^{\prime}(x)-f_{0}^{ \prime}(x)f_{1}(x)}{f_{1}(x)f_{2}^{\prime}(x)-f_{1}^{\prime}(x)f_{2}(x)}\.\] (III.13)
After substituting Equation (III.11) into Equation (III.13), the discriminant curve is obtained as
\[\lambda=\frac{\theta}{6}x^{2}+\gamma\frac{1-2x}{(1-x)^{2}}\,\qquad\mu=\frac{2 \gamma}{(1-x)^{2}}-\frac{\theta}{3}x\.\] (III.14)
The advantage of the parametric representation method is evident since eliminating \(x\) from these two equations is quite complicated, i.e. to derive the equation of the discriminant curve without parametrising with \(x\) would be difficult.
We now numerically plot the discriminant curve while varying the value of \(x\), as it is shown in Figure 4 for two fixed values of the highest-order infectivity parameters \(\theta=2\) (left) and \(\theta=10\) (right) --modulating the effects of the 3-simplices. The parameter \(x\) of the curve varies along the real axis. We divide the curve into three parts, shown with different colours, as follows. The curve tends to infinity when \(x=1\), hence we will consider the part belonging to \(x<1\) and \(x>1\) separately. Moreover, we are interested in positive steady states, hence we will divide the the \(x<1\) part into two parts, belonging to \(x<0\) and to \(0<x<1\). The parameters \(\lambda\) and \(\mu\) are positive, hence we will investigate only the positive quadrant of the \((\lambda,\mu)\) parameter plane. The following observation can be made based on the shown results, and can be also proved by elementary calculations.
**Proposition III.1**: _The following statements hold for the discriminant curve given in Equation (III.14)._
1. _The part belonging to_ \(x>1\) _does not enter the positive quadrant, hence we will not consider it in further investigations._
2. _The curve touches the vertical line_ \(\lambda=\gamma\) _at_ \(\mu=2\gamma\) _where_ \(x=0\)_._
3. _The curve is locally on the left-hand-side of the vertical line_ \(\lambda=\gamma\)_, if_ \(\theta<6\gamma\)_, and it lies on the right-hand-side of this vertical line, when_ \(\theta>6\gamma\)_._
We note that the last statement follows easily when the formula for \(\lambda\) in Equation (III.14) is rearranged as
\[\lambda=\gamma+x^{2}\left(\frac{\theta}{6}-\frac{\gamma}{(1-x)^{2}}\right).\] (III.15)
Figure 4 also shows that the discriminant curve may have a cusp point. By definition, the cusp point of a curve is that point where \(\lambda^{\prime}(x)=0\) and \(\mu^{\prime}(x)=0\) hold at the same time. The parametric representation offers an easy way to determine the cusp point, namely, the equation
\[f_{0}^{\prime\prime}(x)+\lambda f_{1}^{\prime\prime}(x)+\mu f_{2}^{\prime \prime}(x)=0\] (III.16)
holds as well at the cusp point. Solving the system of Equations (III.10)-(III.12), (III.16) for \(x\) and substituting Equation (III.11) leads to the following equation for the \(x_{c}\) parameter value of the cusp point:
\[\theta(1-x_{c})^{3}=6\gamma.\] (III.17)
This equation has a positive solution for \(x_{c}\) if and only if \(\theta>6\gamma\). Hence, we have the following proposition.
Proposition III.2.: _The following statements hold for the discriminant curve given in Equation (III.14)._
1. _The branch of the discriminant curve belonging to_ \(x>0\) _has a cusp point, if_ \(\theta>6\gamma\)_._
2. _The branch of the discriminant curve belonging to_ \(x>0\) _is a convex arc, if_ \(\theta<6\gamma\)_._
Finally, in order to construct the bifurcation diagram according to the number of positive steady states, we will make use of the so-called tangential property of the parametric representation method (see Ref. [38] for more details).
1. The number of solutions of Equation (III.10) for a given parameter pair \((\lambda,\mu)\) is equal to the number of tangents that can be drawn from \((\lambda,\mu)\) to the discriminant curve.
2. The values of solutions of Equation (III.10) for a given parameter pair \((\lambda,\mu)\) are the \(x\) parameter values of the tangent points along the discriminant curve.
The number of tangents that can be drawn to a convex arc can be easily read geometrically. It can be 2, 1 or 0 according to the position of the point from which we draw the tangent lines. For a cusp however, from certain points it is possible to draw three tangents and it is this property which allows to further increase the number of steady states.
Summarising, we can have two different bifurcation diagrams according to the specific values of the parameters \(\theta\) and \(\gamma\), as it is shown in Figure 4. If \(\theta<6\gamma\) (leftmost panel), the cusp of the bifurcation curve lies in the region where \(x<0\), thus giving no biologically meaningful steady states. Note that, besides the number of solutions shown in the figure, the disease-free steady state is always a steady state. Contrarily, in the case \(\theta>6\gamma\) (rightmost panel), the cusp is on the \(x>0\) branch, and thus the number of positive steady states can be 0, 1, 2 or 3.
The problem of finding the number of positive solutions of Equation (III.8) can also be approached by using Descartes's rule of signs. This states that the number of positive solutions is related to the sign changes in the coefficients of the polynomial when arranged in the canonical order. Since the coefficient of the highest-order term, \(-\theta/6\), is negative, we must have that
\[\frac{\theta}{6}-\frac{\mu}{2}>0,\ \ \frac{\mu}{2}-\lambda<0,\ \ \lambda-\gamma>0 \Longleftrightarrow\theta>3\mu,\ \ \mu<2\lambda,\ \ \lambda>\gamma.\] (III.18)
It is worth to plot the steady states as \(\lambda\) is varied for different fixed value of \(\mu\). The inset in the leftmost panel of Figure 4 shows that there are two regions of particular interest; namely \(\mu=1.8\) and \(\mu=2.2\). As shown in Figure 5, the system goes from displaying a simple transcritical bifurcation (\(\mu=1.8\)) to a fold bifurcation leading to bi-stability (\(\mu=2.2\)). In Figure 6 we consider instead higher contributions coming from the highest-order simplices. Setting \(\theta=10\) leads to a more complex scenario where the system can display the transcritical behaviour, fold bifurcation
Figure 4: The full bifurcation analysis for a simplicial contagion dynamics on a complete simplicial 3-complex showing the number of solutions in the \((\lambda,\mu)\) plane for two fixed values of \(\theta=2\) (left) and \(\theta=10\) (right); the numbers do not include the trivial disease-free steady state. Without loss of generality, we set \(\gamma=1\); non-dimensionalising time would make \(\gamma\) superfluous. The magenta, red and blue lines correspond to \(x>1\), \(0<x<1\) and \(x<0\), respectively. The insets show the subtle geometry of the cusps; these shift from the negative to the positive branch and thus increasing the number of possible steady states. The dot-dashed lines correspond to the \(\lambda=\gamma\) line and the oblique line in the inset corresponds to the \(\mu=2\lambda\) line, these are important to determine when three strictly positive steady states exist. Finally, the dashed lines in the insets represent fixed values of \(\mu\) (1.8 and 2.2 in the leftmost panel, and 1.7, 1.81, 1.87 and 2.1 in the rightmost panel) where the bifurcation profile changes when \(\lambda\) is varied; these are shown explicitly in Figures 5 and 6. The marker (\(\circ\)) in the insets is the transition point from the positive to the negative branch. In the inset of the rightmost panel, the domain with three solutions, or four if zero is included, is the region bounded by the cusp and the \(\lambda=\gamma=1\) line.
with the fold after the transcritical point, fold bifurcation with the fold before the transcritical point, and bi-stability between an endemic and the disease-free steady state. It may be useful to focus on the two fold bifurcations and in particular on the cross-sections shown in the inset of the rightmost panel of Figure 4. From here on, the number of solutions in a region includes the trivial disease-free solution. For \(\mu=1.81\) (Figure 4, top-right panel), and with \(\lambda\) moving from zero to positive values, it is evident that for small values of \(\lambda\) the only steady state is the trivial disease-free steady state. However, as \(\lambda\) increases the first transition point is into the area with two steady states (transcritical point), this is the narrow region between the \(\lambda=1\) line and the left-hand side of the cusp. Increasing \(\lambda\) further takes the system to the second transition point inside the cusp with four solutions and finally back to a region with two solutions. For \(\mu=1.87\) (Figure 4, middle-left panel), as \(\lambda\) increases from zero, we move from one solution to three (point of the fold) and then to four solution as we pass through the transcritical point. As \(\lambda\) increases further, only the stable endemic and unstable trivial disease-free states survive. In this case, on entering the cusp from the left we have three solutions and this increase to four as the system is still within the cusp but passes through the \(\lambda=\gamma=1\) boundary.
To further illustrate these findings, we plot the temporal evolution of the prevalence as given by the solutions of Equation (III.8) starting from different initial conditions, at the bottom panel of Figure 6. The figure clearly shows that we have two stable non-zero steady states, as expected based on the top-right panel in the same figure. Hence, the long-term behaviour of the system strongly depends on the initial conditions. The stability of the steady states can be obtained by using the fact that the largest steady state is stable. Hence, for example, in the case of 3 positive steady state, the largest and smallest positive steady states are stable, while the middle one and zero are unstable.
Finally, it is possible to obtain an asymptotic expansion to one of the solutions of equation (III.8) in the limit of small \(\mu\) and \(\theta\). This leads to
\[x\simeq 1-\frac{\gamma}{\lambda}+\frac{\gamma(\lambda-\gamma)}{2 \lambda^{3}}\mu+\frac{\gamma(\lambda-\gamma)^{2}}{6\lambda^{4}}\theta.\] (III.19)
The expansion above shows again that the added infection due to complete simplicial 3-complex increase the value of the endemic steady sate. Under certain conditions we can again see that this is of order \(O(1/\lambda^{2})\) which again highlights how high values of the rate of infection across pairs limits the effect of higher-order structures.
## IV Discussion
Understanding how behavioural contagion unfolds on a population of interacting individuals is sociologically interesting, but also crucial for biological spreading given its strict interplay with the behavioural component that can facilitate/inhibit the contagion process [39; 40; 41; 42]. In this paper, we derived the exact equations for an \(SIS\)-like simplicial contagion dynamics on fully connected simplicial complexes where contagion events are mediated by \(k\)-simplices of arbitrary order that represent group interactions from 2-body to \((k+1)\)-body. While the exact model can be written out explicitly and can be evaluated numerically, we also provided a rigorous mean-field limit in the form of a single differential equation for the expected number of infected nodes. We then performed a detailed bifurcation analysis for the case of a simplicial contagion that involves complete simplicial 2- and 3-complexes. In both cases, we found that the novel effects brought by higher-order interactions can be effectively interpreted as as perturbations to the base case of an SIS spreading only through pairs of nodes. We analytically showed how the higher-order structure contributes to increasing the value of the endemic steady state. In particular, we found that this contribution is of
Figure 5: Bifurcation picture in the case \(\theta=2\) for \(\mu=1.8\) (left) and \(\mu=2.2\) (right). Increasing the value of \(\mu\), for this particular value of \(\theta\), moves the system from a simple transcritical bifurcation to fold bifurcation leading to bi-stability.
\(O(1/\lambda^{2})\), where \(\lambda\) is the rate of infection across a unique link. This clearly illustrates the vanishing contribution of simplicial contagion as the value of the pairwise infection rate increases.
Our work is the first one to provide an analytical treatment that is able to handle simplicial contagion that runs on simplices of order higher than 2. In fact, most of the recent studies in the emerging field of "higher-order network science" have been focusing on understanding the dynamical differences that emerge when the system descriptors move beyond pairs; in this view, triangles, encoding 3-body interactions, are the most natural starting point. In Ref. [21], the particular case of a simplicial contagion running exclusively on 1-simplices and simplices of another single higher order \(k>1\) was analysed, leading to similar results as for the simplest case of 1- and 2-simplices. In this manuscript, we made an important stride in this direction by allowing for all interactions at multiple orders up to 4 nodes, and also providing a general equation for infections running on systems featuring all interactions up to an arbitrary fixed order.
We found that there is a natural relation between the order of the simplex and the degree of the polynomial
Figure 6: Bifurcation picture in the case \(\theta=10\) for \(\mu=1.7\) (top-left), \(\mu=1.81\) (top-right), \(\mu=1.87\) (middle-left) and \(\mu=2.1\) (middle-right). Increasing the value of \(\mu\), for this particular value of \(\theta\), moves the system through the following four distinct bifurcation profiles: transcritical, fold bifurcation with fold after the transcritical point, fold bifurcation with fold before the transcritical point and bi-stability between an endemic and the disease-free steady state. The figure at the bottom shows solutions of the mean-field model, with up to 3-simplices. Each individual solution starts with a different initial state and it illustrates that the system has two stable steady states. Parameters are: \(\gamma=1\), \(\lambda=1.003\), \(\mu=1.81\) and \(\theta=10\). This plot corresponds to fixing \(\lambda=1.003\) in the top-right plot.
appearing on the right-hand side of the mean-field limit. For pairs only, the mean-field equation is driven by a quadratic polynomial, for up to complete simplicial 2-complexes is cubic, for complete simplicial 3-complexes is quartic and for complete simplicial \(M\)-complexes the polynomial has degree \((M+1)\). This naturally leads to the possibility of having multiple non-trivial (excluding the trivial disease-free) steady states, and it comes down to the number of positive solutions of polynomials. Indeed, as for previous results [21; 22; 23; 24; 19], when we consider a structure that allows for up to 2-simplices we find a bi-stability regime where a stable endemic and a stable disease-free states co-exist. Here, we showed that when allowing for 3-simplices as well it is possible to have four steady states, two of which are distinct strictly positive endemic steady states. Given these premises, we envision that going even higher with the order of the interactions could lead to having an arbitrary numbers of strictly positive and distinct endemic steady states. Note that multistability was already found in previous threshold-based contagion models on higher-order networks that presented community structure. Here, instead, we show that even the most trivial structure, a fully connected one, can lead to this phenomenon if one simply allows for interaction of higher-order. It is well known that the outcome of an epidemic process is the result of the complex interplay between the structure of the underlying network and epidemic dynamics that unfolds on top of it. Since the models analysed in this manuscript are all based on the assumption of a fully connected structure, we conclude that in this case the richness of model behaviour is mainly driven by the formulation of the dynamics through higher-order mechanisms.
We foresee different possible extensions to our work. The most natural continuation would be to investigate the possibility of making general statements about the global bifurcation picture of the mean-field model given by polynomials of arbitrary order, coupled with an extension of the results obtained here via perturbation theory. Next, one could make crucial realistic steps in two directions. On one hand, it would be interesting to rigorously explore the effects of simplicial contagion of arbitrary order on higher-order structures that are not complete. Preliminary calculations on Erdos-Renyi structures show that it is possible to derive a semi-rigorous Markov-chain at the population level (not to be confused with the microscopic-Markov chain approach [31; 22; 32]) where infection rates involving 1- and 2-simplices are approximated based on probabilistic arguments. On the other hand, it is well known that temporality of contacts can have a huge impact on the spreading of epidemics on complex networks [43], and higher-order structures can obviously present (even more) interesting temporal patterns at all orders [44; 45; 46]. The problem of understanding the impact of having group interactions that change in time on the dynamical processes that unfold on top has been addressed only by a few studies --based on simulations [47; 48]. Is it possible to devise a formal analytical treatment for (even trivial) time-varying higher-order structures? Starting from the simplest approach possible, one could keep the structure as it is (fully connected), but instead investigate the impact of having group events that get activated at different points in time according to some activity distribution.
## Appendix A
In this section, we detail the proof for deriving the mean-field limits for the complete simplicial 2- and 3-complexes.
### Mean-field limit for complete simplicial 2-complex
Proof of Theorem ii.1.: The expected number of infected \(m^{N}(t)=\mathbb{E}(I_{t}^{N})=\sum_{k=0}^{N}kp_{k}(t)\) at time \(t\) satisfies the equations
\[\frac{d}{dt}m^{N}(t) =\sum_{k=0}^{N}k\frac{d}{dt}p_{k}(t)\] \[=\sum_{k=0}^{N}k[a_{k-1}p_{k-1}(t)-(a_{k}+c_{k})p_{k}(t)+c_{k+1} p_{k+1}(t)]+\sum_{k=0}^{N}k\beta_{k-1}^{\triangle}p_{k-1}(t)-\sum_{k=0}^{N}k \beta_{k}^{\triangle}p_{k}(t). \tag{12}\]
Focus on the middle sum in (A.1). By the initial conditions on \(\beta_{k}^{\triangle}\) we have
\[\sum_{k=0}^{N}k\beta_{k-1}^{\triangle}p_{k-1}(t) =\sum_{k=3}^{N}k\beta_{k-1}^{\triangle}p_{k-1}(t)\] \[=\sum_{\ell=2}^{N-1}(\ell+1)\beta_{\ell}^{\triangle}p_{\ell}(t)= \sum_{\ell=2}^{N-1}\ell\beta_{\ell}^{\triangle}p_{\ell}(t)+\sum_{\ell=2}^{N-1} \beta_{\ell}^{\triangle}p_{\ell}(t)\] \[=\sum_{\ell=0}^{N}\ell\beta_{\ell}^{\triangle}p_{\ell}(t)+\sum_{ \ell=0}^{N}\beta_{\ell}^{\triangle}p_{\ell}(t),\quad\text{since $\beta_{N}^{ \triangle}=0$ as well.}\]
Substitute back in (A.1) to obtain
\[\frac{d}{dt}m^{N}(t)=\sum_{k=0}^{N}k[a_{k-1}p_{k-1}(t)-(a_{k}+c_{k})p_{k}(t)+c_ {k+1}p_{k+1}(t)]+\beta\sum_{\ell=2}^{N-1}(N-\ell)\binom{\ell}{2}p_{\ell}(t).\] (A.2)
We build on Eq. (A.2) and write everything in terms of moments of \(I_{t}^{N}\). We have
\[\frac{d}{dt}m^{N}(t) =\sum_{k=0}^{N}k[a_{k-1}p_{k-1}(t)-(a_{k}+c_{k})p_{k}(t)+c_{k+1}p_ {k+1}(t)]+\beta\sum_{\ell=2}^{N-1}(N-\ell)\binom{\ell}{2}p_{\ell}(t)\] \[=\tau N\mathbb{E}(I_{t}^{N})-\gamma\mathbb{E}(I_{t}^{N})-\tau \mathbb{E}((I_{t}^{N})^{2})+\beta\Big{(}(N-I_{t}^{N})\binom{I_{t}^{N}}{2} \Big{)}\] \[=\tau N\mathbb{E}(I_{t}^{N})-\gamma\mathbb{E}(I_{t}^{N})-\tau \mathbb{E}((I_{t}^{N})^{2})+\frac{\beta}{2}\big{(}N\mathbb{E}(I_{t}^{N}(I_{t} ^{N}-1))-\mathbb{E}((I_{t}^{N}-1)(I_{t}^{N})^{2}))\big{)}\] \[=\Big{(}\tau N-\gamma-\frac{N\beta}{2}\Big{)}\mathbb{E}(I_{t}^{N} )+\Big{(}-\tau+\frac{\beta(N+1)}{2}\Big{)}\mathbb{E}((I_{t}^{N})^{2})-\frac{ \beta}{2}\mathbb{E}((I_{t}^{N})^{3}).\]
Now we scale everything by \(N\) to obtain
\[\frac{d}{dt}\frac{m^{N}(t)}{N} =\Big{(}\tau N-\gamma-\frac{N\beta}{2}\Big{)}\mathbb{E}\Big{(} \frac{I_{t}^{N}}{N}\Big{)}+\Big{(}-\tau+\frac{\beta(N+1)}{2}\Big{)}N\mathbb{E }\Big{(}\Big{(}\frac{I_{t}^{N}}{N}\Big{)}^{2}\Big{)}-\frac{\beta}{2}N^{2} \mathbb{E}\Big{(}\Big{(}\frac{I_{t}^{N}}{N}\Big{)}^{3}\Big{)}.\]
At this point we substitute in the values \(\tau=\lambda N^{-1},\beta=\mu N^{-2}\) to obtain
\[\frac{d}{dt}\frac{m^{N}(t)}{N}\] \[=\Big{(}\lambda-\gamma\Big{)}\mathbb{E}\Big{(}\frac{I_{t}^{N}}{N} \Big{)}-\Big{(}\lambda-\frac{\mu}{2}\Big{)}\mathbb{E}\Big{(}\Big{(}\frac{I_{t }^{N}}{N}\Big{)}^{2}\Big{)}-\frac{\mu}{2}\mathbb{E}\Big{(}\Big{(}\frac{I_{t}^{ N}}{N}\Big{)}^{3}\Big{)}-\frac{\mu}{2N}(\mathbb{E}\Big{(}\frac{I_{t}^{N}}{N} \Big{)}-\mathbb{E}\Big{(}\Big{(}\frac{I_{t}^{N}}{N}\Big{)}^{2}\Big{)}\Big{)}\] \[=\Big{(}\lambda-\gamma\Big{)}\mathbb{E}\Big{(}\frac{I_{t}^{N}}{N} \Big{)}-\Big{(}\lambda-\frac{\mu}{2}\Big{)}\mathbb{E}\Big{(}\Big{(}\frac{I_{t }^{N}}{N}\Big{)}^{2}\Big{)}-\frac{\mu}{2}\mathbb{E}\Big{(}\Big{(}\frac{I_{t}^{ N}}{N}\Big{)}^{3}\Big{)}+\mathcal{O}\Big{(}\frac{\mu}{N}\Big{)}.\]
Going forward, we will be keeping track of the error (which vanishes as \(N\to\infty\)) to make sure it will be independent of the time parameter \(t\). We will need this fact at the end of the proof to interchange two limits. Above we used that \(0\leq I_{t}^{N}\leq 1\). At this point we can substitute in (II.10) and obtain
\[\frac{d}{dt}\frac{m^{N}(t)}{N}\] \[\qquad\qquad\qquad\qquad-\frac{\mu}{2}\mathbb{E}\Big{(}\frac{m^ {N}(t)}{N}+\eta_{t}^{N}\Big{)}^{3}\Big{)}+\mathcal{O}\Big{(}\frac{\mu}{N}\Big{)}\] \[=\Big{(}\lambda-\gamma\Big{)}\Big{(}\frac{m^{N}(t)}{N}+\mathbb{E }\eta_{t}^{N}\Big{)}+\Big{(}-\lambda+\frac{\mu}{2}\Big{)}\Big{(}\Big{(}\frac{m ^{N}(t)}{N}\Big{)}^{2}+2\frac{m^{N}(t)}{N}\mathbb{E}\eta_{t}^{N}+\mathbb{E}(( \eta_{t}^{N})^{2})\Big{)}\] \[\qquad\qquad\qquad-\frac{\mu}{2}\Big{(}\Big{(}\frac{m^{N}(t)}{N} \Big{)}^{3}+3\Big{(}\frac{m^{N}(t)}{N}\Big{)}^{2}\mathbb{E}\eta_{t}^{N}++3 \frac{m^{N}(t)}{N}\mathbb{E}((\eta_{t}^{N})^{2})+\mathbb{E}((\eta_{t}^{N})^{3} )\Big{)}+\mathcal{O}\Big{(}\frac{\mu}{N}\Big{)}.\]
Since the \(m^{N}(t)N^{-1}\) converge uniformly for \(t\in[0,T]\), the error
\[\varepsilon_{N,T}=\max_{p=1,2,3}\sup_{t\in[0,T]}\Big{\|}\Big{(}\frac{m^{N}(t)}{N }\Big{)}^{p}-(m_{I}(t))^{p}\Big{\|}_{\infty}\]
converges to \(0\) as \(N\to\infty\). We use this to write
\[\frac{d}{dt}\frac{m^{N}(t)}{N}=(\lambda-\gamma)(m_{I}(t)+\mathbb{E} \eta_{t}^{N})+\Big{(}-\lambda+\frac{\mu}{2}\Big{)}(m_{I}^{2}(t)+2m_{I}^{2}(t) \mathbb{E}\eta_{t}^{N}+\mathbb{E}((\eta_{t}^{N})^{2}))\\ -\frac{\mu}{2}(m_{I}^{3}(t)+3m_{I}^{2}(t)\mathbb{E}\eta_{t}^{N}+3 m_{I}(t)\mathbb{E}((\eta_{t}^{N})^{2})+\mathbb{E}((\eta_{t}^{N})^{3}))+\mathcal{O} \Big{(}\frac{\mu}{N}\vee\varepsilon_{N,T}\Big{)}.\]
For \(N\) large enough, \(\eta_{t}^{N}\leq 1\). This is because \(m^{N}(t)N^{-1}\leq 1\) and it will be uniformly close to its limit. Therefore, condition (II.16) implies
\[\delta_{N,T}=\sup_{t\leq T}\mathbb{E}(\eta_{t}^{N})\geq\sup_{t\leq T}\mathbb{ E}((\eta_{t}^{N})^{p}),\quad\text{ and }\delta_{N,T}\to 0.\]
Therefore, with a final substitution we have reached that for \(N\) large enough
\[\frac{d}{dt}\frac{m^{N}(t)}{N}=(\lambda-\gamma)m_{I}(t)+\Big{(}-\lambda+\frac {\mu}{2}\Big{)}m_{I}^{2}(t)-\frac{\mu}{2}m_{I}^{3}(t)+\mathcal{O}\Big{(}\frac{ \mu}{N}\vee\varepsilon_{N,T}\vee\delta_{N,T}\Big{)}.\] (A.3)
Note that as \(N\) grows, the error term in (A.3) will vanish in the limit. Therefore, we want to argue that
\[\lim_{N\to\infty}\frac{d}{dt}\frac{m^{N}(t)}{N}=\frac{d}{dt}\lim_{N\to\infty} \frac{m^{N}(t)}{N}=\frac{d}{dt}m_{I}(t).\] (A.4)
From the assumptions we have the pointwise convergence of \(\frac{m^{N}(t)}{N}\). From Eq. (A.3) we have that the derivatives of \(\frac{m^{N}(t)}{N}\) converge uniformly in any interval \([0,T]\). Then, from [49] Theorem 7.17 we can exchange limits and (A.4) is valid.
The proof of Eq. (II.18) now concludes by taking the limit as \(N\to\infty\) in (A.3) to obtain
\[\frac{d}{dt}m_{I}(t)=(\lambda-\gamma)m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda \Big{)}m_{I}^{2}(t)-\frac{\mu}{2}m_{I}^{3}(t).\] (A.5)
The proof of (II.17) follows from (II.10), (II.15) and the weak convergence of \(\{\eta_{t}\}_{t\leq T}\) to \(0\), guaranteed by (II.16).
### Mean-field limit for complete simplicial 3-complex
Proof of Theorem ii.2.: The ODE for the un-scaled first moment is
\[\frac{d}{dt}m^{N}(t) =\sum_{k=0}^{N}k\frac{d}{dt}p_{k}(t)\] \[=\sum_{k=0}^{N}k\left(a_{k-1}p_{k-1}(t)+c_{k+1}p_{k+1}(t)-(a_{k} +c_{k})p_{k}\right)+\beta\sum_{\ell=2}^{N-1}(N-\ell)\binom{\ell}{2}p_{\ell}(t)\] \[\qquad\qquad\qquad\qquad+\sum_{k=0}^{N}k\left(\delta_{k-1}^{ \square}p_{k-1}(t)-\delta_{k}^{\square}p_{k}(t)\right).\]
Note that the first line of that equality is actually treated and scaled by the argument in Theorem II.1, so we can control under the same assumptions its limit. So we focus on the sum on the second line. We compute, taking into accounts the boundaries,
\[\sum_{k=0}^{N}k\left(\delta_{k-1}^{\square}p_{k-1}(t)-\delta_{k}^ {\square}p_{k}(t)\right) =\sum_{k=4}^{N}k\delta_{k-1}^{\square}p_{k-1}(t)-\sum_{k=3}^{N-1} k\delta_{k}^{\square}p_{k}(t)\] \[=\sum_{k=3}^{N-1}(k+1)\delta_{k}^{\square}p_{k}(t)-\sum_{k=3}^{N- 1}k\delta_{k}^{\square}p_{k}(t)\] \[=\sum_{k=3}^{N-1}\delta_{k}^{\square}p_{k}(t)=\sum_{k=3}^{N-1} \delta\binom{k}{3}(N-k)p_{k}(t)\] \[=\delta\mathbb{E}\left(\frac{I_{t}^{N}(I_{t}^{N}-1)(I_{t}^{N}-2) }{6}(N-I_{t}^{N})\right).\]
In order to find the appropriate scaling for \(\delta\) we need to expand the expression in the expectation and it yields
\[\delta\mathbb{E}\left(\frac{I_{t}^{N}(I_{t}^{N}-1)(I_{t}^{N}-2)}{6}(N-I_{t}^{N}) \right)=\frac{\delta}{6}\mathbb{E}(-(I_{t}^{N})^{4}+(N+3)(I_{t}^{N})^{3}+(-3N- 2)(I_{t}^{N})^{2}+2NI_{t}^{N})\]
So we will scale \(\delta=\theta N^{-3}\). Then we have
\[\frac{\theta}{6}\mathbb{E}\left(-N\frac{(I_{t}^{N})^{4}}{N^{4}}+(N+3)\frac{(I_ {t}^{N})^{3}}{N^{3}}+\frac{(-3N-2)}{N}\frac{(I_{t}^{N})^{2}}{N^{2}}+2\frac{I_{t }^{N}}{N^{2}}\right)\]
To put everything together, recall that as before,
\[\tau=\lambda N^{-1},\quad\beta=\mu N^{-2},\quad\delta=\theta N^{-3}\quad I_{t }^{N}N^{-1}\sim m_{I}(t)+\eta_{N}(t),\quad\eta_{N}(t)\to 0\]
and divide the differential equation for \(m_{N}(t)\) by \(N\) and take a limit as \(N\to\infty\) to obtain
\[\frac{d}{dt}m_{I}(t) =\lim_{N\to\infty}\frac{d}{dt}\frac{m_{N}(t)}{N}=(\lambda-\gamma) m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda\Big{)}m_{I}(t)^{2}-\frac{\mu}{2}m_{I}(t)^{3}\] \[\qquad+\lim_{N\to\infty}\frac{1}{N}\frac{\theta}{6}\mathbb{E} \left(-N\frac{(I_{t}^{N})^{4}}{N^{4}}+(N+3)\frac{(I_{t}^{N})^{3}}{N^{3}}+ \frac{(-3N-2)}{N}\frac{(I_{t}^{N})^{2}}{N^{2}}+2\frac{I_{t}^{N}}{N^{2}}\right)\] \[=(\lambda-\gamma)m_{I}(t)+\Big{(}\frac{\mu}{2}-\lambda\Big{)}m_{ I}(t)^{2}+\left(\frac{\theta}{6}-\frac{\mu}{2}\right)m_{I}(t)^{3}-\frac{ \theta}{6}(m_{I}(t))^{4}.\] (A.6)
|
2302.00024 | Effects of the Spatial Extension of the Edge Channels on the
Interference Pattern of a Helical Josephson Junction | Josephson junctions (JJs) in the presence of a magnetic field exhibit
qualitatively different interference patterns depending on the spatial
distribution of the supercurrent through the junction. In JJs based on
two-dimensional topological insulators (2DTIs), the electrons/holes forming a
Cooper pair (CP) can either propagate along the same edge or be split into the
two edges. The former leads to a SQUID-like interference pattern, with the
superconducting flux quantum $\phi_0$ (where $\phi_0=h/2e$) as a fundamental
period. If CPs' splitting is additionally included, the resultant periodicity
doubles. Since the edge states are typically considered to be strongly
localized, the critical current does not decay as a function of the magnetic
field. The present paper goes beyond this approach and inspects a topological
JJ in the tunneling regime featuring extended edge states. It is here
considered the possibility that the two electrons of a CP propagate and explore
the junction independently over length scales comparable to the superconducting
coherence length. As a consequence of the spatial extension, a decaying pattern
with different possible periods is obtained. In particular, it is shown that,
if crossed Andreev reflections (CARs) are dominant and the edge states overlap,
the resulting interference pattern features oscillations whose periodicity
approaches $2\phi_0$. | Lucia Vigliotti, Alessio Calzona, Niccolò Traverso Ziani, F. Sebastian Bergeret, Maura Sassetti, Björn Trauzettel | 2023-01-31T19:00:17Z | http://arxiv.org/abs/2302.00024v1 | Effects of the Spatial Extension of the Edge Channels on the Interference Pattern of a Helical Josephson Junction
###### Abstract
Josephson junctions (JJs) in the presence of a magnetic field exhibit qualitatively different interference patterns depending on the spatial distribution of the supercurrent through the junction. In JJs based on two-dimensional topological insulators (2DTIs), the electrons/holes forming a Cooper pair (CP) can either propagate along the same edge or be split into the two edges. The former leads to a SQUID-like interference pattern, with the superconducting flux quantum \(\phi_{0}\) (where \(\phi_{0}=h/2e\)) as a fundamental period. If CPs' splitting is additionally included, the resultant periodicity doubles. Since the edge states are typically considered to be strongly localized, the critical current does not decay as a function of the magnetic field. The present paper goes beyond this approach and inspects a topological JJ in the tunneling regime featuring extended edge states. It is here considered the possibility that the two electrons of a CP propagate and explore the junction independently over length scales comparable to the superconducting coherence length. As a consequence of the spatial extension, a decaying pattern with different possible periods is obtained. In particular, it is shown that, if crossed Andreev reflections (CARs) are dominant and the edge states overlap, the resulting interference pattern features oscillations whose periodicity approaches \(2\phi_{0}\).
edge states; Josephson junctions; topological insulators; interference pattern; \(2\phi_{0}\) periodicity 1
Lucia Vigliotti, L.; Calzona, A.; Traverso Ziani, N.; Bergeret, F.S.; Sassetti, M.; Trauzettel, B. Effects of the Spatial Extension of the Edge Channels on the Interference Pattern of a Helical Josephson Junction.
## 1 Introduction
Topological phases of quantum systems have been at the forefront of research in condensed matter over the last two decades [1, 2]. One of these phases takes place in quantum spin Hall (QSH) insulators, which are two-dimensional topological insulators (2DTIs) hosting topologically protected and counter-propagating helical edge states on their boundary [3, 4, 5, 6, 7, 8, 9, 10]. The interplay of superconductivity and the QSH effect has been widely studied in view of applications in spintronics and in (topological) quantum computation [11, 12]. To this end, topological Josephson junctions (JJs) appear as fundamental building blocks [13, 14]. In a topological JJ, two superconducting electrodes are connected through the helical edge state channels of the QSH insulator. If the junction is pierced by a magnetic flux, it realizes a superconducting quantum interference setup [11, 12, 15]. The interference pattern, namely the
flux dependence of the critical current, characterizes JJs. Despite many theoretical studies on the interference patterns, there are still open questions, particularly when it comes to comparison with experiments [16; 17; 18; 19].
Many established models for JJs usually assume a local transmission of the Cooper pairs (CPs), i.e., the same trajectory for both electrons [20; 21]. A non-local transmission is also considered in the framework of edge transport via CPs' splitting over opposite edges [22; 23; 24; 25; 26]; this is allowed over length scales comparable with the superconducting coherence length \(\xi=\hbar v_{F}/\Delta\), with \(v_{F}\) as the Fermi velocity and \(\Delta\) as the superconducting gap, but usually discussed in the case of narrow edge states (see the upper panel of Figure 1). Specifically, strongly localized edge states give rise to a sinusoidal double-slit pattern, similar to a SQUID pattern, with no decay and a period \(\phi_{0}=h/2e\). However, the presence of interference oscillations with a doubled periodicity has been theoretically predicted [24; 27; 28] and experimentally observed in different setups [27; 29], including 2DTI-based JJs. In this case, the origin of this doubling relies on the CAR processes mentioned above: a non-local transmission of electrons (whose charge quantum is \(e\) versus the CP's charge quantum of \(2e\)) takes place [24; 28]. Depending on the amount of CPs' splitting, the resultant pattern features either alternating lobes with different heights or a weak cosine modulation around a constant value. On top of that, further single-electron effects leading to anomalous periodicities such as back-scattering [30] or forward-scattering [31] have been assessed. Lastly, it is worth recalling that a moderate spatial extent of the edge states affects the interference pattern with an overall decay in the magnetic field [11].
Although the scenario of extended edge states might be experimentally relevant, a theoretical model is still lacking. This is addressed in this article by means of a heuristic approach. An edge state with finite spatial extension can host different trajectories for the two electrons forming the CP, provided that they are not further away from each other than \(\xi\), injected either into a same edge (local Andreev reflection, LAR) or into different edges (crossed Andreev reflection, CAR) (see Figure 1). The wider the edges, the more pronounced will be the consequences on the interference pattern, which is highly sensitive to the electrons' path.
In this work, the combined effect of broadened edge channels, possibly overlapping, and the presence of CAR is explored. This introduces new options for the injection process, which are absent in the case of narrow edges and which enrich the possibilities of interference patterns. Differently from previous approaches assessing 2DTI junctions, a fast side lobe decay and different oscillation periods are obtained. Within this wide phenomenology, it is interesting to discuss whether CAR processes can bring along a doubled periodicity as in the case of localized edge states. It is found that the answer is affirmative and the regime to observe such periodicity is identified, finding that it requires a prevalence of CAR over LAR.
The main findings of this work are: the derivation of an expression that allows for the computation of supercurrents in the experimentally relevant scenario of topological Josephson junctions featuring edge states with finite spatial extent; and the introduction of a new way of taking into account the non-local character of CPs.
To simplify the problem, the following assumptions are made throughout the text: the interfaces between the superconductors and the non-superconducting region are assumed to be low transparent, leading to a sinusoidal current-phase relation; the two edge states are assumed to be symmetric in shape; and trajectories other than horizontal ones and inter-edge tunneling are neglected.
The article is structured as follows: in Section 2, the way of calculating the Josephson current through a junction is reviewed by introducing the gauge-invariant phase. Both local and non-local transfers of CPs are addressed. In Section 3, the model and approach to determine the supercurrent are introduced. Sections 4 and 5 are devoted to the presentation of the main results and a more general discussion, respectively. Finally, in Section 6, conclusions are drawn.
## 2 Local and Non-Local Transport of Cooper Pairs
Let us consider a two-dimensional JJ of length \(L\) and width \(w\) as in Figure 1. The intermediate region is tunnel-coupled to two superconductors on either side and for now, it is not needed to further specify its properties. A magnetic field is applied perpendicularly to the plane, \(\mathbf{B}=B\mathbf{e}_{\mathbf{z}}\). It is here assumed that the field is screened from the superconducting electrodes, and the gauge \(\mathbf{A}=-By\mathbf{e}_{\mathbf{x}}\), where \(\mathbf{A}\) is the vector potential, is chosen.
For the evaluation of the supercurrent, it is convenient to introduce the gauge-invariant phase difference, \(\delta\theta=\delta\varphi-(2\pi/\phi_{0})\)\(\int\mathbf{A}\cdot d\mathbf{r}\), with \(\delta\varphi\) the superconducting phase difference [20, 32]. The gauge-invariant phase picked by a CP being transmitted across the junction along a horizontal (ballistic) path \(y\), with \(-w/2<y<w/2\), is then given by
\[\delta\theta(y)=(\varphi_{r}-\varphi_{l})+\frac{2\pi\phi}{\phi_{0}}\frac{y}{w}. \tag{1}\]
Figure 1: Panel (**a**) shows a two-dimensional topological insulator (2DTI) of length \(L\) and width \(w\) laterally tunnel-coupled to two superconducting electrodes (in grey). Then, a magnetic field B is applied perpendicularly to the junction. The pink and green line represent the edge states on the boundaries of the 2DTI which are not proximitized. Each boundary hosts two counter-propagating channels with identical profiles. For clarity, only one colored shape per boundary is shown. The electrons forming a Cooper pair (CP) can be injected into a same edge via a local Andreev reflection (LAR) or into opposite edges via a crossed Andreev reflection (CAR). The CP splitting of the latter is allowed only if the superconducting coherence length \(\xi\), which is effectively the size of the CP, is larger or comparable with the width \(w\). Panel (**b**) shows the same sample for the case of extended edge states, which allows different trajectories for electrons injected in LAR and a wider range of possibilities for CAR. For clarity, both panels show LAR processes involving only the upper edge.
Here, \(\varphi_{\tau/l}\) labels the right/left superconducting phase, and \(\phi=BLw\). The second term in Equation (1) stems for the Aharonov-Bohm contribution, which for a single electron reads as \(\delta\theta^{AB}(y)=\frac{\pi\phi}{\phi_{0}}\frac{y}{w}\).
Concerning the computation of supercurrents, a standard approach is the Dynes and Fulton description [33], which holds in the tunneling regime (low-transparency interfaces) between the superconductors and their link under the assumption of the local nature of the supercurrent, flowing perpendicularly to the superconducting contacts. This means that the supercurrent density only depends on the \(y\) coordinate while the current flows in the \(x\) direction. In this case, for the junction just introduced, the total current is given by
\[I(\phi,\varphi_{\tau}-\varphi_{l})=\int_{-w/2}^{w/2}dy\,j(y)\sin\left[(\varphi _{\tau}-\varphi_{l})+\frac{2\pi\phi}{\phi_{0}}\frac{y}{w}\right], \tag{2}\]
with \(j(y)\) being the current density profile of the JJ. The total current therefore results from a weighted integration over sinusoidal current-phase-relations (stemming from the tunneling regime). Maximizing with respect to \((\varphi_{\tau}-\varphi_{l})\) and getting the absolute value, one obtains the critical current or interference pattern \(I_{C}(\phi)\). This procedure recovers well-known examples of interference patterns [21]: for an uniform current distribution \(j(y)=I_{C}/w\) (\(I_{C}\) being a constant), it reproduces the Fraunhofer pattern, \(I_{C}(\phi)/I_{C}(\phi=0)=|\sin\left(\pi\phi/\phi_{0}\right)/(\pi\phi/\phi_{0})|\); if there is only edge transport, and the edge channels are assumed to be extremely narrow, \(j(y)\propto[\delta(y-w/2)+\delta(y+w/2)]/2\), and one gets the SQUID pattern, \(I_{C}(\phi)/I_{C}(\phi=0)=|\cos\left(\pi\phi/\phi_{0}\right)|\).
Non-local transmission has been previously addressed in different realizations of JJs [27; 34; 35; 36; 37]. This work focuses on JJs featuring edge states, usually modeled as strongly localized. In these setups, a sample's width \(w\) comparable with the superconducting coherence length \(\xi\) allows an effective splitting of the CP via CAR. In this case, the Aharonov-Bohm phases acquired by the electrons propagating on opposite edges cancel, resulting in a flux-independent process. This leads to the \(2\phi_{0}\)-periodic even-odd effect in SQUID-like patterns, which has been experimentally observed [12; 23; 38] and theoretically addressed [24; 26; 28] in several works. Such phenomenology is shared by helical and non-helical edge channels, though remarkable qualitative differences emerge in response to variations of the parameters [28]. Besides the even-odd effect, it has been discussed how inter-channel scattering events give rise to anomalous flux dependencies leading, for instance, to multi-periodic magnetic oscillations [30] or to a further doubling of the period up to \(4\phi_{0}\)[31].
In the following, it is discussed how the current can be calculated in two-dimensional systems with extended edge states. Different interference patterns that depend on the extension of the edge states and on the width of the junction are found. The finite extension of the edge states leads to a Fraunhofer-like interference pattern, with a main central lobe and decaying side lobes. In particular, it is shown that, if CARs are dominant and the edge states overlap, the resulting periodicity approaches \(2\phi_{0}\).
## 3 Model for Extended Edge Channels
The system under consideration is a junction as the one depicted in the lower panel of Figure 1, consisting of a two-dimensional JJ where the weak link is a topological insulator sample of length \(L\) and width \(w\). This region is tunnel-coupled to the right and left superconductors. As previously, the phase of the right/left superconductor is denoted as \(\varphi_{\tau/l}\). Due to the proximity effect, in the superconducting parts, the edge states are gapped out. In the center region, the edge states are helical. In Figure 1, each boundary hosts two counter-propagating channels with identical profiles. For clarity, only one colored shape per boundary is shown.
Following the line of reasoning in the previous section, it is possible to write a phenomenological expression for the supercurrent that generalizes Equation (2) with two different coordinates for the two electrons:
\[I(\phi,\varphi_{r}-\varphi_{l}) = \int_{-w/2}^{w/2}dy_{\uparrow}\,dy_{\downarrow}\,j(y_{\uparrow},y _{\downarrow})\sin\left(\varphi_{r}-\varphi_{l}+\frac{\pi\phi}{\phi_{0}}\frac{(y _{\uparrow}+y_{\downarrow})}{w}\right) \tag{3}\] \[= \text{Im}\bigg{[}e^{i(\varphi_{r}-\varphi_{l})}\int_{-w/2}^{w/2}dy _{\uparrow}\,dy_{\downarrow}\,j(y_{\uparrow},y_{\downarrow})e^{i\frac{\pi \phi}{\phi_{0}}\frac{y_{\uparrow}}{w}}\,e^{i\frac{\pi\phi}{\phi_{0}}\frac{y_{ \uparrow}}{w}}\bigg{]},\]
where the fundamental ingredient is \(j(y_{\uparrow},y_{\downarrow})\), the weight function for the supercurrent, and \(y_{\uparrow}\), \(y_{\downarrow}\) label the horizontal trajectories of the two electrons of the CP, with \(\uparrow/\downarrow\) denoting the spin projection. For now, neither diagonal trajectories nor any inter-edge tunneling are included. The function \(j(y_{\uparrow},y_{\downarrow})\) parametrizes how each specific path contributes to the total supercurrent and encodes physical properties of the normal region, such as the supercurrent density profile, the number of transport channels, and the helical nature of the junction. If the size of the CP is comparable with the junction's width, the CP can be split into the two edges. Since broadened edge states are considered here, it is assumed that the CP can also be split into different trajectories within the same edge. To do so, an overall constraint function to take into account the CP's extent is included. The ansatz is hence the following
\[j(y_{\uparrow},y_{\downarrow})=e^{-|y_{\uparrow}-y_{\downarrow}|/\xi}[\underbrace {sg(y_{\uparrow})g(y_{\downarrow})+sg(-y_{\uparrow})g(-y_{\downarrow})}_{LAR}+ \underbrace{g(-y_{\uparrow})g(y_{\downarrow})}_{CAR}], \tag{4}\]
where \(g(\pm y)\) describes the spatial extension of the upper/lower edge states, which are assumed to be symmetric around \(y=0\) (see Figure 2 for a schematic view). Since \(j\) is a probability density, one can argue that \(g(y)\equiv|\psi_{l}(-y)|=|\psi_{u}(y)|\), where \(\psi_{u/l}(y)\) is the wavefunction of the upper/lower edge state. Our approach allows us to identify the CAR and LAR processes generalized to the case of extended edge states, as marked in Equation (4). There are two parameters to be discussed in the following: the coherence length \(\xi\) and the ratio of the amplitudes of LAR and CAR processes, denoted by \(s\). Indeed, due to helicity, LAR and CAR are clearly different processes. Since spin-flips are not considered, in the LAR case, spin-up and spin-down electrons have opposite directions of propagation. By contrast, in the CAR case, they are either right-movers or left-movers [28; 31].
Equations (3) and (4) show two main features: (1) the electrons can tunnel into the same edge but at different positions; (2) the electrons can tunnel into different edges acquiring Aharonov-Bohm phases that do not cancel each other out. The latter implies the unconventional possibility of flux-dependent CAR processes.
Figure 2: The colored broadened shapes represent the edges’ profiles: \(g(y)\) for the upper edge (pink) and \(g(-y)\) for the lower edge (green). They are therefore symmetric around \(y=0\) and overlap to some extent.
It is possible to check some limiting cases of Equations (3) and (4). Firstly, as to LAR processes, notice that they recover the Dynes and Fulton approach for \(\xi\ll w\)[33]. One can rewrite \(e^{-|y_{\uparrow}-y_{\downarrow}|/\xi}=e^{-\frac{|y_{\uparrow}-y_{\downarrow}|}{ \mu}\frac{w}{\xi}}\), where the first fraction takes values between \(0\) and \(1\). Then \(e^{-|y_{\uparrow}-y_{\downarrow}|/\xi}\xrightarrow{\xi\ll w}0\), and the current density vanishes unless \(y_{\uparrow}=y_{\downarrow}\equiv y\). In this case
\[j(y_{\uparrow},y_{\downarrow})=j(y)\propto|\psi_{u}(y)|^{2}+|\psi_{l}(y)|^{2}, \tag{5}\]
and the supercurrent recovers the form
\[I(\phi,\varphi_{r}-\varphi_{l})\propto\operatorname{Im}\left[e^{i(\varphi_{ r}-\varphi_{l})}\int_{-w/2}^{w/2}dy\left(|\psi_{u}(y)|^{2}+|\psi_{l}(y)|^{2} \right)e^{i\frac{2\pi\Phi}{\Phi_{0}}\frac{w}{\xi}}\right], \tag{6}\]
which is the Dynes and Fulton description in Equation (2). On the other hand, if \(\xi\gg w\)\(e^{-|y_{\uparrow}-y_{\downarrow}|/\xi}\xrightarrow{\xi\gg w}1\) and
\[j(y_{\uparrow},y_{\downarrow})\propto|\psi_{u}(y_{\uparrow})||\psi_{u}(y_{ \downarrow})|+|\psi_{l}(y_{\uparrow})||\psi_{l}(y_{\downarrow})|. \tag{7}\]
The integrals over \(y_{\uparrow}\) and \(y_{\downarrow}\) factorize
\[I(\phi,\varphi_{r}-\varphi_{l})\propto\operatorname{Im}\left[e^ {i(\varphi_{r}-\varphi_{l})}\middle(\int_{-w/2}^{w/2}dy_{\uparrow}\,|\psi_{u} (y_{\uparrow})|e^{i\frac{\pi\Phi}{\Phi_{0}}\frac{y_{\uparrow}}{w}}\int_{-w/2} ^{w/2}dy_{\downarrow}\,|\psi_{u}(y_{\downarrow})|e^{i\frac{\pi\Phi}{\Phi_{0} }\frac{y_{\downarrow}}{w}}+\right.\\ \left.\int_{-w/2}^{w/2}dy_{\uparrow}\,|\psi_{l}(y_{\uparrow})|e^{ i\frac{\pi\Phi}{\Phi_{0}}\frac{y_{\uparrow}}{w}}\int_{-w/2}^{w/2}dy_{ \downarrow}\,|\psi_{l}(y_{\downarrow})|e^{i\frac{\pi\Phi}{\Phi_{0}}\frac{y_{ \downarrow}}{w}}\right)\right], \tag{8}\]
corresponding to completely independent trajectories.
Regarding CAR, if the conduction can only happen on narrow edges (such as in the upper panel of Figure 1), then \(|\psi_{u/l}(y)|\propto\delta(y\mp w/2)\), which results in a flux-independent contribution to the critical current, as expected.
The dependence of \(s\) on temperature, bias, or length of the junction is not specified. Instead, it is treated as a phenomenological parameter. The next aim of this work is to identify a parameter regime in which the interference pattern is \(2\phi_{0}\)-periodic. Indeed, as the doubled periodicity is a widely studied signature, it is interesting to investigate new mechanisms that can give rise to it. It has been discussed that it usually emerges in the presence of Cooper pair splitting, which is a main feature of our description of broadened edge states. It is therefore expected to arise in our system. It turns out that, in our model, CAR-dominated transport is required to obtain this unusual periodicity of the maximal critical current. It will therefore be assumed that \(s<1\) from now on. (Notice that one of the two CAR contributions should be proportional to \(s^{2}\). Since \(s<1\), it will be neglected, and only the first order in \(s\) will be included.) Notably, it has been experimentally revealed in InSb JJs [23] that CAR processes are larger than expected and can even exceed LAR. Indeed, an entirely \(2\phi_{0}\)-periodic pattern, in combination with an enhanced conduction at both edges, was measured. Such \(2\phi_{0}\) periodicity can result from the flux-independent supercurrent due to CAR interfering with the standard \(\phi_{0}\)-periodic SQUID current. However, if LAR dominates over CAR, a \(\phi_{0}\) oscillation should be simultaneously present. Not being the case, it was concluded that the CAR amplitude was larger than the LAR one. It is interesting to identify rather general conditions under which CAR processes are more important than LAR processes, but this analysis goes beyond the scope of the present work.
So far, a formula has been constructed that generalizes the computation of a supercurrent given the current density to the case of extended edges and shown that it recovers the expected limiting cases. In the next section, it is shown that, in an appropriate parameter range and for
wide edge states, our model features an interference pattern approaching a \(2\phi_{0}\) Fraunhofer pattern.
## 4 Main Results
Here the interference pattern of the JJ is analyzed, discussing the role of the edges' profile \(g(y)\) and the two parameters \(\xi\) and \(s\). Given Equation (3), the pattern reads
\[I_{C}(\phi)=\left|\int_{-w/2}^{w/2}dy_{\uparrow}\,dy_{\downarrow}\,j(y_{ \uparrow},y_{\downarrow})e^{i\frac{\pi\phi}{\Psi_{0}}\frac{y_{\uparrow}}{w}}e ^{i\frac{\pi\phi}{\Psi_{0}}\frac{y_{\downarrow}}{w}}\right|, \tag{9}\]
with \(j(y_{\uparrow},y_{\downarrow})\) from Equation (4).
Figure 3 illustrates our results. It is assumed, as in the edge profile depicted in panel (a), \(\xi/w=0.85\) and \(s=0.2\). In Figure 3b, the total interference pattern is shown: it exhibits minima approaching multiples of \(2\phi_{0}\) and a fast decay. In panels (c)-(d), the LAR contribution and the CAR term (\(s=0\)) alone, respectively, are plotted in order to point out the essential interplay of the two processes. On the one hand, the LAR pattern qualitatively resembles a standard Fraunhofer pattern, although its minima are shifted away from \(\phi_{0}\) multiples as a consequence of the spatial extent of the edges. On the other hand, CAR processes feature a strong decay with a mild \(2\phi_{0}\) modulation on top. The \(2\phi_{0}\) oscillatory behavior in Figure 3b results from the interaction of these two terms. The interference patterns in Figure 3 are shown for a limited number of flux quanta which, however, allow us to appreciate that the minima in \(\phi=\phi_{0}\), \(3\phi_{0}\), \(5\phi_{0}\), which would be expected for a standard Fraunhofer-like pattern, are not visible. On the contrary, those in \(\phi=2\phi_{0}\), \(4\phi_{0}\) persist. For the sake of completeness, Figure 4 shows the plot in Figure 3b for a larger interval, confirming this trend. Notice that for \(\phi=4\phi_{0}\), \(6\phi_{0}\) and also \(\phi=8\phi_{0}\), the interference pattern does not completely vanish but presents very low peaks. These are reminiscent of the peak structure of the LAR contribution in Figure 3c, which is dominant over the CAR one for large values of \(\phi\) due to its slower decay.
In the next section, the robustness of the effect is discussed by providing plots of the interference pattern for different values of the parameters. From such analysis, the optimal parameter range for the doubled minima periodicity is inferred. It is summarized as follows. A high coherence length \(\xi\) (\(\xi\gtrsim w\)) is necessary because short values of \(\xi/w\) suppress the occurrence of CARs. This first requirement depends on the choice of the superconductors and on the sample width, and it is not hard to fulfill. The ratio \(s\) has to be low (at least \(s<1/2\)), which means that CARs are dominant over LARs. A significant overlap of the edge states is needed. Indeed, it can be shown that, if the edge states do not overlap, the full interference pattern starts to exhibit the features expected for perfectly localized edges: it approaches a SQUID-like pattern with the additional even-odd effect, which is overall \(2\phi_{0}\)-periodic but not decaying.
## 5 General Discussion
A more general discussion is provided here, commenting on the interference pattern obtained for a wider range of parameters. This allows us to substantiate the optimal ranges stated in the main text.
In Figure 5, two different shapes for the edge states are taken into consideration and plotted in the first column (the upper edge in pink, the lower one in green). They are both peaked at the opposite ends of the junction, around \(y=\pm w/2\), but feature a decreasing overlap from the top row to the bottom row. In the second column, the full interference pattern arising from both LAR and CAR is plotted. Each colored line corresponds to a different combination of \(\xi/w\) and \(s\), given the edge profile.
Figure 4: Resultant interference pattern \(I_{C}(\phi)\) in Figure 3b for a larger interval of flux quanta.
Figure 3: Resultant interference pattern \(I_{C}(\phi)\) (panel (**b**)) and the separated contributions of LARs (panel (**c**)) and CARs (panel (**d**)) for the edge profile in panel (**a**), \(\xi/w=0.85\) and \(s=0.2\).
The functional forms used for the edge shape are the following. (Fine details about the functional form describing the edge profile are not crucial.)
Panel (a) in Figure 5:
\[g(-y)=\frac{0.05}{|y/w+0.4|^{2}+0.05}\theta(-y/w+0.5)\theta(y/w+0.5). \tag{10}\]
Panel (c) in Figure 5:
\[g(-y)=e^{-(y/w-0.45)^{2}/(2*0.2^{2})}\theta(-y/w+0.5)\theta(y/w+0.5). \tag{11}\]
The upper edge (pink in Figures 1 and 2) is simply given by \(g(y)\).
Let us start from the first row, where the same edge shape as in the main text is considered. In panel (b), the orange curve is the one presented in Section 4, with a high coherence length (\(\xi/w=0.85\)) and the prominent presence of CAR (\(s=0.2\)). It is used here as a reference plot.
The black curve shows the opposite regime, where CAR is almost missing (\(s=0.7\)). Due to \(\xi/w\ll 1\), one falls back into the Dynes and Fulton description, with the interference pattern approaching the one of a supercurrent density \(g(y)^{2}+g(-y)^{2}\). This tends to give rise to a standard Fraunhofer-like pattern, with more minima. If \(s\) is decreased, LAR is also suppressed, and the entire pattern is lowered.
Increasing the coherence length, the possibility of a nonlocal propagation of the two electrons is enhanced, but it is not sufficient to get a clearly visible \(2\phi_{0}\) periodicity. A LAR-dominated scenario (a weak suppression \(s\sim 1\)), despite high coherence lengths, still leads to Fraunhofer-like behavior with more minima and a slower decay (light blue curve, with \(\xi/w=0.85\) and \(s=0.7\)). This pinpoints the additional demand for a prominent presence of CAR (small \(s\), at least \(s<1/2\)).
The second row allows us to discuss the importance of the overlap of edge states, which is quite small in panel (c). Tuning the parameters as in the black and light blue curves gives a result similar to panel (b). This is expected to be the case, since it has already been commented they are not in the appropriate parameter regime to appreciate the non-local transport significantly. Hence, a more or less pronounced overlap becomes irrelevant. However, using the optimal parameters (orange curve, with \(\xi/w=0.85\) and \(s=0.2\)), the periodicity just starts to approach \(2\phi_{0}\), but the minima are shallow. This shows the need for highly extended states to see the \(2\phi_{0}\) periodicity.
## 6 Conclusions
In this work, a way of computing the supercurrent across a helical Josephson junction that generalizes the previous theoretical approaches by assuming spatially extended edge states has been provided. Strongly localized edge states give rise to a pattern with no decay and a period \(\phi_{0}\) or \(2\phi_{0}\) if Cooper pair splitting over the edges is allowed. Including a finite extent of the edge states in the model gives rise to wider possibilities. A heuristic expression that allows for a simple and intuitive calculation of the Josephson current as a function of the magnetic flux through the junction has been presented. Such expression comes from the generalization to two coordinates of the Dynes and Fulton one, which assumes the electrons within a CP follow the same path. Indeed, it has been argued how, as a consequence of their spatial extension, the edge states can host different trajectories for the two electrons. Some limiting cases have been discussed, showing that the new approach correctly captures the already studied regimes.
The Dynes and Fulton hypothesis of sinusoidal current-phase relation, which holds in the tunneling regime between the superconductors and their link, is maintained by the new approach. A further assumption is that the two edge states have a symmetric profile. On the other hand, the specific functional form describing the edge profile is not crucial. The role played by LAR and CAR processes in determining the interference pattern has been analyzed, together with the importance of the edge states' broadening and of the superconducting coherence length, which represents the size of the Cooper pair. The periodicity of the resultant
Figure 5: Flux-dependence of the critical supercurrent considering different values of coherence length, a more or less prevalent role played by LAR and CARs (represented by the parameter \(s\)), and different profiles for the edge states. In each row, the first panel (**a**,**c**) shows profile \(g(y)\) and the symmetric \(g(-y)\). In the second column (panels **b**,**d**), the full interference pattern, arising from both LARs and CARs, is plotted. Different colors are associated with different values of \(\zeta\) and \(s\); see the plot legend.
pattern may vary from \(\phi_{0}\) to \(2\phi_{0}\), depending on the dominating process. In particular, the cause for the doubled periodicity has been identified with the non-local transport arrangement. In our case, such non-locality is allowed by the extent of the edges. More specifically, the predicted effects are relevant when the two electrons within a pair can separately explore the two edges and the latter are widely broadened through the junction.
This proposal can help in developing a more realistic description of experimentally realized systems and opens up further generalizations and refinements, such as a justification at a microscopic level of the phenomenological parameters involved.
Conceptualization, A.C., N.T.Z., and B.T.; Investigation, L.V. and A.C.; Validation, L.V., A.C., N.T.Z., F.S.B., M.S., and B.T.; Writing--original draft, L.V.; Writing--review & editing, L.V., A.C., N.T.Z., F.S.B., M.S., and B.T. All authors have read and agreed to the published version of the manuscript
This work was supported by the "Dipartimento di Eccellenza MIUR 2018-2022" and the funding of the European Union-NextGenerationEU through the "Understanding even-odd criticality" project, in the framework of the Curiosity Driven Grant 2021 of the University of Genova. This work was further supported by the Wurzburg-Dresden Cluster of Excellence ct.qmat, EXC2147, project-id 390858490, and the DFG (SFB 1170). We also thank the Bavarian Ministry of Economic Affairs, Regional Development, and Energy for financial support within the High-Tech Agenda Project "Bausteine fur das Quanten Computing auf Bassi topologischer Materialen". The work of F.S.B. was partially supported by the Spanish AEI through project PID2020-114252GB-I00 (SPIRIT), the Basque Government through grant IT-1591-22, and IKUR strategy program. F.S.B. acknowledges the A. v. Humboldt Foundation for funding and Prof. Trauzettel for the kind hospitality during his stay at Wurzburg University.
Not applicable.
Not applicable.
Not applicable.
Not applicable.
The authors declare no conflict of interest.
## Abbreviations
The following abbreviations are used in this manuscript:
\begin{tabular}{l l} JJ & Josephson junction \\
2DTI & two-dimensional topological insulator \\ CP & Cooper pair \\ SQUID & superconducting quantum interference device \\ CAR & crossed Andreev reflection \\ QSH & quantum spin Hall \\ LAR & local Andreev reflection \\ \end{tabular}
|
2310.20683 | Generalized locally compact models for approximate groups | We give a proof of the existence of generalized definable locally compact
models for arbitrary approximate subgroups via an application of topological
dynamics in model theory. Our construction is simpler and shorter than the
original one obtained by Hrushovski in ``Beyond the Lascar group'', and it uses
only basic model theory (mostly spaces of types and realizations of types). The
main tools are Ellis groups from topological dynamics considered for suitable
spaces of types. However, we need to redevelop some basic theory of topological
dynamics for suitable ``locally compact flows'' in place of (compact) flows. We
also prove that the generalized definable locally compact model which we
constructed is universal in an appropriate category. We note that the main
result yields structural information on definable generic subsets of definable
groups, with a more precise structural result for generics in the universal
cover of $\textrm{SL}_2(\mathbb{R})$. | Krzysztof Krupiński, Anand Pillay | 2023-10-31T17:48:47Z | http://arxiv.org/abs/2310.20683v1 | # Generalized locally compact models for approximate groups
###### Abstract.
We give a proof of the existence of _generalized definable locally compact models_ for arbitrary approximate subgroups via an application of topological dynamics in model theory. Our construction is simpler and shorter than the original one by Hrushovski [11] and it uses only basic model theory (mostly spaces of types and realizations of types). The main tools are Ellis groups from topological dynamics considered for suitable spaces of types. However, we need to redevelop some basic theory of topological dynamics for suitable "locally compact flows" in place of (compact) flows. We also prove that the generalized definable locally compact model which we constructed is universal in an appropriate category. We note that the main result yields structural information on definable generic subsets of definable groups, with a more precise structural result for generics in the universal cover of \(\operatorname{SL}_{2}(\mathbb{R})\).
Key words and phrases:Approximate subgroup, generalized locally compact model, Ellis group 2020 Mathematics Subject Classification: 03C60, 03C98, 37B02, 54H11, 11B30, 11P70, 20A15, 20N99 The first author was supported by the Narodowe Centrum Nauki grant no. 2016/22/E/ST1/00450. The second author was supported by NSF grants DMS-1665035, DMS-1760212, and DMS-2054271.
## 1. Introduction
The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\). The _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\), and the _full structure_ of a graph \(G\) is a _full structure_ of a graph \(G\),
logic (namely, a factorization through a suitable space of types), the modified notion of definablity used in generalized definable locally compact models is not so transparent and our characterization explains its nature.
It is also interesting to consider the special case when the approximate subgroup \(X\) in question generates a group \(G\) in finitely many steps. Then the target space of our generalized [definable] locally compact model is compact, and it is in fact the classical [resp. externally definable] generalized Bohr compactification of \(G\) defined by Glasner (see [11] and [12]). This special case can be seen as a structural result on arbitrary definable generic subsets of definable groups. We will discuss it in Section 5.
In Section 2, we give the necessary preliminaries, including all basic definitions in model theory. Section 3 is devoted to our construction of a generalized definable locally compact model of an arbitrary definable approximate subgroup. In Section 4, we prove universality of our model and discuss related things. In Section 5, we focus on the situation when the approximate subgroup in question generates a group in finitely many steps, so in fact the situation of a definable, symmetric, generic subset of a definable group. We explain why the main result can be thought of as a structural result on such generic subsets and we use it to obtain more precise structural information on generics in the universal cover of \(\operatorname{SL}_{2}(\mathbb{R})\). Moreover, our analysis of \(\widetilde{\operatorname{SL}_{2}(\mathbb{R})}\) leads to an answer to some natural question stated at the end of Section 4, and also shows that the weakening of Newelski's conjecture proposed in Section 5 holds for \(\widetilde{\operatorname{SL}_{2}(\mathbb{R})}\).
We finish this introduction with a brief history connecting Hrushovski's approach from [13] and our approach via topological dynamics. Topological dynamics methods were introduced to model theory by Newelski in [14]. Since then many papers have appeared in this subject, in particular some deep connections and applications to model-theoretic components of groups and to strong types were obtained in [12, 13, 14, 15]. Motivated by this work, Hrushovski developed in [13] a parallel theory of definability patterns structures. Then, in [13], he redeveloped it in the context of local logics introduced by himself in [13], and used it to prove the existence of generalized definable locally compact models. In this paper, we return to the topological dynamics approach, but for locally compact flows instead of usual compact flows, and we provide a shorter and simpler proof of Hrushovski's theorem with further information on universality.
## 2. Preliminaries
In this section, we recall some basic notions from model theory and topological dynamics to make the main construction self-contained.
### Model theory
Let us fix a _language_ (or _signature_) \(L\), i.e. a collection of relation, function, and constant symbols. Using those symbols together with quantifiers, variables, and logical symbols, one constructs recursively the set of all _\(L\)-formulas_; _\(L\)-sentences_ are \(L\)-formulas without free variables. An _\(L\)-structure_ is a set \(M\) together with interpretations of all the symbols of \(L\). For example, if \(L\) consists of just one binary function symbol, then any group is an \(L\)-structure. Let us fix an arbitrary \(L\)-structure \(M\).
For any \(L\)-sentence \(\varphi\), \(M\models\varphi\) means that \(\varphi\) is true in \(M\). For any subset \(A\) of \(M\) we can expand the language \(L\) to \(L_{A}\) be adding constant symbols for the members of \(A\), which are then interpreted in \(M\) as the corresponding elements of \(A\). For an \(L_{A}\)-formula \(\varphi(x)\), \(\varphi(M)\) denotes the _set of realizations_ of \(\varphi(x)\) in \(M\), i.e. \(\varphi(M):=\{a\in M^{|x|}:M\models\varphi(a)\}\). By an _\(A\)-definable subset_ of \(M\) [more generally, of a Cartesian power \(M^{n}\)] we mean the set of realizations in \(M\) of an \(L_{A}\)-formula \(\varphi(x)\) with one [resp. \(n\)] free variables \(x\). By a _definable subset_ we mean an \(M\)-definable subset. For example, the centralizer of an element of a group is a definable subset of this group.
An \(L\)-structure \(N\) is an _elementary superstructure_ of \(M\) (symbolically, \(M\prec N\)) if \(M\subseteq N\) and for every \(L\)-formula \(\varphi(x_{1},\ldots,x_{n})\) and tuple \((a_{1},\ldots,a_{n})\in M^{n}\) we have \(M\models\varphi(a_{1},\ldots,a_{n})\iff N\models\varphi(a_{1},\ldots,a_{n})\).
By a _type_ over \(A\subseteq M\) in variables \(x\) we mean a consistent collection \(\pi(x)\) of \(L_{A}\)-formulas, where \(\pi(x)\) being _consistent_ means that for any finitely many formulas \(\varphi_{1}(x),\ldots,\varphi_{n}(x)\in\pi(x)\) we have \(M\models(\exists x)(\varphi_{1}(x)\wedge\cdots\wedge\varphi_{n}(x))\). The compactness theorem tells us that this is equivalent to the property that \(\pi(x)\) has a realization \(a\) in some \(N>M\), i.e. \(N\models\varphi(a)\) for all \(\varphi(x)\in\pi(x)\), which will be denoted by \(a\models\pi\). A _complete type_ over \(A\) in variables \(x\) is a type \(p(x)\) over \(A\) such that for every \(L_{A}\)-formula \(\varphi(x)\) we have \(\varphi(x)\in p\) or \(\neg\varphi(x)\in p\). This is equivalent to saying that \(p=\operatorname{tp}(a/A):=\{\varphi(x)\text{ an }L_{A}\text{-formula}:N \models\varphi(a)\}\) for some tuple \(a\) in some \(N>M\). The set of all complete types over \(A\) in variables \(x\) is denoted by \(S_{x}(A)\). This is a compact, zero-dimensional topological spaces with a basis of open sets given by the \(L_{A}\)-formulas, i.e. any \(L_{A}\)-formula \(\varphi(x)\) yields a basic open set \([\varphi(x)]:=\{p\in S(A):\varphi(x)\in p\}\). Identifying formulas (modulo equivalence) with the definable sets that they define, complete types over \(A\) can be treated as ultrafilters in the Boolean algebra of \(A\)-definable subsets of \(M^{|x|}\), and then the topology on \(S_{x}(A)\) is just the Stone space topology. We will often omit \(x\) in \(S_{x}(A)\). The above discussion applies also to any elementary extension of \(M\) in place of \(M\).
For a given cardinal \(\kappa\), we say that \(N\succ M\) is _\(\kappa\)-saturated_ if for every \(B\subseteq N\) of cardinality \(<\kappa\), every \(p\in S(B)\) has a realization in \(N\). Using the compactness theorem, for every \(\kappa\) there exists \(N\succ M\) which is \(\kappa\)-saturated. In this paper, we will work with \(N\succ M\) which is \(|M|^{+}\)-saturated, and since it is very convenient to work with realizations of types from \(S(N)\), we will be taking them in an \(|N|^{+}\)-saturated \(\mathfrak{C}\succ N\).
An _externally definable_ subset \(D\) of \(M\) is the intersection of \(M\) with a definable subset of \(N\) (where \(N\succ M\) is \(|M|^{+}\)-saturated), that is \(D=M\cap\varphi(N)\) for some formula \(\varphi(x)\) with parameters from \(N\). This definition does not depend on the choice of \(N\). By a _complete external type_ over \(M\) we mean an ultrafilter on the Boolean algebra of externally definable subsets of \(M\); all these types form a Stone space \(S_{\operatorname{ext}}(M)\). It is very convenient to identify \(S_{\operatorname{ext}}(M)\) with a space of complete types in the usual sense. In order to do that, take an \(|M|^{+}\)-saturated \(N\succ M\). Then \(S_{\operatorname{ext}}(M)\) is homeomorphic with the space \(S_{M}(N)\) of all complete types \(p\in S(N)\) which are _finitely satisfiable_ in \(M\), i.e. for any \(\varphi(x)\in p\), \(\varphi(x)\) is realized by some element or tuple of elements of \(M\); more precisely, \(S_{M}(N)\ni p\mapsto\{\varphi(M):\varphi(x)\in p\}\in S_{\operatorname{ext}}(M)\) is a homeomorphism.
One can also restrict the context to a given formula \(\varphi(x)\) or to the set of realizations \(X:=\varphi(M)\). By \(S_{\varphi(x)}(N)\) or \(S_{X}(N)\) we denote the space of complete types \(p\in S(N)\) which contain the formula \(\varphi(x)\)1; \(S_{X,M}(N)\) will stand for the space of complete types over \(N\) which contain \(\varphi(x)\) and are finitely satisfiable in \(M\). Then \(S_{X,M}(N)\) is homeomorphic with the space \(S_{X,\operatorname{ext}}(M)\) of ultrafilters on the Boolean algebra of externally definable subsets of \(X\). All of it applies also to any superset \(C\) of \(N\) (contained in \(\mathfrak{C}\)) in place of \(N\). In particular, we have the spaces \(S_{M}(C)\) and \(S_{X,M}(C)\) homeomorphic with \(S_{\operatorname{ext}}(M)\) and \(S_{X,\operatorname{ext}}(M)\), respectively.
Footnote 1: There is a clash of notation here, as \(S_{M}(N)\) and \(S_{X}(N)\) have two different meanings when \(X=M\). This should not cause any confusion, as the symbol \(S_{M}(N)\) will always denote the space of complete types over \(N\) finitely satisfiable in \(M\), whereas the symbol \(S_{X}(N)\) (for \(X:=\varphi(M)\)) will always denote the space of complete types over \(N\) which contain the formula \(\varphi(x)\) defining \(X\).
In this paper, we will need to extend this context to so-called \(\bigvee\)-definable sets, i.e. unions of possibly infinitely many definable sets. More precisely, let \(\{X_{i}\}_{i\in I}\) be an upward directed family of \(A\)-definable sets for some \(A\subseteq M\), and let \(G:=\bigcup_{i\in I}X_{i}\). Then by \(S_{G,M}(N)\) we mean \(\bigcup_{i\in I}S_{X_{i},M}(N)\) with the topology inherited from \(S_{M}(N)\). Since each \(S_{X_{i},M}(N)\) is clearly an open subset of \(S_{M}(N)\), we get that \(U\subseteq S_{G,M}(N)\) is open if and only if \(U\cap S_{X_{i},M}(N)\) is open in \(S_{X_{i},M}(N)\) for all \(i\in I\); so \(F\subseteq S_{G,M}(N)\) is closed if and only if \(F\cap S_{X_{i},M}(N)\) is closed in
\(S_{X_{i},M}(N)\) for all \(i\in I\). As each \(S_{X_{i},M}(N)\) is clearly a clopen subset of \(S_{G,M}(N)\) which is a compact (Hausdorff) space, we get
**Fact 2.1**.: \(S_{G,M}(N)\) _is a locally compact (Hausdorff) space._
In this paper, compact and locally compact spaces are Hausdorff by definition.
Note that the space \(S_{G,M}(N)\) is homeomorphic with the space \(S_{G,\mathrm{ext}}(M)\) of those ultrafilters on the Boolean algebra generated by the externally definable subsets of \(G\) which are concentrated on some \(X_{i}\).
As before, the above discussion applies also to any superset \(C\) of \(N\) in place of \(N\). In particular, we have the locally compact space \(S_{G,M}(C)\) homeomorphic with \(S_{G,\mathrm{ext}}(M)\).
By \(S_{G}(M)\) we mean \(\bigcup_{i\in I}S_{X_{i}}(M)\) with the topology inherited from \(S(M)\), where \(S_{X_{i}}(M)\) is the space of complete types over \(M\) containing a formula defining \(X_{i}\). As above, this is a locally compact space which is witnessed by the clopen compact sets \(S_{X_{i}}(M)\).
If it is not specified, all the parameters and elements are taken from \(\mathfrak{C}\). Sets of parameters are usually denoted by capital letters, while elements or tuples of elements by lower case letters. For any \(a,b,A\), we will write \(a\equiv_{A}b\) to express that \(\mathrm{tp}(a/A)=\mathrm{tp}(b/A)\).
For any \(A\subseteq B\) and \(a\) we say that \(\mathrm{tp}(a/B)\) is a _coheir_ over \(A\) if it is finitely satisfiable in \(A\) (i.e. any finite collection of formulas in \(\mathrm{tp}(a/B)\) has a realization in \(A\)). The following remark will be used many times.
_Remark 2.2_.: If \(a\equiv_{A}b\) and \(\mathrm{tp}(c/A,a,b)\) is a coheir over \(A\), then \(a\equiv_{A,c}b\).
Proof.: If not, then there is an \(L_{A}\)-formula \(\varphi(x,y)\) such that \(\mathfrak{C}\models\varphi(a,c)\wedge\neg\varphi(b,c)\). Since \(\mathrm{tp}(c/A,a,b)\) is a coheir over \(A\), there is \(c^{\prime}\in A\) such that \(\mathfrak{C}\models\varphi(a,c^{\prime})\wedge\neg\varphi(b,c^{\prime})\), so \(a\not\equiv_{A}b\), a contradiction.
Note that if a type \(\pi(x)\) (over any set of parameters) is finitely satisfiable in \(A\), then it extends to a global type \(p\in S(\mathfrak{C})\) finitely satisfiable in \(A\). For that it is enough to take any ultrafilter \(\mathcal{U}\) on the Boolean algebra of all subsets of \(A\) such that \(\{\varphi(\mathfrak{C})\cap A:\varphi(x)\in\pi\}\subseteq\mathcal{U}\) and to define \(p\) as \(\{\varphi(x)\in L_{\mathfrak{C}}:\varphi(\mathfrak{C})\cap A\in\mathcal{U}\}\).
**Fact 2.3**.: _For any type \(p\in S_{M}(N)\) and superset \(B\) of \(N\) there is a unique extension \(\tilde{p}\in S(B)\) of \(p\) which is finitely satisfiable in \(M\)._
Proof.: This follows from the fact that \(\{\varphi(M):\varphi(x)\in p\}\) is an ultrafilter on the Boolean algebra of externally definable subsets of \(M\) (which in turn follows from \(|M|^{+}\)-saturation of \(N\)).
The model theory context in this paper will be the following: \(X\) will be an approximate subgroup definable in a structure \(M\) (as defined in the introduction), \(N\succ M\) an \(|M|^{+}\)-saturated elementary extension of \(M\), \(\mathfrak{C}\succ N\) a big (at least \(|N|^{+}\)-saturated) elementary extension of \(N\) (the so-called _monster model_), \(G:=\langle X\rangle\) -- the group generated by \(X\), \(\bar{X}=X(\mathfrak{C})\) -- the interpretation of \(X\) in \(\mathfrak{C}\), \(\bar{G}:=\langle\bar{X}\rangle\) -- the group generated by \(\bar{X}\). Thus, we can use the above notation \(S_{G,M}(N)\) for the family \(\{X_{i}\}_{i\in I}:=\{X^{n}:n\in\omega\}\).
Regarding the monster model \(\mathfrak{C}\), besides saturation one usually also assumes strong homogeneity with respect to a sufficiently big cardinal. Using the compactness theorem, it is easy to construct \(\mathfrak{C}\succ N\) which is \(|N|^{+}\)-saturated and _strongly \(|N|^{+}\)-homogeneous_ which means that for any subset \(A\subseteq\mathfrak{C}\) of cardinality at most \(|N|\), any elementary map \(f\colon A\to\mathfrak{C}\) (that is \(\mathfrak{C}\models\varphi(a)\iff\mathfrak{C}\models\varphi(f(a))\) for every formula \(\varphi(x)\in L\) and finite tuple \(a\) from \(A\)) extends to an automorphism of \(\mathfrak{C}\). Although the arguments in this paper do not require strong \(|N|^{+}\)-homogeneity of \(\mathfrak{C}\), it is convenient to assume it and use (without even mentioning) the fact that then for every \(A\) of cardinality at most \(|N|\) and finite tuples \(a,b\) we have that \(a\equiv_{A}b\) if and only if \(b=f(a)\) for some \(f\in\mathrm{Aut}(\mathfrak{C}/A)\) (the pointwise stabilizer of \(A\)).
### Topological dynamics
Topological dynamics studies _flows_, that is pairs \((G,Y)\) where \(Y\) is a compact space and \(G\) is a topological group acting continuously on \(Y\). We focus on the case when \(G\) is discrete; then continuity of the action just means that the action is by homeomorphisms.
In this paper, we will have to extend the context to the case when \(Y\) is a certain special locally compact space on which \(G\) acts by homeomorphisms, namely \(Y:=S_{G,M}(N)\) from the end of the last subsection. We will develop all the necessary theory in this context providing all the details (including proofs) in Section 3. So here we only briefly recall some notions and facts in the classical context of (compact) flows. They will not be used in the main construction (except Fact 2.5), and we give them only to show what is well-known in topological dynamics. This classical context of (compact) flows is however sufficient when the approximate subgroup \(X\) generates \(G\) in finitely many steps, and this is the context of Section 5.
In the rest of this subsection, \((G,X)\) will be an arbitrary flow (so \(X\) and \(G\) have nothing to do with the approximate subgroup \(X\) considered above). Classical references for Ellis semigroups and groups are [1, 2]. A very good concise exposition with proofs can be found in Appendix A of [11].
**Definition 2.4**.: The _Ellis semigroup_ of the flow \((G,X)\), denoted by \(E(X)\), is the closure of the collection of functions \(\{\pi_{g}:g\in G\}\) (where \(\pi_{g}\colon X\to X\) is given by \(\pi_{g}(x):=gx\)) in the space \(X^{X}\) equipped with the product topology, with composition as the semigroup operation.
\(E(X)\) is a compact left topological semigroup (i.e. the semigroup operation is continuous in the left coordinate). The following fundamental fact was proved by Ellis (e.g. see Corollary 2.10 and Propositions 3.5 and 3.6 of [10], or Fact A.8 of [11]).
**Fact 2.5**.: _Let \(S\) be a semigroup equipped with a quasi-compact \(T_{1}\) topology such that for any \(s_{0}\in S\) the map \(s\mapsto ss_{0}\) is a continuous and closed mapping (the latter follows immediately from continuity and compactness if \(S\) is Hausdorff). Then there is a minimal left ideal \(\mathcal{M}\) in \(S\) (i.e. a minimal set such that \(S\mathcal{M}=\mathcal{M}\)), and every such \(\mathcal{M}\) satisfies the following._
1. _For any_ \(p\in\mathcal{M}\)_,_ \(Sp=\mathcal{M}p=\mathcal{M}\) _is closed._
2. \(\mathcal{M}\) _is the disjoint union of the sets_ \(u\mathcal{M}\) _with_ \(u\) _ranging over_ \(J(\mathcal{M}):=\{u\in\mathcal{M}:u^{2}=u\}\)_._
3. _For each_ \(u\in J(\mathcal{M})\)_,_ \(u\mathcal{M}\) _is a group with identity element_ \(u\)_, where the group operation is the restriction of the semigroup operation on_ \(S\)_._
4. _All the groups_ \(u\mathcal{M}\) _(for_ \(u\in J(\mathcal{M})\)_) are isomorphic, even when we vary the minimal left ideal_ \(\mathcal{M}\)_._
Applying this to \(S:=E(X)\), the isomorphism type of the groups \(u\mathcal{M}\) (or just any of these groups) from the above fact is called the _Ellis group_ of the flow \(X\).2
Footnote 2: This terminology is used by model theorists. In topological dynamics, the Ellis group of a pointed minimal \(G\)-flow \((X,x_{0})\) is the subgroup of those elements \(\eta\) in the Ellis group (in our sense) of the universal \(G\)-ambit \(\beta G\) for which \(\eta x_{0}=x_{0}\), but we will not use this definition in the paper.
**Definition 2.6**.: For \(B\subseteq E(X)\) and \(a\in E(X)\), \(a\circ B\) is defined as the set of all points \(c\in E(X)\) for which there exist nets \((b_{i})_{i}\) in \(B\) and \((g_{i})_{i}\) in \(G\) such that \(\lim g_{i}=a\) and \(\lim g_{i}b_{i}=c\).
Basic properties of \(\circ\) are contained in Facts A.25-A.29 of [11]. In particular, \(a\circ B\) is closed.
Now, choose any minimal left ideal \(\mathcal{M}\) of \(E(X)\) and an idempotent \(u\in\mathcal{M}\).
**Definition 2.7**.: For \(A\subseteq u\mathcal{M}\), define \(\operatorname{cl}_{\tau}(A):=(u\circ A)\cap u\mathcal{M}\).
For the proofs of the facts listed below see Facts A.30-A.40 in [11].
**Fact 2.8**.: \(\operatorname{cl}_{\tau}\) _is a closure operator on \(u\mathcal{M}\). The topology given by \(\operatorname{cl}_{\tau}\) is called the \(\tau\)-topology._
**Fact 2.9**.: \(u\mathcal{M}\) _with the \(\tau\)-topology is a compact \(T_{1}\) semitopological group (i.e. multiplication is separately continuous) which does not depend (up to topological isomorphism) on the choice of \(\mathcal{M}\) and \(u\in J(\mathcal{M})\)._
**Fact 2.10**.: \(H(u\mathcal{M}):=\bigcap_{V}\mathrm{cl}_{\tau}(V)\)_, where \(V\) ranges over the \(\tau\)-neighborhoods of \(u\) in \(u\mathcal{M}\), is a \(\tau\)-closed normal subgroup of \(u\mathcal{M}\), and \(u\mathcal{M}/H(u\mathcal{M})\) is a compact Hausdorff group. In fact, \(u\mathcal{M}/H(u\mathcal{M})\) is the universal (or greatest) Hausdorff quotient of \(u\mathcal{M}\)._
An ambit is a flow \((G,X,x_{0})\) with a distinguished point \(x_{0}\in X\) with dense orbit. An important classical \(G\)-flow is the universal \(G\)-ambit \(\beta G\), i.e. the space of ultrafilters on the Boolean algebra of all subsets of \(G\) with the action of \(G\) by left translation and the distinguished ultrafilter being the principal ultrafilter of the neutral element. Then the Ellis semigroup \(E(\beta G)\) is naturally isomorphic to \((\beta G,*)\), where \(*\) is given by \(U\in p*q\iff\{g\in G:g^{-1}U\in q\}\in p\). Model theory provides a transparent and very useful formula for \(*\). Namely, treat \(G\) as a group definable in \(M:=G\) equipped with the full structure (i.e. with predicates for all subsets of \(G\)). Then \(\beta G=S_{G,\mathrm{ext}}(M)\) is naturally identified with the space of types \(S_{G}(M)\) and it turns out that \(p*q=\mathrm{tp}(ab/M)\), where \(b\models q\), \(a\models p\), and \(\mathrm{tp}(a/M,b)\) is the unique extension of \(p\) which is a coheir over \(M\). More generally, if we have a group \(G\) definable in a structure \(M\), then \(S_{G,\mathrm{ext}}(M)\) is a G-ambit with the action of \(G\) by left translation and the distinguished element being the ultrafilter of the neutral element. Identifying \(S_{G,\mathrm{ext}}(M)\) with \(S_{G,M}(N)\) (where \(N>M\) is \(|M|^{+}\)-saturated), it turns out that the Ellis semigroup \(E(S_{G,M}(N))\) is isomorphic to \((S_{G,M}(N),*)\) with \(*\) given by \(p*q:=\mathrm{tp}(ab/N)\), where \(b\models q\), \(a\models p\), and \(\mathrm{tp}(a/N,b)\) is the unique extension of \(p\) which is a coheir over \(M\) (an isomorphism \((S_{G,M}(N),*)\to E(S_{G,M}(N))\) is given by \(p\mapsto l_{p}\), where \(l_{p}(q):=p*q\))._
## 3. Generalized definable locally compact model
This section is devoted to a new self-contained construction of a generalized definable locally compact model of an arbitrary definable approximate subgroup. Let us start from the context and precise definition of generalized definable locally compact models.
For a map \(f\colon G\to H\) from a group (or even semigroup) \(G\) to a group \(H\), \(\mathrm{error}_{r}(f):=\{f(y)^{-1}f(x)^{-1}f(xy):x,y\in G\}\) and \(\mathrm{error}_{l}(f):=\{f(xy)f(y)^{-1}f(x)^{-1}:x,y\in G\}\). For \(C\subseteq H\), we write \(f\colon G\to H:C\) if \(\mathrm{error}_{r}(f)\cup\mathrm{error}_{l}(f)\subseteq C\) and we say that \(f\) is a _quasi-homomorphism with an error set \(C\)_. Note that if \(C\) is normal in \(H\) (which will be the case in our context), then \(\mathrm{error}_{r}(f)\subseteq C\) if and only if \(\mathrm{error}_{l}(f)\subseteq C\). Also, if \(f\colon G\to H:C\), then \(f(e_{G})\in C^{-1}\) and \(f(x^{-1})\in f(x)^{-1}C^{-2}\). Sometimes one assumes that \(f(e_{G})=e_{H}\), and this will be satisfied in our construction.
From now on, take the situation and notation described at the end of Subsection 2.1.
**Definition 3.1**.: A _generalized definable locally compact model of \(X\)_ is a quasi-homomorphism \(f\colon G\to H:C\) for some symmetric, normal, compact subset \(C\) of a locally compact group \(H\) such that:
1. for every compact \(V\subseteq H\) there is \(i\in\mathbb{N}\) with \(f^{-1}[V]\subseteq X^{i}\);
2. for every \(i\in\mathbb{N}\), \(f[X^{i}]\) is relatively compact in \(H\);
3. there is \(l\in\mathbb{N}\) such that for any compact \(Z,Y\subseteq H\) with \(C^{l}Y\cap C^{l}Z=\emptyset\) the preimages \(f^{-1}[Y]\) and \(f^{-1}[Z]\) can be separated by a definable set.
If we drop item (3), we get the notion of _generalized locally compact model_.
_Remark 3.2_.: In item (2) of the above definition, it is equivalent to require that \(f[X]\) is relatively compact.
Proof.: We have \(f[X^{2}]\subseteq f[X]^{2}C\), so, by compactness of \(\mathrm{cl}(f[X])\) and \(C\), we get \(\mathrm{cl}(f[X^{2}])\subseteq\mathrm{cl}(f[X])^{2}C\). More generally, by induction, \(\mathrm{cl}(f[X^{i}])\subseteq\mathrm{cl}(f[X])^{i}C^{i-1}\) for all \(i\geq 1\), and since the last set is compact, so is \(\mathrm{cl}(f[X^{i}])\)
_Remark 3.3_.: If \(f\colon G\to H:C\) is a generalized definable locally compact model of \(X\), then there is \(l\in\mathbb{N}\) such that for any compact \(Z,Y\subseteq H\) with \(C^{l}Y\cap C^{l}Z=\emptyset\) there are didjoint definable subsets \(D_{1}\) and \(D_{2}\) of some \(X^{n}\) with \(f^{-1}[Y]\subseteq D_{1}\) and \(f^{-1}[Z]\subseteq D_{2}\).
Proof.: It follows from items (1) and (3) of the definition.
**Fact 3.4**.: _Let \(f\colon G\to H:C\) be a generalized locally compact model of \(X\)._
1. _For every neighborhood_ \(U\) _of_ \(e_{H}\)_,_ \(f^{-1}[UC]\) _is generic in the sense that finitely many left translates of_ \(f^{-1}[UC]\) _cover_ \(X\)_._
2. _For every relatively compact neighborhood_ \(U\) _of_ \(e_{H}\)_,_ \(Y:=f^{-1}[UC]\) _is commensurable with_ \(X\) _and_ \(YY^{-1}\) _is an approximate subgroup commensurable with_ \(X\)_._
Proof.: (1) Take an open neighborhood \(W\) of \(e_{H}\) such at \(W^{-1}W\subseteq U\). By compactness of \(\operatorname{cl}(f[X])\), we have that \(\operatorname{cl}(f[X])\) is covered by finitely many translates \(a_{1}W\), \(\dots\), \(a_{n}W\).
For every \(i\leq n\) with \(f^{-1}[a_{i}W]\neq\emptyset\) choose \(g_{i}\in f^{-1}[a_{i}W]\). We will show that \(X\) is covered by the finitely many translates \(g_{i}f^{-1}[UC]\) for \(i\leq n\) such that \(f^{-1}[a_{i}W]\neq\emptyset\).
Consider any \(g\in X\); then \(g\in f^{-1}[a_{i}W]\) for some \(i\leq n\), i.e. \(f(g)\in a_{i}W\). Write \(g\) as \(g_{i}h\). Then \(f(g)=f(g_{i})f(h)f(h)^{-1}f(g_{i})^{-1}f(g_{i}h)\in a_{i}Wf(h)C\). Hence, the last set has a nonempty intersection with \(a_{i}W\). So \(f(h)\in W^{-1}WC\subseteq UC\). Therefore, \(h\in f^{-1}[UC]\), and so \(g\in g_{i}f^{-1}[UC]\).
(2) The fact that finitely many left translates of \(f^{-1}[UC]\) cover \(X\) follows from (1). The fact that finitely many left translates of \(X\) cover \(f^{-1}[UC]\) follows from item (1) of Definition 3.1 and the assumption that \(X\) is an approximate subgroup (which clearly implies that \(X^{i}\) is covered by finitely many left translates of \(X\) for every every \(i\in\mathbb{N}\)). The very final part about \(YY^{-1}\) easily follows, as \(Y\subseteq YY^{-1}\subseteq X^{i}\) for some \(i\).
Thus, as mentioned in the introduction, a generalized locally compact model of \(X\) allows us to recover \(X\) up to commensurability as the preimage of any compact neighborhood of \(C\).
### Topological dynamics of \(\boldsymbol{S_{G,M}(N)}\)
Recall that for \(N\subseteq C\subseteq\mathfrak{C}\) by \(S_{M}(C)\) we denote the space of complete types over \(C\) which are finitely satisfiable in \(M\), and by \(S_{G,M}(C)\) the subspace of \(S_{M}(C)\) consisting of all types concentrated on some \(X^{n}\). For a formula \(\varphi(x)\) in \(L_{C}\) (with \(N\subseteq C\subseteq\mathfrak{C}\)) such that \(\varphi(\mathfrak{C})\subseteq\bar{X}^{n}\) for some \(n\in\mathbb{N}\), we have that \([\varphi(x)]:=\{p\in S_{M}(C):\varphi(x)\in p\}\subseteq S_{X^{n},M}(C)\) is a basic open set in \(S_{G,M}(C)\). For any \(g\in\bar{G}\), by \(\varphi(g^{-1}x)\) [resp. \(\varphi(xg^{-1})\)] we mean an \(L_{C,g}\)-formula defining the set \(g\varphi(\mathfrak{C})\) [resp. \(\varphi(\mathfrak{C})g\)]. (Note that by the definability of the approximate subgroup \(X\), it is clear that the sets \(g\varphi(\mathfrak{C})\) and \(\varphi(\mathfrak{C})g\) are indeed definable over \(C,g\).)
The goal of this subsection is to extend the classical theory briefly mentioned in Subsection 2.2 to the action of \(G\) on the locally compact space \(S_{G,M}(N)\) by left translation, that is \(g\operatorname{tp}(a/N):=\operatorname{tp}(ga/N)\). First of all, this action is by homeomorphisms, because a basis of open sets in \(S_{G,M}(N)\) consists of the sets of the form \([\varphi(x)]\) for formulas \(\varphi(x)\) in \(L_{N}\) with \(\varphi(\mathfrak{C})\subseteq\bar{X}^{n}\) for some \(n\), and \(g[\varphi(x)]=[\varphi(g^{-1}x)]\) is still a basic open set for any \(g\in G\).
Define a binary operation \(*\) on \(S_{G,M}(N)\) by
\[p*q:=\operatorname{tp}(ab/N),\text{ where }b\models q,\,a\models p,\,\text{and } \operatorname{tp}(a/N,b)\text{ is a coheir over }M\text{.}\]
**Lemma 3.5**.: \((S_{G,M}(N),*)\) _is a left topological semigroup, that is, \(*\) is well-defined, associative, and left continuous._
Proof.: Take pairs \((a,b)\) and \((a^{\prime},b^{\prime})\) both as in the definition of \(*\). Thus, \(b^{\prime}\equiv_{N}b\), so, by \(|N|^{+}\)-saturation of \(\mathfrak{C}\), we can find \(a^{\prime\prime}\) such that \((a^{\prime\prime},b)\equiv_{N}(a^{\prime},b^{\prime})\). Then \(\operatorname{tp}(a^{\prime\prime}/N,b)\) is an extension of \(p\) which is a coheir over \(M\). Therefore, \(\operatorname{tp}(a^{\prime\prime}/N,b)=\operatorname{tp}(a/N,b)\) by Fact 2.3. Hence, \(\operatorname{tp}(ab/N)=\operatorname{tp}(a^{\prime\prime}b/N)=\operatorname{tp }(a^{\prime}b^{\prime}/N)\). We have proved that \(*\) is well-defined.
To check that \(*\) is associative, consider any \(p,q,r\in S_{G,M}(N)\) and pick \(a\models p\), \(b\models q\), and \(c\models r\) such that both \(\operatorname{tp}(b/N,c)\) and \(\operatorname{tp}(a/N,b,c)\) are coheirs over \(M\). Then \(\operatorname{tp}(a/N,bc)\) is a
coheir over \(M\), so \(abc\models p*(q*r)\). On the other hand, \(\operatorname{tp}(a/N,b)\) and \(\operatorname{tp}(ab/N,c)\) are both coheirs over \(M\), so \(abc\models(p*q)*r\). Thus, \(p*(q*r)=(p*q)*r\).
It remains to show left continuity of \(*\). Fix \(q\in S_{G,M}(N)\) and pick \(b\models q\). Then \(b\in\bar{X}^{m}\) for some \(m\). Consider any basic open set \(U=[\varphi(x)]\subseteq S_{X^{n},M}(N)\) for some \(n\). The goal is to show that \(V:=\{p\in S_{G,M}(N):p*q\in U\}\) is open. It is clear that \(V\subseteq S_{X^{n+m},M}(N)\). By Fact 2.3, the restriction map \(r\colon S_{X^{n+m},M}(N,b)\to S_{X^{n+m},M}(N)\) is a homeomorphism. So it is enough to show that \(r^{-1}[V]\) is open.
For any \(a\) such that \(\operatorname{tp}(a/N,b)\) is a coheir over \(M\) we have
\[\operatorname{tp}(a/N,b)\in r^{-1}[V]\iff\operatorname{tp}(ab/N)\in U\iff \mathfrak{C}\models\varphi(ab).\]
Therefore, \(r^{-1}[V]=[\varphi(xb)]\) is a basic open set in \(S_{X^{n+m},M}(N,b)\).
Note that \(G\) naturally embeds into \(S_{G,M}(N)\) via \(g\mapsto\operatorname{tp}(g/N)\), which we will be using without mentioning.
_Remark 3.6_.: For every \(n\) the set \(X^{n}\) is dense in \(S_{X^{n},M}(N)\), and \(G\) is dense in \(S_{G,M}(N)\).
Proof.: The second part follows from the first. The first part is clear, as for any nonempty basic open set \([\varphi(x)]\) in \(S_{X^{n},M}(N)\) there is \(a\in\varphi(M)\subseteq X^{n}\).
For \(p\in S_{G,M}(N)\) let \(l_{p}\colon S_{G,M}(N)\to S_{G,M}(N)\) be defined by \(l_{p}(q):=p*q\). Since the next fact will not be used in the rest of the construction, we leave a proof as an exercise.
**Proposition 3.7**.: _The assignment \(p\mapsto l_{p}\) yields an isomorphism between \(S_{G,M}(N)\) and the Ellis semigroup \(E(S_{G,M}(N))\) defined in the same way as for (compact) flows in Subsection 2.2._
The following property of the semigroup operation \(*\), which follows immediately from the definition of \(*\) and the assumption that \(X\) is symmetric, will play an essential role in the rest of the construction.
_Remark 3.8_.: Whenever \(q\in S_{X^{n},M}(N)\), \(r\in S_{X^{m},M}(N)\), and \(p*q=r\), then \(p\in S_{X^{n+m},M}(N)\).
**Lemma 3.9**.: _There exists a left ideal \(\mathcal{M}\) of \(S_{G,M}(N)\) for which the set \(\mathcal{M}\cap S_{X,M}(N)\) is minimal (nonempty)._
Proof.: By compactness of \(S_{X,M}(N)\) and Zorn's lemma, it is enough to show that for every \(s\in S_{X,M}(N)\) the set \((S_{G,M}(N)*s)\cap S_{X,M}(N)\) is closed. By Remark 3.8, \((S_{G,M}(N)*s)\cap S_{X,M}(N)=(S_{X^{2},M}(N)*s)\cap S_{X,M}(N)\).
Since \(r_{s}\colon S_{X^{2},M}(N)\to S_{X^{3},M}(N)\) given by \(p\mapsto p*s\) is a continuous map between compact Hausdorff spaces, we get that \(S_{X^{2},M}(N)*s=r_{s}[S_{X^{2},M}(N)]\) is closed, and so is \((S_{X^{2},M}(N)*s)\cap S_{X,M}(N)\).
**Proposition 3.10**.: _There exists a minimal left ideal in \(S_{G,M}(N)\)._
Proof.: We can clearly find a left ideal \(\mathcal{M}\) as in the conclusion of Lemma 3.9 which is of the form \(S_{G,M}(N)*s_{0}\) for some \(s_{0}\in S_{X,M}(N)\). We will show that it is minimal. For that take any \(s\in\mathcal{M}\). It is enough to show that \((S_{G,M}(N)*s)\cap S_{X,M}(N)\neq\emptyset\) (as then \(s_{0}\in(S_{G,M}(N)*s)\cap S_{X,M}(N)\) by the choice of \(\mathcal{M}\)).
We have that \(s\in S_{X^{n},M}(N)\) for some \(n\); then \(s=\operatorname{tp}(b/N)\) for some \(b\in\bar{X}^{n}\).
**Claim 1**.: \(\bar{X}\cdot b^{-1}\cap G\neq\emptyset\)_._
Proof.: Since \(X\) is an approximate subgroup, \(X^{n}\subseteq Xg_{1}\cup\dots\cup Xg_{n}\) for some \(g_{1},\dots,g_{k}\in G\). Hence, \(\bar{X}^{n}\subseteq\bar{X}g_{1}\cup\dots\cup\bar{X}g_{n}\), i.e. \(\bar{X}^{n}\subseteq\bar{X}G\). Since \(X\) is symmetric, we get that \((\bar{X}^{n})^{-1}\subseteq\bar{X}^{-1}G\), so \(b^{-1}\in\bar{X}^{-1}G\), that is \(\bar{X}b^{-1}\cap G\neq\emptyset\).
(claim)
By this claim, \(\bar{X}\cdot b^{-1}\cap G\) extends to an ultrafilter on the Boolean algebra generated by externally definable subsets of \(G\) which is concentrated on \(X^{n+1}\). This ultrafilter corresponds to a unique \(\operatorname{tp}(a/N,b)\) finitely satisfiable in \(M\). Then \(\operatorname{tp}(a/N)\ast\operatorname{tp}(b/N)=\operatorname{tp}(ab/N)\in S_{X,M}(N)\), so \((S_{G,M}(N)\ast s)\cap S_{X,M}(N)\neq\emptyset\).
**Lemma 3.11**.: _Any minimal left ideal of \(S_{G,M}(N)\) is closed and intersects \(S_{X,M}(N)\)._
Proof.: Let \(\mathcal{M}\) be a minimal left ideal of \(S_{G,M}(N)\). The proof of Proposition 3.10 shows that any left ideal (in particular \(\mathcal{M}\)) of \(S_{G,M}(N)\) intersects \(S_{X,M}(N)\). To show closedness of \(\mathcal{M}\), first note that \(\mathcal{M}=S_{G,M}(N)\ast s\) for some \(s\in S_{G,M}(N)\). Of course, \(s\in S_{X^{n},M}(N)\) for some \(n\). By Remark 3.8, for every \(m\in\mathbb{N}\), \((S_{G,M}(N)\ast s)\cap S_{X^{m},M}(N)=(S_{X^{n+m},M}(N)\ast s)\cap S_{X^{m},M }(N)\), and the last set is closed by compactness of \(S_{X^{n+m},M}(N)\) and left continuity of \(\ast\).
From now on, we will often skip writing \(\ast\).
**Lemma 3.12**.: _Let \(\mathcal{M}\) be an arbitrary minimal left ideal of \(S_{G,M}(N)\). Then \(J(\mathcal{M}):=\{u\in\mathcal{M}:u^{2}=u\}\) is nonempty and \(\mathcal{M}\) is the union of all \(u\mathcal{M}\) with \(u\) ranging over \(J(\mathcal{M})\)._
Proof.: Consider any \(p\in\mathcal{M}\). Then \(p\in S_{X^{n},M}(N)\) for some \(n\). By minimality of \(\mathcal{M}\), the set \(P:=\{q\in\mathcal{M}:qp=p\}\) is nonempty. Thus, by left continuity of \(\ast\) and Remark 3.8, \(P\) is a nonempty closed subsemigroup of \(\mathcal{M}\) contained in \(S_{X^{2n},M}(M)\), so it is compact. By Zorn's lemma, there exists a minimal closed subsemigroup \(K\) of \(P\).
Consider any \(u\in K\). We will show that \(u^{2}=u\). Then, since \(u\in P\), we get \(p=up=u(up)\in u\mathcal{M}\), so we will be done.
Let \(Q:=\{q\in K:qu=u\}\). By compactness of \(K\) and left continuity of \(\ast\), \(Ku\) is a nonempty closed subsemigroup of \(K\), so \(Ku=K\) as \(K\) is minimal. Hence, \(Q\neq\emptyset\). Since \(Q\) is a closed subsemigroup of \(K\), we get that \(Q=K\), in particular \(u\in Q\).
The proofs of the next two lemmas are identical to the proofs in the classical context, and the proof of the third lemma below is an easy elaboration on the proof in the classical context. We will only prove the first one, as the other two are not needed in our construction. For the proofs in the classical context see [12, Fact A.8].
**Lemma 3.13**.: _For any minimal left ideal \(\mathcal{M}\) of \(S_{G,M}(N)\) and \(u\in J(\mathcal{M})\), the set \(u\mathcal{M}\) is a group (with \(\ast\) as group operation)._
Proof.: \(u\mathcal{M}\) is clearly closed under \(\ast\), \(u\in u\mathcal{M}\) is a neutral element in \(u\mathcal{M}\), and \(\ast\) is associative. Now, consider any \(p\in u\mathcal{M}\). By minimality of \(\mathcal{M}\), there is \(q\in\mathcal{M}\) with \(qp=u\). Then \((uq)p=u^{2}=u\). Thus, \(u\mathcal{M}\) is a semigroup with left identity and left inverses, and so it is a group.
**Lemma 3.14**.: _For every minimal left ideal \(\mathcal{M}\) of \(S_{G,M}(N)\) and any distinct \(u,v\in J(\mathcal{M})\), \(u\mathcal{M}\cap v\mathcal{M}=\emptyset\)._
**Lemma 3.15**.: _For any minimal left ideals \(\mathcal{M},\mathcal{N}\) of \(S_{G,M}(N)\) and \(u\in J(\mathcal{M}),v\in J(\mathcal{N})\) the groups \(u\mathcal{M}\) and \(v\mathcal{N}\) are isomorphic._
Therefore, the isomorphism type of all these groups \(u\mathcal{M}\) (or just any of these groups separately) can be called the _Ellis group_ of \(S_{G,M}(N)\).
Now, the goal is to equip the Ellis group with a topology, which will be called the \(\tau\)-topology. We will do it in the same way as in the classical context. Below, for \(P\subseteq S_{G,M}(N)\) the closure of \(Q\) will be denoted by \(\overline{Q}\), while for a subset \(Q\) of the Ellis group the closure with respect to the \(\tau\)-topology will be denoted by \(\operatorname{cl}_{\tau}(Q)\).
**Definition 3.16**.: For any \(p\in S_{G,M}(N)\) and \(Q\subseteq S_{G,M}(N)\) we define \(p\circ Q\) as the set of all \(r\in S_{G,M}(N)\) for which there are nets \((g_{i})_{i\in I}\) in \(G\) and \((q_{i})_{i\in I}\) in \(Q\) such that \(\lim_{i}g_{i}=u\) and \(\lim_{i}g_{i}q_{i}=r\).
All the easy observations A.25 - A.35 from [14] work with exactly the same proofs for \(S_{G,M}(N)\) in place of the Ellis semigroup of a compact flow. In particular, we have
**Lemma 3.17**.: _Given a minimal left ideal \(\mathcal{M}\unlhd S_{G,M}(N)\) and idempotent \(u\in\mathcal{M}\), the operator \(\mathrm{cl}_{\tau}\) on subsets of \(u\mathcal{M}\) given by \(\mathrm{cl}_{\tau}(Q):=(u\mathcal{M})\cap(u\circ Q)=u(u\circ Q)\) is a closure operator on \(u\mathcal{M}\)._
Now, fix a minimal left ideal \(\mathcal{M}\) of \(S_{G,M}(N)\) and \(u\in J(\mathcal{M})\).
**Definition 3.18**.: By the _\(\tau\)-topology_ we mean the topology on the Ellis group \(u\mathcal{M}\) given by the closure operator \(\mathrm{cl}_{\tau}\) from Lemma 3.17.
Fact A.33 of [14] tells us that the \(\tau\)-topology on \(u\mathcal{M}\) is coarser than the subspace topology inherited from \(S_{G,M}(N)\). The next lemma (see Fact A.35 of [14]) yields an important connection between limits in both these topologies.
**Lemma 3.19**.: _If \((a_{i})_{i}\) is a net in \(u\mathcal{M}\) converging to \(a\in\overline{u\mathcal{M}}\), then \((a_{i})_{i}\) converges to \(ua\) in the \(\tau\)-topology._
**Definition 3.20**.: Let us say that a topological space \(P\) is _quasi locally compact_ if every point \(p\in P\) has a neighborhood \(U\) whose closure is quasi-compact.
**Proposition 3.21**.: _The Ellis group \(u\mathcal{M}\) is a quasi locally compact \(T_{1}\) space._
Proof.: The fact that it is \(T_{1}\) is easy: \(\mathrm{cl}_{\tau}(\{p\})=u(u\circ\{p\})=\{u(up)\}=\{p\}\). Quasi locally compactness requires more work.
Consider any \(q\in u\mathcal{M}\). Then \(q\in S_{X^{n},M}(N)\) for some \(n\). Also, \(u\in S_{X^{m},M}(N)\) for some \(m\). Let
\[P:=S_{X^{n+m},M}(N)^{c}\cap u\mathcal{M},\]
where \(S_{X^{n+m},M}(N)^{c}\) denotes the complement of \(S_{X^{n+m},M}(N)\) in \(S_{G,M}(N)\).
**Claim 1**.: \(S_{X^{n},M}(N)\cap\mathrm{cl}_{\tau}(P)=\emptyset\)_._
Proof.: Take any \(p\in\mathrm{cl}_{\tau}(P)\). Then \(p=\lim_{i}g_{i}p_{i}\) for some nets \((g_{i})_{i}\) in \(G\) and \((p_{i})_{i}\) in \(P\) with \(\lim_{i}g_{i}=u\). So for sufficiently large \(i\) we have that \(g_{i}\in X^{m}\) and \(p_{i}\notin S_{X^{n+m},M}(N)\). Hence, \(p=\mathrm{tp}(ab/N)\) for some \(a\in\bar{X}^{m}\) and \(b\notin\bar{X}^{n+m}\). Therefore, \(p\notin S_{X^{n},M}(N)\), as required. \(\quad\)\(\circ\)(claim)
Let
\[V:=u\mathcal{M}\backslash\,\mathrm{cl}_{\tau}(P).\]
By Claim 1 and the above choices, \(q\in S_{X^{n},M}(N)\cap u\mathcal{M}\subseteq V\subseteq S_{X^{n+m},M}(N)\). In particular, \(V\) is a \(\tau\)-open neighborhood of \(q\).
**Claim 2**.: \(\mathrm{cl}_{\tau}(V)\subseteq S_{X^{2m+n},M}(N)\)__
Proof.: Consider any \(p\in\mathrm{cl}_{\tau}(V)\). Then \(p=\lim_{i}g_{i}p_{i}\) for some nets \((g_{i})_{i}\) in \(G\) and \((p_{i})_{i}\) in \(V\) with \(\lim_{i}g_{i}=u\). So \(p=\mathrm{tp}(ab/N)\) for some \(a\in\bar{X}^{m}\) and \(b\in\bar{X}^{n+m}\). So \(p\in S_{X^{2m+n},M}(N)\). \(\quad\)\(\circ\)(claim)
It remains to show that \(\mathrm{cl}_{\tau}(V)\) is quasi-compact in the \(\tau\)-topology. For that we need to show that any net \((p_{i})_{i\in I}\) in \(\mathrm{cl}_{\tau}(V)\) has a convergent subnet. By compactness of \(S_{X^{2m+n},M}(N)\), the net \((p_{i})_{i\in I}\) has a subnet \((q_{j})_{j\in J}\) convergent to some \(r\in S_{X^{2m+n},M}(N)\) in the usual topology on \(S_{X^{2m+n},M}(N)\). By Lemma 3.19, \(\tau\)-\(\lim_{j}q_{j}=ur\), so, by \(\tau\)-closedness of \(\mathrm{cl}_{\tau}(V)\), \(ur\in\mathrm{cl}_{\tau}(V)\).
**Proposition 3.22**.: \(u\mathcal{M}\) _equipped with the \(\tau\)-topology is a semitopological group, i.e. group operation is separately continuous._
Proof.: The argument from Fact A.36 of [14] works without any changes.
The proof of Fact A.37 of [14] applies to our context, so we get that all Ellis groups of \(S_{G,M}(N)\) (for varying minimal left ideals \(\mathcal{M}\) and idempotents \(u\in J(\mathcal{M})\)) are in fact topologically isomorphic. So the Ellis group of \(S_{G,M}(N)\) is a well-defined semitopological group associated with \(S_{G,M}(N)\).
**Definition 3.23**.: Define \(H(u\mathcal{M})\) as \(\bigcap\operatorname{cl}_{\tau}(V)\) with \(V\) ranging over all \(\tau\)-neighborhoods of \(u\).
**Proposition 3.24**.: \(H(u\mathcal{M})\) _is a \(\tau\)-closed normal subgroup of \(u\mathcal{M}\), and \(u\mathcal{M}/H(u\mathcal{M})\) is a locally compact (so Hausdorff) topological group._
Proof.: This is an elaboration on the proof of Fact A.40 (so, in fact, Fact A.12) of [14].
Exactly as in the proof of [14, Fact A.12], we get that \(H(u\mathcal{M})\) is a \(\tau\)-closed normal subsemigroup containing \(u\). Hence, for every \(h\in H(u\mathcal{M})\) both \(hH(u\mathcal{M})\) and \(H(u\mathcal{M})h\) are subsemigroups of \(H(u\mathcal{M})\).
**Claim 1**.: _For every \(h\in H(u\mathcal{M})\), both \(hH(u\mathcal{M})\) and \(H(u\mathcal{M})h\) contain an idempotent._
Proof.: Fix \(h\in H(u\mathcal{M})\) and consider \(hH(u\mathcal{M})\) (the case of \(H(u\mathcal{M})h\) is analogous). By the proof of Proposition 3.21, there is a \(\tau\)-neighborhood \(V\) of \(u\) in \(u\mathcal{M}\) such that \(\operatorname{cl}_{\tau}(V)\subseteq S_{X^{k},M}(N)\) for some \(k\). By the last paragraph of the proof of Proposition 3.21, \(\operatorname{cl}_{\tau}(V)\) is quasi-compact. Hence, \(H(u\mathcal{M})\) is \(\tau\)-closed, quasi-compact, and \(T_{1}\). Therefore, by Proposition 3.22 (which implies that multiplication on the left or on the right by a fixed element is a homeomorphism), \(hH(u\mathcal{M})\) is a \(\tau\)-closed, quasi-compact, \(T_{1}\), and the map \(hH(u\mathcal{M})\to hH(u\mathcal{M})\) given by \(s\mapsto ss_{0}\) is continuous and closed for every \(s_{0}\in hH(u\mathcal{M})\). Hence, \(hH(u\mathcal{M})\) contains an idempotent by Fact 2.5. \(\Box\)(claim)
Since the only idempotent in the group \(u\mathcal{M}\) is \(u\), we conclude from the above claim that \(H(u\mathcal{M})\) is a subgroup of \(u\mathcal{M}\). By Proposition 3.21, \(u\mathcal{M}\) is quasi locally compact, and so it _weakly quasi locally compact_ in the sense that every \(p\in u\mathcal{M}\) has a quasi-compact neighborhood. This property is easily seen to be preserved under taking group quotients of semitopological groups, so \(u\mathcal{M}/H(u\mathcal{M})\) is weakly quasi locally compact.
The last paragraph of the proof of [14, Fact A.12] applies to our context, so \(u\mathcal{M}/H(u\mathcal{M})\) is Hausdorff.
By the last two paragraphs, \(u\mathcal{M}/H(u\mathcal{M})\) is locally compact. On the other hand, since \(u\mathcal{M}\) is a semitopological group, so is \(u\mathcal{M}/H(u\mathcal{M})\). Therefore, by Ellis joint continuity theorem [13, Theorem 2], we get that \(u\mathcal{M}/H(u\mathcal{M})\) is jointly continuous and inversion is continuous. Thus, \(u\mathcal{M}/H(u\mathcal{M})\) is a locally compact topological group.
### The main theorem
Recall that we are in the situation and notation described at the end of Subsection 2.1. Let \(\mathcal{M}\) be a minimal left ideal of \(S_{G,M}(N)\) and \(u\) an idempotent in \(\mathcal{M}\). Let \(F\colon G\to u\mathcal{M}\) be given by \(F(g):=ugu\) and \(\hat{F}\colon S_{G,M}(N)\to u\mathcal{M}\) be the extension of \(F\) given by \(\hat{F}(p):=upu\). Let \(f\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be given by \(f(g):=ugu/H(u\mathcal{M})\) and \(\hat{f}\colon S_{G,M}(N)\to u\mathcal{M}/H(u\mathcal{M})\) be the extension of \(f\) given by \(\hat{f}(p):=upu/H(u\mathcal{M})\). In particular, \(f=\pi F\) where \(\pi\colon u\mathcal{M}\to u\mathcal{M}/H(u\mathcal{M})\) is the quotient map.
The following sets will play a key role.
\[F_{n} :=\{x_{1}y_{1}^{-1}\ldots x_{n}y_{n}^{-1}:x_{i},y_{i}\in\bar{G} \text{ and }x_{i}\equiv_{M}y_{i}\text{ for all }i\leq n\}\] \[\tilde{F}_{n} :=\{\operatorname{tp}(a/N)\in S_{G,M}(N):a\in F_{n}\}\] \[\tilde{F} :=((\tilde{F}_{\tau}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{ M}/H(u\mathcal{M})}\] \[C :=\operatorname{cl}_{\tau}(\tilde{F})\cup\operatorname{cl}_{\tau}( \tilde{F})^{-1}\]
Here is the main result, i.e. our version of Hrushovski's [13, Theorem 4.2].
**Theorem 3.25**.: _The above function \(f\) is a generalized definable locally compact model of \(X\) with the compact, normal, symmetric error set \(C\) defined above, which is witnessed by \(l=2\) (see Definition 3.1). Moreover, \(f^{-1}[C]\subseteq X^{30}\) and there is a compact neighborhood \(U\) of the neutral element in \(u\mathcal{M}/H(u\mathcal{M})\) such that \(f^{-1}[U]\subseteq X^{14}\) and \(f^{-1}[UC]\subseteq X^{34}\)._
The proof of Theorem 3.25 starts after the proof of Lemma 3.33 below.
**Lemma 3.26**.: \(F_{1}=\{xy^{-1}:x,y\in\bar{X}\text{ with }x\equiv_{M}y\}\)_. In particular, \(F_{n}\subseteq\bar{X}^{2n}\) is \(M\)-type-definable (that is, the set of realizations of a type over \(M\)), and so \(\tilde{F}_{n}\subseteq S_{X^{2n},M}(N)\) is closed._
Proof.: Only \((\subseteq)\) requires a proof. Take any \(a,b\in\bar{G}\) with \(a\equiv_{M}b\). Then \(a\in\bar{X}^{n}\) for some \(n\). Since \(X^{n}\subseteq XS\) for some finite \(S\subseteq G\), we have that \(\bar{X}^{n}\subseteq\bar{X}S\). So \(ac\in\bar{X}\) for some \(c\in S^{-1}\). As \(a\equiv_{M}b\) and \(c\in M\), also \(bc\in\bar{X}\) and \(ac\equiv_{M}bc\). So \(ab^{-1}=(ac)(bc)^{-1}\in\{xy^{-1}:x,y\in\bar{X}\text{ with }x\equiv_{M}y\}\). The rest easily follows.
**Lemma 3.27**.:
1. \(u\in\tilde{F}_{1}\subseteq S_{X^{2},M}(N)\)_._
2. _If_ \(p\in\tilde{F}_{n}\cap u\mathcal{M}\)_, then_ \(p^{-1}\in\tilde{F}_{n+1}\cap u\mathcal{M}\)_._
3. \(\tilde{F}^{-1}[S_{X^{n},M}(N)\cap u\mathcal{M}]\subseteq S_{X^{n+4},M}(N)\)_._
4. \(F^{-1}[S_{X^{n},M}(N)\cap u\mathcal{M}]\subseteq X^{n+4}\)_._
Proof.: (1) \(u^{2}=u\) implies that there are \(a\) and \(b\) realizing \(u\) such that \(ab\models u\). So \(ab\equiv_{M}b\), hence \(a=(ab)b^{-1}\in F_{1}\). Therefore, \(u\in\tilde{F}_{1}\) which is contained in \(S_{X^{2},M}(N)\) by Lemma 3.26.
(2) Since \(pp^{-1}=u\), there are \(a\models p\) and \(b\models q\) such that \(ab\models u\). By assumption, \(b\in F_{n}\), and, by (1), \(ab\in F_{1}\). Therefore, \(a\in F_{n+1}\).
(3) Take any \(p\in\tilde{F}^{-1}[S_{X^{n},M}(N)]\), i.e. \(upu\in S_{X^{n},M}(N)\). Then \(abc=d\in\bar{X}^{n}\) for some \(a\models u\), \(b\models p\), and \(c\models u\). So \(b=a^{-1}dc^{-1}\in\bar{X}^{2}\bar{X}^{n}\bar{X}^{2}=\bar{X}^{n+4}\) by (1). Hence, \(p\in S_{X^{n+4},M}(N)\).
(4) Take any \(g\in F^{-1}[S_{X^{n},M}(N)]\). By (3), \(\operatorname{tp}(g/N)\in S_{X^{n+4},M}(N)\), so \(g\in\bar{X}^{n+4}\). As \(g\in G\), we get \(g\in X^{n+4}\).
**Lemma 3.28**.: _There exists a \(\tau\)-open neighborhood \(V\) of \(u\) in \(u\mathcal{M}\) such that \(V\subseteq S_{X^{4},M}(N)\)._
Proof.: By Lemma 3.27(1), \(u\in\tilde{F}_{1}\subseteq S_{X^{2},M}(N)\). So the proof of Proposition 3.21 (in which we can take \(q:=u\) and \(n=m=2\)) yields a \(\tau\)-open neighborhood \(V\) of \(u\) which is contained in \(S_{X^{4},M}(N)\).
**Lemma 3.29**.:
1. \((\tilde{F}_{7}\cap u\mathcal{M})^{u\mathcal{M}}\subseteq\tilde{F}_{8}\cap u \mathcal{M}\subseteq S_{X^{16},M}(N)\cap u\mathcal{M}\)_._
2. \(\operatorname{cl}_{\tau}((\tilde{F}_{7}\cap u\mathcal{M})^{u\mathcal{M}}) \subseteq\tilde{F}_{9}\cap u\mathcal{M}\subseteq S_{X^{18},M}(N)\cap u \mathcal{M}\) _is quasi-compact._
3. \(\operatorname{cl}_{\tau}(\tilde{F})=\pi[\operatorname{cl}_{\tau}((F_{7}\cap u \mathcal{M})^{u\mathcal{M}})]\subseteq(\tilde{F}_{9}\cap u\mathcal{M})/H(u \mathcal{M})\) _is compact._
4. \(C\) _is compact, normal, symmetric, and contained in_ \((\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M})\)_._
Proof.: (1) Take \(p\in\tilde{F}_{7}\cap u\mathcal{M}\) and \(q\in u\mathcal{M}\). The goal is to show that \(qpq^{-1}\in\tilde{F}_{8}\) (the last inclusion follows from Lemma 3.26).
By the definition of \(*\), we can find \(\alpha\models q\), \(\beta\models q^{-1}\), and \(a_{1},b_{1},\ldots,a_{7},b_{7}\) with \(a_{i}\equiv_{M}b_{i}\) for all \(i\leqslant 7\) and \(\operatorname{tp}(\alpha/N,a_{\leqslant 7},b_{\leqslant 7},\beta)\) a coheir over \(M\) such that \(\alpha(\prod_{i\leqslant 7}a_{i}b_{i}^{-1})\beta\models qpq^{-1}\). We have \(\alpha(\prod_{i\leqslant 7}a_{i}b_{i}^{-1})\beta=(\prod_{i\leqslant 7}a_{i}^{ \alpha}(b_{i}^{\alpha})^{-1})\alpha\beta\). Now, by Lemma 3.27(1), \(\alpha\beta\models qq^{-1}=u\in\tilde{F}_{1}\), so \(\alpha\beta\in F_{1}\). On the other hand, since \(\operatorname{tp}(\alpha/M,a_{i},b_{i})\) is a coheir over \(M\) and \(a_{i}\equiv_{M}b_{i}\), by Remark 2.2, we get that \(a_{i}^{\alpha}=_{M}b_{i}^{\alpha}\). Therefore, \((\prod_{i\leqslant 7}a_{i}^{\alpha}(b_{i}^{\alpha})^{-1})\alpha\beta\in F_{8}\), so \(qpq^{-1}\in\tilde{F}_{8}\).
(2) By Lemma 3.26, the sets \(F_{1}\) and \(F_{8}\) are \(M\)-type-definable. By Lemma 3.27(1), \(u\in\tilde{F}_{1}\), and, by (1), \((\tilde{F}_{7}\cap u\mathcal{M})^{u\mathcal{M}}\subseteq\tilde{F}_{8}\). Thus, using the definition of \(\operatorname{cl}_{\tau}\) as in the proof of Claim 2 in the proof of Proposition 3.21, we get that \(\operatorname{cl}_{\tau}((\tilde{F}_{7}\cap u\mathcal{M})^{u\mathcal{M}})\subseteq \tilde{F}_{9}\) which is contained in \(S_{X^{18},M}(N)\) by Lemma 3.26. Then quasi-compactness follows from the argument in the last paragraph of the proof of Proposition 3.21.
(3) follows from (2) and Hausdorff of \(u\mathcal{M}/H(u\mathcal{M})\).
(4) Compactness follows from (3) and the fact that \(u\mathcal{M}/H(u\mathcal{M})\) is a topological group. Normality is immediate, also using that \(u\mathcal{M}/H(u\mathcal{M})\) is a topological group. The inclusion \(C\subseteq(\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M})\) follows from (3) and Lemma 3.27(2).
**Lemma 3.30**.: \(H(u\mathcal{M})\subseteq\tilde{F}_{3}\cap u\mathcal{M}\subseteq S_{X^{6},M}(N) \cap u\mathcal{M}\)_._
Proof.: The second inclusion is by Lemma 3.26. The first one is essentially contained in the proof of Theorem 0.1(2) of [17], but we repeat it here for the reader's convenience.
By Lemma 3.27(1), \(u\in\tilde{F}_{1}\). By Lemma 3.26, \(F_{2}\) is \(M\)-type-definable. So let \(\rho\) be the partial type over \(M\) defining \(F_{2}\) and closed under conjunction. Consider any \(\varphi(x)\in\rho\). Let
\[V:=[\neg\varphi(x)]\cap u\mathcal{M},\]
where \([\neg\varphi(x)]\) is the clopen subset of \(S_{G,M}(N)\) consisting of all types containing \(\neg\varphi(x)\).
**Claim 1**.:
1. \(u\notin\operatorname{cl}_{\tau}(V)\)_._
2. \(\operatorname{cl}_{\tau}(u\mathcal{M}\backslash\operatorname{cl}_{\tau}(V)) \subseteq\operatorname{cl}_{\tau}(u\mathcal{M}\backslash V)\subseteq\widetilde {F}_{3}^{\varphi}\)_, where_ \(\widetilde{F}_{3}^{\varphi}:=\{\operatorname{tp}(ab/N)\in S_{G,M}(N):a\in F_{ 1}\text{ and }b\models\varphi(x)\}\)_._
Proof.: (1) Suppose for a contradiction that \(u\in\operatorname{cl}_{\tau}(V)\). Using the definition of \(\operatorname{cl}_{\tau}\), we get that there are \(a\models u\) and \(b\models\neg\varphi(x)\) such that \(ab\models u\). Then \(a\in F_{1}\) and \(ab\in F_{1}\), so \(b\in F_{2}\), and hence \(b\models\varphi(x)\), a contradiction.
(2) We need to check that \(\operatorname{cl}_{\tau}(u\mathcal{M}\backslash V)\subseteq\widetilde{F}_{3}^ {\varphi}\). Consider any \(p\in\operatorname{cl}_{\tau}(u\mathcal{M}\backslash V)\). As before, there are \(a\models u\) and \(b\models\varphi(x)\) such that \(ab\models p\). Then \(a\in F_{1}\), so \(\operatorname{tp}(ab/N)\in\widetilde{F}_{3}^{\varphi}\). (claim)
Notice that \(\bigcap_{\varphi(x)\in\rho}\widetilde{F}_{3}^{\varphi}=\widetilde{F}_{3}\). So, by the claim,
\[H(u\mathcal{M})=\bigcap\{\operatorname{cl}_{\tau}(U):U\text{ $\tau$-neighborhood of $u$}\}\subseteq\bigcap_{\varphi(x)\in\rho} \widetilde{F}_{3}^{\varphi}\cap u\mathcal{M}=\widetilde{F}_{3}\cap u \mathcal{M},\]
which completes the proof.
**Lemma 3.31**.:
1. _Every compact_ \(K\subseteq u\mathcal{M}/H(u\mathcal{M})\) _is contained in_ \((S_{X^{n},M}(N)\cap u\mathcal{M})/H(u\mathcal{M})\) _for some_ \(n\in\mathbb{N}\) _and_ \(\pi^{-1}[K]\) _is quasi-compact._
2. _Every quasi-compact_ \(K\subseteq u\mathcal{M}\) _is contained in_ \(S_{X^{n},M}(N)\) _for some_ \(n\in\mathbb{N}\)_._
Proof.: (1) Take \(V\) from Lemma 3.28. Then \(U:=\pi[V]\subseteq(S_{X^{4},M}(N)\cap u\mathcal{M})/H(u\mathcal{M})\) is open in \(u\mathcal{M}/H(u\mathcal{M})\), so \(K\) is covered by finitely many translates of \(U\). Since all the translating elements are in some \((S_{X^{m},M}(N)\cap u\mathcal{M})/H(u\mathcal{M})\), we get that \(K\subseteq(S_{X^{m+4},M}(N)\cap u\mathcal{M})/H(u\mathcal{M})\). So the \(\tau\)-closed subset \(\pi^{-1}[K]\) of \(u\mathcal{M}\) is contained in \(S_{X^{m+4},M}(N)H(u\mathcal{M})\) which in turn is contained in \(S_{X^{m+10},M}(N)\) by Lemma 3.30. So the final paragraph of the proof of Proposition 3.21 shows that \(\pi^{-1}[K]\) is quasi-compact.
(2) follows from (1) and Lemma 3.30, as \(\pi[K]\) is compact.
The next two lemmas will be needed only in the proof of definability of \(f\).
**Lemma 3.32**.: _If \(p=\operatorname{tp}(a/N)\) and \(q=\operatorname{tp}(b/N)\) belong to \(u\mathcal{M}\) and \(a\in F_{n}b\), then \(p\in(\tilde{F}_{n+2}\cap u\mathcal{M})q\)._
Proof.: As \(qq^{-1}=u\), we can find \(b^{\prime}\models q^{-1}\) such that \(bb^{\prime}\models u\). So \(b^{\prime}=b^{-1}\alpha\) for some \(\alpha\models u\). Then \(pq^{-1}=\operatorname{tp}(a^{\prime\prime}b^{-1}\alpha/N)\) for some \(a^{\prime\prime}\equiv_{N}a\). As \(a\in F_{n}b\), we have \(a^{\prime\prime}\in F_{n}b^{\prime\prime}\) for some \(b^{\prime\prime}\equiv_{N}b\), i.e. \(a^{\prime\prime}=cb^{\prime\prime}\) for some \(c\in F_{n}\). Thus, \(pq^{-1}=\operatorname{tp}(cb^{\prime\prime}b^{-1}\alpha/N)\). Since by Lemma 3.27(1) we know that \(\alpha\in F_{1}\), we conclude that \(cb^{\prime\prime}b^{-1}\alpha\in F_{n+2}\), so \(p\in(\tilde{F}_{n+2}\cap u\mathcal{M})q\).
**Lemma 3.33**.: \(\hat{F}[\widetilde{F^{-1}[K]}]\subseteq(\tilde{F}_{7}\cap u\mathcal{M})K\) _for every \(\tau\)-closed, quasi-compact subset \(K\) of \(u\mathcal{M}\), where \(\widetilde{F^{-1}[K]}\) denotes the closure of \(\hat{F}^{-1}[K]\) in \(S_{G,M}(N)\)._
Proof.: Consider any \(p\in\overline{\hat{F}^{-1}[K]}\) and \(q=\hat{F}(p)=upu\). Then \(q=\operatorname{tp}(\alpha a\beta/N)\) for some \(\alpha\models u\), \(a\models p\), \(\beta\models u\). Also, \(p=\lim p_{j}^{\prime}\) for some net \((p_{j}^{\prime})_{j}\) from \(\hat{F}^{-1}[K]\). Take a net \((g_{k}^{\prime})_{k}\) in \(G\) such that \(\lim g_{k}^{\prime}=u\).
By Lemma 3.31(2), \(K\subseteq S_{X^{n},M}(N)\) for some \(n\), so Lemma 3.27(3) implies that \(\hat{F}^{-1}[K]\subseteq S_{X^{n+4},M}(N)\). Moreover, since \(u\in S_{X^{2},M}(N)\), taking an end segment of \((g_{k}^{\prime})_{k}\), we can assume that \(g_{k}^{\prime}\in X^{2}\) for all \(k\). Then all \(g_{k}^{\prime}up_{j}^{\prime}\) belong to \(S_{X^{n+8},M}(N)\). By compactness of \(S_{X^{n+8},M}(N)\), we can find subnets \((g_{i})_{i\in I}\) of \((g_{k}^{\prime})_{k}\) and \((p_{i})_{i\in I}\) of \((p_{j}^{\prime})_{j}\) such that \(\lim_{i}g_{i}up_{i}\) exists.
Clearly, \(\lim g_{i}=u\), \(\lim p_{i}=p\), \(up_{i}u\in u\hat{F}^{-1}[K]u=\hat{F}[\hat{F}^{-1}[K]]=K\), and \(r:=\lim g_{i}up_{i}u=(\lim g_{i}up_{i})u\) exists. Hence, by Definition 3.16, we get \(r\in u\circ K\), so \(ur\in u(u\circ K)=\operatorname{cl}_{\tau}(K)=K\), as \(K\) is \(\tau\)-closed.
Since \(ur=u(\lim g_{i}up_{i})u\), by compactness (or rather \(|N|^{+}\)-saturation of \(\mathfrak{C}\)), we get \(ur=\operatorname{tp}(\gamma\delta eb\beta/N)\) for some \(\gamma,\delta,\epsilon\) realizing \(u\) and \(b\models p\) (note that \(\beta\) can be chosen the same as at the beginning of the proof.)
Put \(x:=\alpha a\beta\) and \(y:=\gamma\delta eb\beta\). Then \(x=\alpha ab^{-1}\epsilon^{-1}\delta^{-1}\gamma^{-1}y\in F_{5}y\), because \(a\equiv_{M}b\) and \(\alpha,\epsilon,\delta,\gamma\in F_{1}\) by Lemma 3.27(1). Therefore, using Lemma 3.32, we get
\[q=\operatorname{tp}(x/N)\in(\tilde{F}_{7}\cap u\mathcal{M})\operatorname{tp} (y/N)=(\tilde{F}_{7}\cap u\mathcal{M})ur.\]
As we observed above that \(ur\in K\), we get \(q\in(\tilde{F}_{7}\cap u\mathcal{M})K\).
Proof of Theorem 3.25.: By Lemma 3.29(4), we already know that \(C\) is compact, normal, and symmetric. Let us divide the proof into numbered parts.
(1) \(C\) _is an error set of \(f\)._
By normality of \(C\), it is enough to show that \(\operatorname{error}_{r}(f)\subseteq C\). We will show more, namely that \(\operatorname{error}_{r}(f)\subseteq(\tilde{F}_{3}\cap u\mathcal{M})/H(u \mathcal{M})\). For that take any \(g,h\in G\) and we need to show that \(F(h)^{-1}F(g)^{-1}F(gh)\in\tilde{F}_{3}\cap u\mathcal{M}\). The left hand side equals \((uhu)^{-1}(ugu)^{-1}ughu=(uhu)^{-1}(ugu)^{-1}ghu\).
**Claim 1**.: \((ugu)^{-1}=\operatorname{tp}(xy^{-1}g^{-1}/N)\) _for some \(x\equiv_{N}y\)._
Proof.: Let \(\alpha\models u\). Then \(g\alpha\models gu\). Let \(a\models(ugu)^{-1}\) be such that \(\operatorname{tp}(a/N,\alpha)\) is a coheir over \(M\). Then \(u=(ugu)^{-1}ugu=(ugu)^{-1}gu=\operatorname{tp}(ag\alpha/N)\). Put \(x:=ag\alpha\) and \(y:=\alpha\). Then \(x\equiv_{M}y\) (as each of these elements realizes \(u\)) and \(a=xy^{-1}g^{-1}\). \(\Box\)(claim)
By this claim, we conclude that \(F(h)^{-1}F(g)^{-1}F(gh)=\operatorname{tp}(zt^{-1}h^{-1}xy^{-1}g^{-1}gh\alpha/N)\) for some \(z\equiv_{N}t\), \(x\equiv_{N}y\), and \(\alpha\models u\). But \(zt^{-1}h^{-1}xy^{-1}g^{-1}gh\alpha=zt^{-1}x^{h^{-1}}(y^{h^{-1}})^{-1}\alpha\in F _{3}\), because \(z\equiv_{M}t\), \(x^{h^{-1}}\equiv_{M}y^{h^{-1}}\) (as \(x\equiv_{M}y\) and \(h\in M\)), and \(\alpha\in F_{1}\) (by Lemma 3.27(1)). Therefore, \(F(h)^{-1}F(g)^{-1}F(gh)\in\tilde{F}_{3}\cap u\mathcal{M}\).
(2) _There is a \(\tau\)-open neighborhood \(V\) of \(u\) in \(u\mathcal{M}\) such that \(V\subseteq S_{X^{4},M}(N)\). For any such \(V\), \(U:=\pi[V]\) is an open neighborhood of the neutral element in \(u\mathcal{M}/H(u\mathcal{M})\) and \(f^{-1}[U]\subseteq X^{14}\). Thus, \(f^{-1}[U]\subseteq X^{14}\) also holds for \(U\) replaced by any (in particular by a compact) neighborhood of the neutral element in \(u\mathcal{M}/H(u\mathcal{M})\) contained in \(U\)._
The existence of \(V\) is by Lemma 3.28. Then \(U:=\pi[V]\) is an open neighborhood of \(u/H(u\mathcal{M})\) in \(u\mathcal{M}/H(u\mathcal{M})\). By Lemma 3.30, \(H(u\mathcal{M})\subseteq S_{X^{6},M}(N)\). So \(\pi^{-1}[U]=VH(u\mathcal{M})\subseteq S_{X^{4},M}(N)S_{X^{6},M}(N)\subseteq S_{X ^{10},M}(N)\). Hence, \(f^{-1}[U]=F^{-1}[\pi^{-1}[U]]\subseteq F^{-1}[S_{X^{10},M}(N)]\), and the last preimage is contained in \(X^{14}\) by Lemma 3.27(4).
(3) _For every compact \(K\subseteq u\mathcal{M}/H(u\mathcal{M})\) there is \(k\in\mathbb{N}\) with \(f^{-1}[K]\subseteq X^{k}\)._
Consider any compact \(K\subseteq u\mathcal{M}/H(u\mathcal{M})\). By Lemma 3.31(1), \(K\subseteq(S_{X^{n},M}(N)\cap u\mathcal{M})/H(u\mathcal{M})\) for some \(n\). So
\[f^{-1}[K]=F^{-1}[\pi^{-1}[K]]\subseteq F^{-1}[S_{X^{n},M}(N)H(u\mathcal{M})] \subseteq F^{-1}[S_{X^{n+6},M}(N)]\subseteq X^{n+10},\]
where the second inclusion follows from Lemma 3.30 and the last one by Lemma 3.27(4).
(4) \(f[X^{i}]\) _is relatively compact for every \(i\in\mathbb{N}\)._
By Remark 3.2, it is enough to show it for \(i=1\). We have
\[f[X]=\pi[F[X]]\subseteq\pi[u(S_{X,M}(N)\cap u\mathcal{M})u]\subseteq\pi[S_{X^{5}, M}(N)\cap u\mathcal{M}]\subseteq\pi[\operatorname{cl}_{\tau}(S_{X^{5},M}(N)\cap u \mathcal{M})],\]
where the second inclusion follows from Lemma 3.27(1). By this lemma and the proof of Claim 2 in the proof of Proposition 3.21, we have \(\operatorname{cl}_{\tau}(S_{X^{5},M}(N)\cap u\mathcal{M})\subseteq S_{X^{7},M }(N)\). So the argument in the last paragraph of the proof of Proposition 3.21 shows that \(\operatorname{cl}_{\tau}(S_{X^{5},M}(N)\cap u\mathcal{M})\) is quasi-compact. Thus, \(\pi[\operatorname{cl}_{\tau}(S_{X^{5},M}(N)\cap u\mathcal{M})]\) is compact, and so is the closure of \(f[X]\).
(5) \(f^{-1}[C]\subseteq X^{30}\).
By Lemma 3.29(4), \(C\subseteq(\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M})\). Hence,
\[f^{-1}[C]=F^{-1}[\pi^{-1}[C]]\subseteq F^{-1}[(\tilde{F}_{10} \cap u\mathcal{M})H(u\mathcal{M})]\subseteq F^{-1}[\tilde{F}_{13}\cap u \mathcal{M}]\subseteq\] \[F^{-1}[S_{X^{26},M}(N)\cap u\mathcal{M}]\subseteq X^{30},\]
where the second inclusion follows from Lemma 3.30, the third one from Lemma 3.26, and the last one from Lemma 3.27(4).
(6) _For \(U\) from (2) we have \(f^{-1}[UC]\subseteq X^{34}\). Thus, the same holds for \(U\) replaced by any (in particular by a compact) neighborhood of the neutral element in \(u\mathcal{M}/H(u\mathcal{M})\) contained in \(U\)._
We have
\[f^{-1}[UC]=F^{-1}[\pi^{-1}[UC]]\subseteq F^{-1}[S_{X^{4},M}(N)S_{X^{20},M}(N)S _{X^{6},M}(N)]=F^{-1}[S_{X^{30},M}(N)]\subseteq X^{34},\]
where the first inclusion follows from the choice of \(U\) and Lemmas 3.29(4) and 3.30, and the last one from Lemma 3.27(4).
(7) _For any compact \(Z,Y\subseteq u\mathcal{M}/H(u\mathcal{M})\) with \(C^{2}Y\cap C^{2}Z=\emptyset\) the preimages \(f^{-1}[Y]\) and \(f^{-1}[Z]\) can be separated by a definable set._
By Lemma 3.31(1), \(\pi^{-1}[Y]\) and \(\pi^{-1}[Z]\) are quasi-compact. On the other hand, since \((\tilde{F}_{7}\cap u\mathcal{M})/H(u\mathcal{M})\subseteq C\), we have that
\[(\tilde{F}_{7}\cap u\mathcal{M})(\tilde{F}_{7}\cap u\mathcal{M})\pi^{-1}[Y] \cap(\tilde{F}_{7}\cap u\mathcal{M})(\tilde{F}_{7}\cap u\mathcal{M})\pi^{-1}[ Z]=\emptyset.\]
So the following claim will complete the proof of (7) and of the whole theorem.
**Claim 2**.: _For any quasi-compact \(Z,Y\subseteq u\mathcal{M}\) such that the sets \((\tilde{F}_{7}\cap u\mathcal{M})Y\) and \((\tilde{F}_{5}\cap u\mathcal{M})(\tilde{F}_{7}\cap u\mathcal{M})Z\) are disjoint the preimages \(F^{-1}[Y]\) and \(F^{-1}[Z]\) can be separated by a definable set._
Proof.: Let \(\rho\colon S_{G,M}(N)\to S_{G}(M)\) be the restriction map. By the definition of the topologies on type spaces, \(\rho\) is a continuous map. We claim that it is enough to show
\[(*)\quad\quad\rho[\overline{\hat{F}^{-1}[Y]}]\cap\rho[\overline{\hat{F}^{-1}[ Z]}]=\emptyset,\]
To see that \((*)\) is enough, note that by Lemma 3.31(2) and 3.27(3) both \(\overline{\hat{F}^{-1}[Y]}\) and \(\overline{\hat{F}^{-1}[Z]}\) are contained in some \(S_{X^{n},M}(N)\). Hence, by \((*)\) and compactness of \(S_{X^{n},M}(N)\), \(\rho[\overline{\hat{F}^{-1}[Y]}]\) and \(\rho[\overline{\hat{F}^{-1}[Z]}]\) are disjoint closed subsets of \(S_{X^{n}}(M)\), and so they can be separated by a basic open set \([\varphi(x)]\) for some formula in \(L_{M}\). Then the definable set \(\varphi(M)\) separates \(F^{-1}[Y]\) and \(F^{-1}[Z]\).
Let us prove \((*)\). Suppose it fails, i.e. there are \(p\in\overline{\hat{F}^{-1}[Y]}\) and \(q\in\overline{\hat{F}^{-1}[Z]}\) such that \(\rho(p)=\rho(q)\). So, taking \(\alpha\models p\) and \(\beta\models q\), we have \(\alpha\equiv_{M}\beta\). Next, \(\hat{F}(p)=upu=\operatorname{tp}(\gamma_{1}\alpha\gamma_{2}/N)\) and \(\hat{F}(q)=uqu=\operatorname{tp}(\gamma_{1}\beta\gamma_{2}/N)\) for some \(\gamma_{1},\gamma_{2}\models u\) (note that we can choose the same \(\gamma_{1},\gamma_{2}\) in both formulas: first we chose \(\gamma_{2}\models u\) such that \(\operatorname{tp}(\alpha,\beta/N,\gamma_{2})\) is a coheir over \(M\), and then \(\gamma_{1}\models u\) such that \(\operatorname{tp}(\gamma_{1}/N,\alpha,\beta,\gamma_{2})\) is a coheir over \(M\)). Put \(x:=\gamma_{1}\alpha\gamma_{2}\) and \(y:=\gamma_{1}\beta\gamma_{2}\). Using Lemma 3.27(1), we conclude that \(xy^{-1}=\gamma_{1}\alpha\beta^{-1}\gamma_{1}^{-1}\in F_{3}\), so \(x\in F_{3}y\). By Lemma 3.32, this implies that \(\hat{F}(p)=\operatorname{tp}(x/N)\in(\tilde{F}_{5}\cap u\mathcal{M}) \operatorname{tp}(y/N)=(\tilde{F}_{5}\cap u\mathcal{M})\tilde{F}(q)\). On the other hand, by Lemma 3.33, we have \(\tilde{F}(p)\in(\tilde{F}_{7}\cap u\mathcal{M})Y\) and \(\tilde{F}(q)\in(\tilde{F}_{7}\cap u\mathcal{M})Z\). Thus, we conclude that
\(\hat{F}(p)\) is in the intersection of \((\tilde{F}_{7}\cap u\mathcal{M})Y\) and \((\tilde{F}_{5}\cap u\mathcal{M})(\tilde{F}_{7}\cap u\mathcal{M})Z\), which contradicts the assumption of the claim. \(\Box\)
### Around the main theorem
In this subsection, we discuss some improvements or variants of Theorem 3.25.
**Concrete numbers in the statement of Theorem 3.25**.
In [10, Theorem 4.2], Hrushovski produced a generalized definable locally compact model \(f\) of \(X\) with an error set \(C\) such that \(f^{-1}[C]\subseteq X^{12}\), while in our theorem \(f^{-1}[C]\subseteq X^{30}\).
The proof of part (1) inside the proof of Theorem 3.25 shows that \(\operatorname{error}_{r}(f)\subseteq(\tilde{F}_{3}\cap u\mathcal{M})/H(u \mathcal{M})\). Analogously, one can show that \(\operatorname{error}_{l}(f)\subseteq(\tilde{F}_{3}\cap u\mathcal{M})/H(u \mathcal{M})\). Therefore, if we dropped the definability requirement from the definition of generalized definable locally compact model (i.e. item (3) of Definition 3.1), then we could decrease our error set \(C\) by taking \(\tilde{F}:=((\tilde{F}_{3}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{M}/ H(u\mathcal{M})}\) in place of \(\tilde{F}:=((\tilde{F}_{7}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{M}/ H(u\mathcal{M})}\), and setting \(C:=\operatorname{cl}_{\tau}(\tilde{F})\cup\operatorname{cl}_{\tau}(\tilde{F} )^{-1}\) as before. After this modification, our proofs yield \(f^{-1}[C]\subseteq X^{22}\) and \(f^{-1}[UC]\subseteq X^{26}\). A question is whether after this modification item (3) of Definition 3.1 still holds for some \(l\) (maybe greater than \(2\)). By the proof of part (7) in the proof of Theorem 3.25, it would hold with \(l=4\) if the answer to the second question below was positive.
**Question 3.34**.:
1. _Does_ \(\tilde{F}_{n}*\tilde{F}_{m}=\tilde{F}_{n+m}\)_?_
2. _Does_ \((\tilde{F}_{n}\cap u\mathcal{M})*(\tilde{F}_{m}\cap u\mathcal{M})=\tilde{F}_{ n+m}\cap u\mathcal{M}\)_?_
And a final question is whether we could use yet smaller \(C\) obtained by replacing \(\tilde{F}:=((\tilde{F}_{7}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{M}/ H(u\mathcal{M})}\) by \(\tilde{F}:=((\tilde{F}_{1}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{M}/ H(u\mathcal{M})}\). For this \(C\) our proof would give us \(f^{-1}[C]\subseteq X^{18}\).
**Definability over \(X\)**
In [10, Theorem 4.2], Hrushovski obtains separation by two sets definable over \(X\), while we got separation by a set definable over \(M\). However, assuming that our approximate subgroup \(X\) is \(\emptyset\)-definable (in particular, the group operation is piecewise \(\emptyset\)-definable), it is not difficult to modify our error set \(C\) to get separation by a set definable over \(X\), and then we get separation by subsets of some \(X^{n}\) which are definable over \(X\) (see Remark 3.3). We explain the necessary modification of \(C\) below.
Notice that, by \(\emptyset\)-definability of the approximate subgroup \(X\), definability over \(X\) is equivalent to definability over \(G:=\langle X\rangle\). Let us modify the definition of \(C\) by replacing \(F_{n}:=\{x_{1}y_{1}^{-1}\dots x_{n}y_{n}^{-1}:x_{i},y_{i}\in\bar{G}\text{ and }x_{i}\equiv_{M}y_{i}\text{ for all }i\leq n\}\) by \(F_{n}^{\prime}:=\{x_{1}y_{1}^{-1}\dots x_{n}y_{n}^{-1}:x_{i},y_{i}\in\bar{G} \text{ and }x_{i}\equiv_{G}y_{i}\text{ for all }i\leq n\}\). Then \(\tilde{F}_{n}^{\prime}\), \(\tilde{F}^{\prime}\), and \(C^{\prime}\) are defined using \(F_{n}^{\prime}\) in the same way as the corresponding objects without primes are defined at the beginning of Subsection 3.2.
We claim that Theorem 3.25 holds with \(C\) replaced by \(C^{\prime}\) with the stronger conclusion that for any compact \(Z,Y\subseteq u\mathcal{M}/H(u\mathcal{M})\) with \(C^{2}Y\cap C^{2}Z=\emptyset\) the preimages \(f^{-1}[Y]\) and \(f^{-1}[Z]\) can be separated by an \(X\)-definable set.
Since the sets with primes are supersets of the corresponding sets without primes, it is easy to see that the only things to check (in order to apply the whole argument from Subsection 3.2) are the following:
1. \(F_{1}^{\prime}=\{xy^{-1}:x,y\in\bar{X}\text{ with }x\equiv_{G}y\}\subseteq\bar{X}^{2}\);
2. \((\tilde{F}_{7}^{\prime}\cap u\mathcal{M})^{u\mathcal{M}}\subseteq\tilde{F}_{8} ^{\prime}\cap u\mathcal{M}\);
3. \((*)\) from the proof of part (7) in the proof of Theorem 3.25 but for \(\rho\colon S_{G,M}(N)\to S_{G}(M)\) replaced by \(\rho^{\prime}\colon S_{G,M}(N)\to S_{G}(G)\) being the restriction map.
Proof.: (1) The proof of Lemma 3.26 adapts, because the set \(S\) from that proof is contained in \(G\), and so \(c\in G\).
(2) Since all types in \(S_{G,M}(N)\) are finitely satisfiable in \(G\), the proof of Lemma 3.29 adapts choosing \(\alpha\models q\) so that \(\operatorname{tp}(\alpha/N,a_{\leqslant 7},b_{\leqslant 7},\beta)\) is a coheir over \(G\).
(3) Replacing \(\rho\) by \(\rho^{\prime}\), the proof of \((*)\) works as before (with \(\alpha\equiv_{G}\beta\) in place of \(\alpha\equiv_{M}\beta\)).
**An error set for \(\boldsymbol{\hat{f}}\)**
Elaborating on the proof of part (1) in the proof of Theorem 3.25, we obtain the following, where \(S_{G,M}(N)\) is equipped with its semigroup structure.
**Proposition 3.35**.: _The function \(\hat{f}\colon S_{G,M}(N)\to u\mathcal{M}/H(u\mathcal{M})\) (given by \(\hat{f}(p):=upu/H(u\mathcal{M})\)) is a quasi-homomorphism with \(\operatorname{error}_{r}(\hat{f})\cup\operatorname{error}_{l}(\hat{f}) \subseteq(\widetilde{F}_{5}\circ u\mathcal{M})/H(u\mathcal{M})\), and so_
\[\hat{C}:=\operatorname{cl}_{7}\left(((\widetilde{F}_{5}\cap u\mathcal{M})/H( u\mathcal{M}))^{u\mathcal{M}/H(u\mathcal{M})}\right)\cup\operatorname{cl}_{7} \left(((\widetilde{F}_{5}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{M}/H( u\mathcal{M})}\right)^{-1}\]
_is a compact, normal, symmetric error set of \(\hat{f}\)._
Proof.: We will only explain how to prove that \(\operatorname{error}_{r}(\hat{f})\subseteq(\widetilde{F}_{5}\cap u\mathcal{M} )/H(u\mathcal{M})\). The proof that \(\operatorname{error}_{l}(\hat{f})\subseteq(\widetilde{F}_{5}\cap u\mathcal{M} )/H(u\mathcal{M})\) is similar. The rest follows as at the beginning of the proof of Theorem 3.25, using an obvious variant of Lemma 3.29(4).
We need to show that \((uqu)^{-1}(upu)^{-1}pqu\subseteq\tilde{F}_{5}\) for all \(p,q\in S_{G,M}(N)\). We have \(pqu=\operatorname{tp}(g^{\prime}h^{\prime}\alpha/N)\) for some \(g^{\prime}\models p\), \(h^{\prime}\models q\), and \(\alpha\models u\). An obvious extension of Claim 1 in the proof of Theorem 3.25 yields
\[(upu)^{-1}=\operatorname{tp}(xy^{-1}g^{-1}/N)\text{ for some }x\equiv_{N}y \text{ and }g\models p,\] \[(uqu)^{-1}=\operatorname{tp}(zt^{-1}h^{-1}/N)\text{ for some }z\equiv_{N}t \text{ and }h\models q.\]
Looking at the proof of the aforementioned Claim 1, we can choose all the above data so that \(\operatorname{tp}(t/N,x,y,g,g^{\prime},h^{\prime},\alpha)\), \(\operatorname{tp}(h/N,t,x,y,g,g^{\prime},h^{\prime},\alpha)\), \(\operatorname{tp}(zt^{-1}h^{-1}/N,x,y,g,g^{\prime},h^{\prime},\alpha)\), and \(\operatorname{tp}(xy^{-1}g^{-1}/N,g^{\prime},h^{\prime},\alpha)\) are all coheirs over \(M\). Then,
\[(uqu)^{-1}(upu)^{-1}pqu=\operatorname{tp}(zt^{-1}h^{-1}xy^{-1}g^{-1}g^{ \prime}h^{\prime}\alpha/N)=\] \[\operatorname{tp}(zt^{-1}x^{h^{-1}}(y^{h^{-1}})^{-1}(g^{h^{-1}}) ^{-1}g^{\prime h^{-1}}h^{-1}h^{\prime}\alpha/N)\in\tilde{F}_{5},\]
because \(z\equiv_{M}t\), \(x^{h^{-1}}\equiv_{M}y^{h^{-1}}\) (as \(x\equiv_{M}y\) and \(\operatorname{tp}(h/M,x,y)\) is a coheir over \(M\)), \(g^{h^{-1}}\equiv_{M}g^{\prime h^{-1}}\) (as \(g\equiv_{M}g^{\prime}\) and \(\operatorname{tp}(h/M,g,g^{\prime})\) is a coheir over \(M\)), \(h\equiv_{M}h^{\prime}\), and \(\alpha\in F_{1}\).
**Expressing the proof in terms of Boolean algebras**
Suppose \(X\) is an abstract (rather than definable) approximate subgroup \(X\) and one is interested in finding a generalized locally compact model of \(X\). Taking \(M:=G=\langle X\rangle\) equipped with the full structure, \(S_{G,M}(N)\) becomes the subspace of \(\beta G\) (the space of ultrafilters on the Boolean algebra of all subsets of \(G\)) which consists of the ultrafilters concentrated on some \(X^{n}\) (for varying \(n\)). So no model theory is involved in those objects. In this situation, one should be able to completely eliminate model theory from our construction of the generalized locally compact model by not using realizations of types, but we find it unnatural and more technical, so we will not do that.
After the first author's conference talk on Theorem 3.25, Sergei Starchenko suggested that it could be interesting to modify our construction of the generalized definable locally compact model of a definable approximate subgroup \(X\) by replacing the Boolean algebra generated
by externally definable subsets of \(G\) by a smaller (or even smallest possible) Boolean algebra which does not refer to model theory; then ultrafilters on this algebra would be used in place of complete external types. One should be able to realize this suggestion by using Newelski's work on \(d\)-closed \(G\)-algebras [13]. Namely, Newelski showed that whenever \(\mathcal{A}\) is a \(d\)-closed \(G\)-algebra of subsets of \(G\), then there is an explicitly given semigroup operation \(*\) on the Stone space \(S(\mathcal{A})\) which extends the action of \(G\) and is left continuous. Now, in our situation of an approximate subgroup \(X\) and \(G:=\langle X\rangle\), take \(\mathcal{A}\) to be the \(d\)-closure (in the sense of [13]) of the \(G\)-algebra generated by all left translates of the sets \(X,X^{2},X^{3},\dots\). Let \(S_{G}(\mathcal{A})\) be the subflow of \(S(\mathcal{A})\) which consists of the ultrafilters containing one of the \(X^{n}\)'s (for varying \(n\)). Then the above semigroup operation \(*\) on \(S(\mathcal{A})\) restricts to a left continuous semigroup operation on \(S_{G}(\mathcal{A})\), and \(S_{G}(\mathcal{A})\) is locally compact. One should be able to adapt the theory developed in this paper for \(S_{G,M}(N)=S_{G,\operatorname{ext}}(M)\) to \(S_{G}(\mathcal{A})\); in particular, to state and prove a suitable variant of Theorem 3.25 yielding a generalized locally compact model of \(X\) which satisfies a version of definability (i.e. item (3) of Definition 3.1) in which separation by a definable set is replaced by separation by a set from the \(G\)-algebra \(\mathcal{A}\) (or even from the Boolean algebra of subsets of \(G\) generated by left and right translates of \(X,X^{2},\dots\)).
## 4. Universality
We will prove that the generalized definable locally compact model from Theorem 3.25 is an initial object in a certain category. In particular, this will explain what it means to be a generalized definable locally compact model in terms of factorization through \(u\mathcal{M}/H(u\mathcal{M})\) (with the notation from Section 3).
As in Section 3, take the situation and notation as at the end of Subsection 2.1. We introduce the notion of good quasi-homomorphism which will be used to define morphisms in our category.
**Definition 4.1**.: Let \(H\) be a locally compact group and \(S\) a compact, normal, symmetric subset of \(H\). A _good quasi-homomorphism for \((H,S)\)_ is a quasi-homomorphism \(h\colon H\to L:T\) for some compact, normal, symmetric subset \(T\) of a locally compact group \(L\) such that:
1. for every compact \(Y\subseteq L\), \(h^{-1}[Y]\) is relatively compact in \(H\);
2. for every compact \(V\subseteq H\), \(h[V]\) is relatively compact in \(L\);
3. \(h[S]\subseteq T^{n}\) for some \(n\in\mathbb{N}\);
4. there is \(m\in\mathbb{N}\) such that for any compact \(Y,Z\subseteq L\) with \(T^{m}Y\cap T^{m}Z=\emptyset\), \(S\operatorname{cl}(h^{-1}[Y])\cap S\operatorname{cl}(h^{-1}[Z])=\emptyset\).
_Remark 4.2_.: Let \((H,S)\) be as in Definition 4.1 and let \(h\colon H\to L:T\) be a good quasi-homomorphism for \((H,S)\). Then:
1. for every \(m\in\mathbb{N}\) there is \(n_{m}\in\mathbb{N}\) with \(h[S^{m}]\subseteq T^{n_{m}}\);
2. for every \(n\in\mathbb{N}\) there exists \(m_{n}\in\mathbb{N}\) such that for any compact \(Y,Z\subseteq L\) with \(T^{m_{n}}Y\cap T^{m_{n}}Z=\emptyset\) we have \(S^{n}\operatorname{cl}(h^{-1}[Y])\cap S^{n}\operatorname{cl}(h^{-1}[Z])=\emptyset\).
Proof.: (1) follows by an easy induction from item (3) of Definition 4.1 and the assumption that \(T\) is an error set of \(h\).
(2) By (1) and the assumption that \(T\) is an error set of \(h\), we get that \(h[S^{n-1}h^{-1}[Y]]\subseteq T^{k_{n}}Y\) and \(h[S^{n-1}h^{-1}[Z]]\subseteq T^{k_{n}}Z\) for some \(k_{n}\). We will show that \(m_{n}:=k_{n}+m\) works for any \(m\) satisfying the conclusion of item (4) of Definition 4.1. For that assume that \(T^{m_{n}}Y\cap T^{m_{n}}Z=\emptyset\). Then \(T^{m}(T^{k_{n}}Y)\cap T^{m}(T^{k_{n}}Z)=\emptyset\) and \(T^{k_{n}}Y\) and \(T^{k_{n}}Z\) are compact. Hence, \(S\operatorname{cl}(h^{-1}[T^{k_{n}}Y])\cap S\operatorname{cl}(h^{-1}[T^{k_{n} }Z])=\emptyset\) by the choice of \(m\). As the last intersection contains \(S^{n}\operatorname{cl}(h^{-1}[Y])\cap S^{n}\operatorname{cl}(h^{-1}[Z])\), we get that \(S^{n}\operatorname{cl}(h^{-1}[Y])\cap S^{n}\operatorname{cl}(h^{-1}[Z])=\emptyset\).
**Definition 4.3**.: Let \(f\colon G\to H:S\) and \(h\colon G\to L:T\) be definable generalized locally compact models of \(X\). A _morphism_ from \(f\) to \(h\) is a function \(\rho\colon H\to L\) which is a good
quasi-homomorphism \(\rho\colon H\to L:T^{k}\) for \((H,S)\), where \(k\in\mathbb{N}\) is such that \(\rho(f(g))\in h(g)T^{k}\) for all \(g\in G\). The set of morphisms from \(f\) to \(h\) will be denoted by \(\operatorname{Mor}(f,h)\).
_Remark 4.4_.: Morphisms are closed under composition, and so the class of definable locally compact models of \(X\) in the generalized sense with morphisms forms a category.
Proof.: Let \(f_{1}\colon G\to H_{1}:S_{1}\), \(f_{2}\colon G\to H_{2}:S_{2}\), and \(f_{3}\colon G\to H_{3}:S_{3}\) be generalized definable locally compact models, and \(\rho\in\operatorname{Mor}(f_{1},f_{2})\), \(\delta\in\operatorname{Mor}(f_{2},f_{3})\). The goal is to show that there is \(k\) such that \(\operatorname{error}_{r}(\delta\rho)=\{\delta(\rho(y))^{-1}\delta(\rho(x))^{-1} \delta(\rho(xy)):x,y\in G\}\subseteq S_{3}^{k}\) and \(\delta(\rho(f_{1}(g)))\in f_{3}(g)S_{3}^{k}\) for all \(g\in G\). Indeed, once we prove it, the fact that \(\delta\rho\colon H_{1}\to H_{3}:S_{3}^{k}\) is a good quasi-homomorphism for \((H_{1},S_{1})\) easily follows using Remark 4.2 which we leave as an exercise.
Let \(k_{1}\) and \(k_{2}\) be numbers witnessing that \(\rho\) and \(\delta\) are morphisms, respectively, and let \(n_{k_{1}}\) be the number from Remark 4.2(1) applied to the good quasi-homomorphism \(\delta\colon H_{2}\to H_{3}:S_{3}^{k_{2}}\) for \((H_{2},S_{2})\).
Regarding the first part of our goal, we have
\[\delta(\rho(y))^{-1}\delta(\rho(x))^{-1}\delta(\rho(xy))\in S_{3}^{k_{2}} \delta(\rho(x)\rho(y))^{-1}\delta(\rho(xy))\subseteq S_{3}^{k_{2}+2k_{2}} \delta((\rho(x)\rho(y))^{-1})\delta(\rho(xy))\subseteq\]
\[S_{3}^{k_{2}+2k_{2}+k_{2}}\delta(\rho(y)^{-1}\rho(x)^{-1}\rho(xy))\subseteq S_ {3}^{4k_{2}}\delta[S_{2}^{k_{1}}]\subseteq S_{3}^{4k_{2}+k_{2}n_{k_{1}}}.\]
Regarding the second part of our goal, we have
\[\delta(\rho(f_{1}(g)))\in\delta[f_{2}(g)S_{2}^{k_{1}}]\subseteq\delta(f_{2}(g ))\delta[S_{2}^{k_{1}}]S_{3}^{k_{2}}\subseteq f_{3}(g)S_{3}^{k_{2}+k_{2}n_{k_{ 1}}+k_{2}}=f_{3}(g)S_{3}^{2k_{2}+k_{2}n_{k_{1}}}.\]
We conclude that \(k:=4k_{2}+k_{2}n_{k_{1}}\) works.
The obtained category will be later modified to get that the generalized definable locally compact model from Theorem 3.25 is an initial object, as for the above category we will only obtain existence of a morphism and "approximate uniqueness". Before going to these main issues, let us make one more basic observation.
**Proposition 4.5**.: _Let \(f\colon G\to H:S\) be a generalized definable locally compact model of \(X\) and let \(h\colon H\to L:T\) be a good quasi-homomorphism for \((H,S)\). Then there is \(n\in\mathbb{N}\) such that \(h\circ f\colon G\to L:T^{n}\) is a generalized definable locally compact model of \(X\) and \(h\in\operatorname{Mor}(f,h\circ f)\)._
Proof.: The fact \(h\circ f\colon G\to L:T^{n}\) is a quasi-homomorphism for some \(n\) follows from:
\[h(f(y))^{-1}h(f(x))^{-1}h(f(xy))\in Th(f(x)f(y))^{-1}h(f(xy)) \subseteq T^{3}h((f(x)f(y))^{-1})h(f(xy))\subseteq\] \[T^{4}h(f(y)^{-1}f(x)^{-1}f(xy))\subseteq T^{4}h[S]\subseteq T^{ 4+n},\]
where \(n\) is a number witnessing item (3) of Definition 4.1 applied to the good quasi-homomorphism \(h\).
To check item (1) of Definition 3.1 for \(h\circ f\), consider any compact \(V\subseteq L\). Then \(h^{-1}[V]\) is relatively compact, so \((h\circ f)^{-1}[V]=f^{-1}[h^{-1}[V]]\subseteq X^{i}\) for some \(i\).
To see item (2) of Definition 3.1, note that \(\operatorname{cl}(f[X])\) being compact implies that \(h[\operatorname{cl}(f[X])]\) is relatively compact, and so \(\operatorname{cl}(h[f[X]])\) is compact.
To see that item (3) of Definition 3.1 holds for \(h\circ f\), choose \(l\) witnessing item (3) of Definition 3.1 for \(f\). Next, choose \(n\) so big that \(T^{n}\) is an error set of \(h\circ f\) (the existence of such an \(n\) was justified at the beginning of the proof) and for any compact \(Y,Z\subseteq L\) with \(T^{n}Y\cap T^{n}Z=\emptyset\), \(S^{l}\operatorname{cl}(h^{-1}[Y])\cap S^{l}\operatorname{cl}(h^{-1}[Z])=\emptyset\) (the existence of such an \(n\) is guaranteed by Remark 4.2(2)). Then for any compact \(Y,Z\subseteq L\) with \(T^{n}Y\cap T^{n}Z=\emptyset\) we have that \((h\circ f)^{-1}[Y]\) and \((h\circ f)^{-1}[Z]\) can be separated by a definable set.
The fact that \(h\in\operatorname{Mor}(f,h\circ f)\) is trivial.
**Theorem 4.6**.: _(Universality of \(f\colon G\to u\mathcal{M}/H(u\mathcal{M})\): existence) Let \(f\colon G\to u\mathcal{M}/H(u\mathcal{M}):C\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to u\mathcal{M}/H(u\mathcal{M})\) be the generalized definable locally compact model of \(X\) from Theorem 3.25. Let \(h\colon G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to G\to\mathcal{M}\to G\to\mathcal{M}
\(H:S\) be an arbitrary generalized definable locally compact model of \(X\). Then there exists a morphism \(\widetilde{h}\in\operatorname{Mor}(f,h)\).
_More precisely, define \(h_{M}\colon S_{G}(M)\to H\) by picking \(h_{M}(p)\) arbitrarily from the set \(\bigcap_{\varphi(x)\in p}\operatorname{cl}(h[\varphi(G)])\). Next, define \(h^{*}\colon\bar{G}\to H\) by \(h^{*}(a):=h_{M}(\operatorname{tp}(a/M))\), \(\bar{h}\colon S_{G,M}(N)\to H\) by \(\bar{h}(p):=h_{M}(p|_{M})\), and finally \(\widetilde{h}\colon u\mathcal{M}/H(u\mathcal{M})\to H\) by picking \(\widetilde{h}(p/H(u\mathcal{M}))\) arbitrarily from the set \(\bar{h}[pH(u\mathcal{M})]\). Then \(h^{*},\bar{h},\widetilde{h}\) are quasi-homomorphisms with the distinguished error set being \(S^{n}\) for some \(n\in\mathbb{N}\) independent of the choice of \(h\), and \(\widetilde{h}\in\operatorname{Mor}(f,h)\)._
Proof.: The proof is diced into parts. Items (1), (2), (6) below show that \(h^{*},\bar{h},\widetilde{h}\) are quasi-homomorphisms with suitable error sets. Items (6)-(11) show that \(\widetilde{h}\in\operatorname{Mor}(f,h)\).
Take \(l\in\mathbb{N}\) witnessing item (3) of Definition 3.1 for \(h\).
(1) \(h^{*}\colon\bar{G}\to H:S^{4l+1}\).
We have
\[h^{*}(ab)\in\bigcap_{\varphi(x)\in\operatorname{tp}(a/M),\psi(x) \in\operatorname{tp}(b/M)}\overline{h[\varphi(G)\cdot\psi(G)]}\subseteq \bigcap_{\varphi(x),\psi(x)}\overline{h[\varphi(G)]h[\psi(G)]S}=\] \[\bigcap_{\varphi(x),\psi(x)}\left(\overline{h[\varphi(G)]}\cdot \overline{h[\psi(G)]}\cdot S\right)=\bigcap_{\varphi(x)}\overline{h[\varphi(G )]}\cdot\bigcap_{\psi(x)}\overline{h[\psi(G)]}\cdot S,\]
where \(\varphi(x)\) ranges over \(\operatorname{tp}(a/M)\) and \(\psi(x)\) over \(\operatorname{tp}(b/M)\). (The last two equalities follow from compactness of \(\overline{h[\varphi(G)]}\), \(\overline{h[\psi(G)]}\), and \(C\) for sufficiently small \(\varphi(G)\) and \(\psi(G)\).) Therefore,
\[(*)\qquad h^{*}(ab)=\alpha\beta\gamma\]
for some \(\alpha\in\bigcap_{\varphi(x)\in\operatorname{tp}(a/M)}\overline{h[\varphi(G )]}\), \(\beta\in\bigcap_{\psi(x)\in\operatorname{tp}(b/M)}\overline{h[\psi(G)]}\), and \(\gamma\in S\).
We claim that
\[(**)\qquad S^{l}\alpha\cap S^{l}h^{*}(a)\neq\emptyset.\]
Suppose not. By compactness of \(S\) and local compactness of \(H\), there are compact neighborhoods \(F_{1}\) of \(\alpha\) and \(F_{2}\) of \(h^{*}(a)\) such that \(S^{l}F_{1}\cap S^{l}F_{2}=\emptyset\). Then, by the choice of \(l\), there is a formula \(\theta(x)\in L_{M}\) such that \(h^{-1}[F_{1}]\subseteq\theta(M)\) and \(h^{-1}[F_{2}]\subseteq G\vartheta(M)\). If \(\theta(x)\in\operatorname{tp}(a/M)\), then \(h^{*}(a)\in\overline{h[\theta(M)]}\subseteq\overline{F_{2}^{c}}\) which contradicts the fact that \(F_{2}\) is a neighborhood of \(h^{*}(a)\). If \(-\theta(x)\in\operatorname{tp}(a/M)\), then \(\alpha\in\overline{h[(-\theta(M)]}\subseteq\overline{F_{1}^{c}}\) which contradicts the fact that \(F_{1}\) is a neighborhood of \(\alpha\). So \((**)\) has been proved. Analogously, \(S^{l}\beta\cap S^{l}h^{*}(b)\neq\emptyset\). From these two observations and \((*)\), we get \(h^{*}(ab)\in S^{2l}h^{*}(a)S^{2l}h^{*}(b)S=S^{4l+1}h^{*}(a)h^{*}(b)\). Thus, (1) has been proved.
By (1) and the definition of the semigroup operation \(*\) on \(S_{G,M}(N)\), we immediately get
(2) \(\bar{h}\colon S_{G,M}(N)\to H:S^{4l+1}\).
(3) \(\bar{h}(ugu)\in h(g)S^{4(4l+1)}\).
Item (3) follows by the following computation
\[h(g)^{-1}\bar{h}(ugu)=h(g)^{-1}\bar{h}(u)h(g)\bar{h}(u)S^{2(4l+1)}\subseteq h( g)^{-1}S^{4l+1}h(g)S^{4l+1}S^{2(4l+1)}=S^{4(4l+1)},\]
which uses (2) and the fact that \(\bar{h}\restriction_{G}=h\) (which follows from the formulas for \(\bar{h}\) and \(h_{M}\)).
(4) _For every compact \(V\subseteq H\), \(\bar{h}^{-1}[V]\subseteq S_{X^{i},M}(N)\) for some \(i\), and so \(\bar{h}^{-1}[V]\cap u\mathcal{M}\) is relatively quasi-compact in the \(\tau\)-topology._
Chose a compact neighborhood \(U\) of \(V\). We have \(h^{-1}[U]\subseteq X^{i}\) for some \(i\), and we check that \(i\) is good. If not, there is \(p\in S_{G,M}(N)\backslash S_{X^{i},M}(N)\) with \(\bar{h}(p)\in V\). Then \(\bar{h}(p)\in\overline{h[G\backslash X^{i}]}\subseteq\overline{U^{c}}\) which is disjoint from \(V\) as \(U\) is a neighborhood of \(V\), a contradiction. This implies that \(\bar{h}^{-1}[V]\cap u\mathcal{M}\) is relatively quasi-compact in the \(\tau\)-topology by the argument in Claim 2 of the proof of Proposition 3.21 and the final paragraph of the proof of that proposition.
(5) \(\bar{h}[S_{X^{i},M}(N)]\) _is relatively compact for every \(i\in\mathbb{N}\)._
This is immediate from the fact that \(\bar{h}[S_{X^{i},M}(N)]\subseteq\overline{h[X^{i}]}\) and \(\overline{h[X^{i}]}\) is compact.
In order to show that \(\widetilde{h}\in\operatorname{Mor}(f,h)\), we will use the above observations and the following claims.
**Claim 1**.:
1. \(h^{*}(a^{-1})\in h^{*}(a)^{-1}S^{2(4l+1)}\)_._
2. \(h^{*}(ab^{-1})\in S^{3(4l+1)}\) _for every_ \(a\equiv_{M}b\)_._
3. \(h^{*}[F_{n}]\subseteq S^{(4n-1)(4l+1)}\)_._
Proof.: (i) follows from (1). The computation in (ii) is as follows: \(h^{*}(ab^{-1})\in h^{*}(a)h^{*}(b^{-1})S^{4l+1}\subseteq h^{*}(a)h^{*}(b)^{-1} S^{3(4l+1)}=h^{*}(a)h^{*}(a)^{-1}S^{3(4l+1)}=S^{3(4l+1)}\), where we used (1), (i), and the definition of \(h^{*}\). Finally, (iii) follows from (ii) and (1). \(\Box\)(claim)
**Claim 2**.: \(\tilde{h}(p/H(u\mathcal{M}))\in\tilde{h}(p)S^{12(4l+1)}\)_._
Proof.: By the definition of \(\tilde{h}\), \(\tilde{h}(p/H(u\mathcal{M}))=\bar{h}(q)\) for some \(q\in u\mathcal{M}\) and \(r\in H(u\mathcal{M})\) such that \(q=pr\). By Lemma 3.30, \(H(u\mathcal{M})\subseteq\tilde{F}_{3}\cap u\mathcal{M}\), so by Claim 1(iii), \(\bar{h}[H(u\mathcal{M})]\subseteq S^{11(4l+1)}\). Therefore, by (2), \(\bar{h}(p/H(u\mathcal{M}))=\bar{h}(q)\in\bar{h}(p)\bar{h}(r)S^{4l+1}\subseteq \bar{h}(p)S^{12(4l+1)}\). \(\Box\)(claim)
(6) \(\tilde{h}\colon u\mathcal{M}/H(u\mathcal{M})\to H:S^{37(4l+1)}\)
This follows from (2) and Claim 2. Namely, we have: \(\tilde{h}(p/H(u\mathcal{M})\cdot q/H(u\mathcal{M}))=\tilde{h}(pq/H(u\mathcal{ M}))\in\tilde{h}(pq)S^{12(4l+1)}\subseteq\bar{h}(p)\bar{h}(q)S^{13(4l+1)}\subseteq \tilde{h}(p/H(u\mathcal{M}))\tilde{h}(q/H(u\mathcal{M}))S^{37(4l+1)}\).
(7) \(\tilde{h}(f(g))\in h(g)S^{16(4l+1)}\).
This follows from (3) and Claim 2. Namely: \(\tilde{h}(f(g))=\tilde{h}(ugu/H(u\mathcal{M}))\in\bar{h}(ugu)S^{12(4l+1)} \in h(g)S^{4(4l+1)}S^{12(4l+1)}=h(g)S^{16(4l+1)}\).
(8) _For every compact \(V\subseteq H\), \(\tilde{h}^{-1}[V]\) is relatively compact._
This follows from (4). Namely, by the definition of \(\tilde{h}\), \(\tilde{h}^{-1}[V]\subseteq\pi[\bar{h}^{-1}[V]\cap u\mathcal{M}]\) (where \(\pi\colon u\mathcal{M}\to u\mathcal{M}/H(u\mathcal{M})\) is the quotient map). By (4), \(\operatorname{cl}_{\tau}(\tilde{h}^{-1}[V]\cap u\mathcal{M})\) is quasi-compact, and so \(\operatorname{cl}(\pi[\bar{h}^{-1}[V]\cap u\mathcal{M}])=\pi[\operatorname{ cl}_{\tau}(\tilde{h}^{-1}[V]\cap u\mathcal{M})]\) is compact. Therefore, \(\operatorname{cl}(\bar{h}^{-1}[V])\) being a closed subset of \(\operatorname{cl}(\pi[\bar{h}^{-1}[V]\cap u\mathcal{M}])\) is also compact.
(9) _For every compact \(V\subseteq u\mathcal{M}/H(u\mathcal{M})\), \(\tilde{h}[V]\) is relatively compact in \(H\)._
By the definition of \(\bar{h}\), \(\bar{h}[V]\subseteq\bar{h}[\pi^{-1}[V]]\). By Lemmas 3.31 and 3.30, the set \(\pi^{-1}[V]\) is contained in some \(S_{X^{i},M}(N)\). Thus, by (5), \(\bar{h}[\pi^{-1}[V]]\) is relatively compact, and so \(\tilde{h}[V]\) is relatively compact, too.
(10) \(\tilde{h}[C]\subseteq S^{51(4l+1)}\).
By Lemma 3.29(4), \(C\subseteq(\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M})\). Hence, using Claim 2, \(\tilde{h}[C]\subseteq\bar{h}[\tilde{F}_{10}]S^{12(4l+1)}\). By Claim 1(iii), \(\tilde{h}[\tilde{F}_{10}]\subseteq S^{39(4l+1)}\). Therefore, \(\tilde{h}[C]\subseteq S^{51(4l+1)}\).
(11) _There is \(m\in\mathbb{N}\) such that for any compact \(Y,Z\subseteq H\) with \(S^{m}Y\cap S^{m}Z=\emptyset\), \(C\operatorname{cl}(\tilde{h}^{-1}[Y])\cap C\operatorname{cl}(\tilde{h}^{-1}[Z ])=\emptyset\)._
By Lemma 3.29(4), \(C\subseteq(\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M})\), so it is enough to find \(m\) such that
\[(!)\qquad S^{m}Y\cap S^{m}Z=\emptyset\]
implies
\[(!)\qquad(\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M})\operatorname{cl}( \tilde{h}^{-1}[Y])\cap(\tilde{F}_{10}\cap u\mathcal{M})/H(u\mathcal{M}) \operatorname{cl}(\tilde{h}^{-1}[Z])=\emptyset.\]
Since \(\bar{h}^{-1}[Y]\subseteq\pi[\bar{h}^{-1}[Y]\cap u\mathcal{M}]\), \(\bar{h}^{-1}[Z]\subseteq\pi[\bar{h}^{-1}[Z]\cap u\mathcal{M}]\), and \(\bar{h}^{-1}[Y]\cap u\mathcal{M}\) as well as \(\bar{h}^{-1}[Z]\cap u\mathcal{M}\) are relatively quasi-compact by (4), we deduce that \(\operatorname{cl}(\tilde{h}^{-1}[Y])\subseteq\operatorname{cl}(\pi[\bar{h}^{-1}[ Y]\cap u\mathcal{M}])=\pi[\operatorname{cl}_{\tau}(\tilde{h}^{-1}[Y]\cap u \mathcal{M})]\) and \(\operatorname{cl}(\tilde{h}^{-1}[Z])\subseteq\operatorname{cl}(\pi[\bar{h}^{-1 }[Z]\cap u\mathcal{M}])=\pi[\operatorname{cl}_{\tau}(\tilde{h}^{-1}[Z]\cap u \mathcal{M})]\). We conclude that in order to show (\(\dagger\)), it is enough to show that \((\tilde{F}_{10}\cap u\mathcal{M})H(u\mathcal{M})\operatorname{cl}_{\tau}(\tilde{h}^ {-1}[Y]\cap u\mathcal{M})\cap(\tilde{F}_{10}\cap u\mathcal{M})H(u\mathcal{M}) \operatorname{cl}_{\tau}(\tilde{h}^{-1}[Z]\cap u\mathcal{M})=\emptyset\). By virtue of Lemma 3.30, this boils down to
\[(\dagger\dagger)\qquad(\tilde{F}_{13}\cap u\mathcal{M})\operatorname{cl}_{\tau}( \bar{h}^{-1}[Y]\cap u\mathcal{M})\cap(\tilde{F}_{13}\cap u\mathcal{M}) \operatorname{cl}_{\tau}(\bar{h}^{-1}[Z]\cap u\mathcal{M})=\emptyset.\]
We will show that \(m:=56(4l+1)+2l\) works. Suppose for a contradiction that (!) holds for this \(m\), whereas (\(\dagger\dagger\)) fails.
By (4), \(\bar{h}^{-1}[Y]\) and \(\bar{h}^{-1}[Z]\) are contained in some \(S_{X^{n},M}(N)\) which is closed in \(S_{G,M}(N)\). Thus, by the defintion of \(\operatorname{cl}_{\tau}\) and an easy compactness argument, we get that any element \(p\) in the intersection from \((\dagger\dagger)\) is of the form
\[\operatorname{tp}(a_{1}b_{1}^{-1}\ldots a_{13}b_{13}^{-1}\alpha\beta/N)= \operatorname{tp}(a_{1}^{\prime}b_{1}^{\prime-1}\ldots a_{13}^{\prime}b_{13}^{ \prime-1}\alpha^{\prime}\beta^{\prime}/N)\]
for some \(a_{i},b_{i},a_{i}^{\prime},b_{i}^{\prime},\alpha,\alpha^{\prime},\beta,\beta^ {\prime}\in\bar{G}\) satisfying \(a_{i}\equiv_{M}b_{i}\), \(a_{i}^{\prime}\equiv_{M}b_{i}^{\prime}\), \(\alpha\models u\), \(\alpha^{\prime}\models u\), \(\operatorname{tp}(\beta/N)\in\overline{h^{-1}[Y]}\), \(\operatorname{tp}(\beta^{\prime}/N)\in\overline{h^{-1}[Z]}\), where \(\overline{h^{-1}[Y]}\) and \(\overline{h^{-1}[Z]}\) are closures computed in \(S_{G,M}(N)\). Pick such an element \(p\). By Lemma 3.27(1), it equals
\[\operatorname{tp}(a_{1}b_{1}^{-1}\ldots a_{14}b_{14}^{-1}\beta/N)= \operatorname{tp}(a_{1}^{\prime}b_{1}^{\prime-1}\ldots a_{14}^{\prime}b_{14}^ {\prime-1}\beta^{\prime}/N)\]
for some \(a_{14},b_{14},a_{14}^{\prime},b_{14}^{\prime}\in\bar{G}\) with \(a_{14}\equiv_{M}b_{14}\) and \(a_{14}^{\prime}\equiv_{M}b_{14}^{\prime}\).
**Claim 3**.: \(h^{*}(\beta)\in S^{2l}Y\) _and \(h^{*}(\beta^{\prime})\in S^{2l}Z\)._
Proof.: Suppose for a contradiction that \(h^{*}(\beta)\notin S^{2l}Y\). So \(S^{l}h^{*}(\beta)\cap S^{l}Y\neq\emptyset\). Then \(S^{l}U\cap S^{l}V=\emptyset\) for some compact neighborhoods \(U\) of \(h^{*}(\beta)\) and \(V\) of \(Y\). By the choice of \(l\), there is a formula \(\theta(x)\in L_{M}\) such that \(h^{-1}[U]\subseteq\theta(G)\) and \(h^{-1}[V]\subseteq G\backslash\theta(G)\).
Case 1. \(\theta(x)\in\operatorname{tp}(\beta/M)\). Since \(\operatorname{tp}(\beta/N)\in\overline{\bar{h}^{-1}[Y]}\), there is \(q\in[\theta(x)]\cap\bar{h}^{-1}[Y]\). Then \(\bar{h}(q)\in\overline{h[\theta(G)]}\cap Y\). On the other hand, \(h[\theta(G)]\subseteq V^{c}\), which implies that \(\overline{h[\theta(G)]}\cap Y=\emptyset\), because \(V\) is a neighborhood of \(Y\). This is a contradiction.
Case2. \(-\theta(x)\in\operatorname{tp}(\beta/M)\). Then \(h^{*}(\beta)\in\overline{h[G\backslash\theta(G)]}\). On the other hand, \(h[G\backslash\theta(G)]\subseteq U^{c}\), which implies that \(h^{*}(\beta)\notin\overline{h[G\backslash\theta(G)]}\), because \(U\) is a neighborhood of \(h^{*}(\beta)\). This is a contradiction.
Using (1), Claim 1(iii), and Claim 3, we get:
\[\bar{h}(p)=h^{*}\left(\prod_{i=1}^{14}a_{i}b_{i}^{-1}\beta\right)\in h^{*} \left(\prod_{i=1}^{14}a_{i}b_{i}^{-1}\right)h^{*}(\beta)S^{4l+1}\subseteq S^{ 55(4l+1)}S^{2l}YS^{4l+1}=S^{56(4l+1)+2l}Y.\]
Similarly, \(\bar{h}(p)\in S^{56(4l+1)+2l}Z.\) Hence, \(S^{56(4l+1)+2l}Y\cap S^{56(4l+1)+2l}Z\neq\emptyset\), which contradicts (!) for \(m:=56(4l+1)+2l\).
The usual notion of definable map from a definable subset \(D\) of \(M\) to a compact space is explained in terms of a factorization of this map through the type space \(S_{D}(M)\) via a continuous map. The notion of definability in item (3) of Definition 3.1 is less obvious. The next corollary explains it using a factorization through \(u\mathcal{M}/H(u\mathcal{M})\).
**Corollary 4.7**.: _A quasi-homomorphism \(h\colon G\to H:S\) with a compact, normal, symmetric subset \(S\) of a locally compact group \(H\) is a generalized definable locally compact model of \(X\) if and only if there exists a good quasi-homomorphism \(\tilde{h}\colon u\mathcal{M}/H(u\mathcal{M})\to H:S^{m}\) for \((u\mathcal{M}/H(u\mathcal{M}),C)\), for some \(m\in\mathbb{N}\) such that \(\tilde{h}(f(g))\in h(g)S^{m}\) for all \(g\in G\) (where \(f\colon G\to u\mathcal{M}/H(u\mathcal{M}):C\) is the generalized definable locally compact model of \(X\) from Theorem 3.25)._
Proof.: The implication \((\Rightarrow)\) follows directly from Theorem 4.6.
\((\Leftarrow)\) We check items (1), (2), (3) of Definition 3.1 applied to \(h\).
(1) For any compact \(V\subseteq H\) the set \(S^{m}V\) is also compact, and so \(\tilde{h}^{-1}[S^{m}V]\) is relatively compact. Thus, \(h^{-1}[V]\subseteq f^{-1}[\tilde{h}^{-1}[S^{m}V]]\) is contained in some \(X^{i}\).
(2) We know that \(f[X]\) is relatively compact, and so \(\tilde{h}[f[X]]\) is relatively compact. Hence, \(S^{m}\tilde{h}[f[X]]\) is relatively compact. Since \(h[X]\subseteq S^{m}\tilde{h}[f[X]]\), we conclude that \(h[X]\) is relatively compact, too.
(3) We will show that \(l:=m+m_{2}\) works, where \(m_{2}\) is a number witnessing that Remark 4.2(2) holds for the good quasi-homomorphism \(\tilde{h}\). For that take any compact \(Y,Z\subseteq H\) with
\(S^{l}Y\cap S^{l}Z=\emptyset\). Then \(S^{m_{2}}(S^{m}Y)\cap S^{m_{2}}(S^{m}Z)=\emptyset\) and \(S^{m}Y\), \(S^{m}Z\) are compact. So, by the choice of \(m_{2}\),
\[C^{2}\operatorname{cl}(\tilde{h}^{-1}[S^{m}Y])\cap C^{2}\operatorname{cl}( \tilde{h}^{-1}[S^{m}Z])=\emptyset.\]
Therefore, \(f^{-1}[\tilde{h}^{-1}[S^{m}Y]]\) and \(f^{-1}[\tilde{h}^{-1}[S^{m}Z]]\) can be separated by a definable set. Since \(h^{-1}[Y]\subseteq f^{-1}[\tilde{h}^{-1}[S^{m}Y]]\) and \(h^{-1}[Z]\subseteq f^{-1}[\tilde{h}^{-1}[S^{m}Z]]\), we conclude that \(h^{-1}[Y]\) and \(h^{-1}[Z]\) can be separated by a definable set.
**Theorem 4.8**.: _(Universality of \(f\colon G\to u\mathcal{M}/H(u\mathcal{M})\): approximate uniqueness) Take \(f\colon G\to u\mathcal{M}/H(u\mathcal{M}):C\) from Theorem 3.25. Let \(h\colon G\to H:S\) be an arbitrary generalized definable locally compact model of \(X\), and \(\rho\in\operatorname{Mor}(f,h)\) any morphism. Let \(\widetilde{h}\in\operatorname{Mor}(f,h)\) be a (non uniquely determined) morphism constructed in Theorem 4.6. Then there is \(n\in\mathbb{N}\) (depending only on \(l\) in item (3) of Definition 3.1 applied to \(h\), and on \(k\) in Definition 4.3 and \(m_{2}\) in Remark 4.2(2) both applied to \(\rho\)) such that \(\rho(p/H(u\mathcal{M}))\in\tilde{h}(p/H(u\mathcal{M}))S^{n}\) for all \(p\in u\mathcal{M}\)._
Proof.: We will show that \(n:=4\max(m_{2},k+12(4l+1))\) works. Suppose not, i.e. \(\rho(p/H(u\mathcal{M}))\notin\tilde{h}(p/H(u\mathcal{M}))S^{n}\) for some \(p\in u\mathcal{M}\). Then \(S^{\frac{n}{2}}\rho(p/H(u\mathcal{M}))\cap S^{\frac{n}{2}}\tilde{h}(p/H(u \mathcal{M}))=\emptyset\). So we can find a compact neighborhood \(V\) of the neutral element in \(H\) such that
\[S^{\frac{n}{2}}\rho(p/H(u\mathcal{M}))V\cap S^{\frac{n}{2}}\tilde{h}(p/H(u \mathcal{M}))V=\emptyset.\]
Put \(V^{\prime}:=VS^{\frac{n}{4}}\). Then
\[S^{\frac{n}{4}}\rho(p/H(u\mathcal{M}))V^{\prime}\cap S^{\frac{n}{4}}\tilde{h}( p/H(u\mathcal{M}))V^{\prime}=\emptyset,\]
and \(\rho(p/H(u\mathcal{M}))V^{\prime}\) and \(\tilde{h}(p/H(u\mathcal{M}))V^{\prime}\) are compact sets. Since \(n/4\geq m_{2}\), we get
\[(*)\qquad C^{2}\operatorname{cl}(\rho^{-1}[\rho(p/H(u\mathcal{M}))V^{\prime}] )\cap C^{2}\operatorname{cl}(\rho^{-1}[\tilde{h}(p/H(u\mathcal{M}))V^{\prime} ])=\emptyset.\]
Put \(P:=\operatorname{cl}(\rho^{-1}[\rho(p/H(u\mathcal{M}))V^{\prime}])\) and \(Q:=\operatorname{cl}(\rho^{-1}[\tilde{h}(p/H(u\mathcal{M}))V^{\prime}])\). By the proof of part (7) of the proof of Theorem 3.25, we conclude from \((*)\) that there exists \(\theta(x)\in L_{M}\) such that
\[(**)\qquad\hat{f}^{-1}[P]\subseteq[\theta(x)]\text{ and }\hat{f}^{-1}[Q] \subseteq[\neg\theta(x)].\]
Since \(p/H(u\mathcal{M})\in P\) and \(\hat{f}(p)=upu/H(u\mathcal{M})=p/H(u\mathcal{M})\), we see that \(p\in\hat{f}^{-1}[P]\), hence \(\theta(x)\in p\), and so \(\tilde{h}(p)\in\overline{h[\theta(G)]}\) (where \(\tilde{h}\) is chosen as in Theorem 4.6). By Claim 2 from the proof of Theorem 4.6, we conclude that \(\tilde{h}(p/H(u\mathcal{M}))\in\overline{h[\theta(G)]}S^{12(4l+1)}\). So
\[\begin{split}&\tilde{h}(p/H(u\mathcal{M}))\in\overline{\rho[f[ \theta(G)]]S^{k}}S^{12(4l+1)}=\overline{\rho[f[\theta(G)]]}S^{k+12(4l+1)} \subseteq\overline{\rho[\hat{f}[[\theta(x)]]]}S^{k+12(4l+1)}\subseteq\\ &\overline{\rho[Q^{c}]}S^{k+12(4l+1)}\subseteq\overline{(\tilde{ h}(p/H(u\mathcal{M}))V^{\prime})^{c}}S^{k+12(4l+1)}=\overline{(\tilde{h}(p/H(u \mathcal{M}))VS^{\frac{n}{4}})^{c}}S^{k+12(4l+1)}\subseteq\\ &\overline{(\tilde{h}(p/H(u\mathcal{M}))VS^{\frac{n}{4}})^{c}}S^{ \frac{n}{4}},\end{split}\]
where the first belonging is by the choice of \(k\), the first equality by compactness of \(S\), the first inclusion is obvious, the second follows by \((**)\), the next one by the definition of \(Q\), the next equality by the definition of \(V^{\prime}\), and the last inclusion since \(n/4\geq k+12(4l+1)\). Thus, \(\tilde{h}(p/H(u\mathcal{M}))\in\overline{(\tilde{h}(p/H(u\mathcal{M}))VS^{ \frac{n}{4}})^{c}}S^{\frac{n}{4}}\), which is impossible, because it implies that \(\tilde{h}(p/H(u\mathcal{M}))VS^{\frac{n}{4}}\cap(\tilde{h}(p/H(u\mathcal{M})) VS^{\frac{n}{4}})^{c}\neq\emptyset\).
To get full uniqueness (i.e. that \(f\) is the initial object) we have to modify the notion of morphism.
**Definition 4.9**.: Let \(f\colon G\to H:S\) and \(h\colon G\to L:T\) be generalized definable locally compact models of \(X\). Let \(\rho_{1},\rho^{\prime}_{1}\in\operatorname{Mor}(f,h)\). We say that \(\rho_{1}\) and \(\rho^{\prime}_{1}\) are _equivalent_ (symbolically, \(\rho_{1}\sim\rho^{\prime}_{1}\)) if for some \(l\in\mathbb{N}\), for every \(p\in H\) we have \(\rho^{\prime}_{1}(p)\in\rho_{1}(p)T^{l}\).
_Remark 4.10_.: \(\sim\) is an equivalence relation on \(\operatorname{Mor}(f,h)\).
**Proposition 4.11**.: _If \(f_{i}\colon G\to H_{i}:S_{i}\) for \(i\in\{1,2,3\}\) are generalized definable locally compact models of \(X\) and \(\rho_{1}\sim\rho_{1}^{\prime}\) belong to \(\operatorname{Mor}(f_{1},f_{2})\) and \(\rho_{2}\sim\rho_{2}^{\prime}\) belong to \(\operatorname{Mor}(f_{2},f_{3})\), then \(\rho_{2}\rho_{1}\sim\rho_{2}^{\prime}\rho_{1}^{\prime}\). Thus, all generalized definable locally compact models of \(X\) with morphisms modulo \(\sim\) form a category._
Proof.: Let \(l_{1}\) and \(l_{2}\) be witnesses for \(\rho_{1}\sim\rho_{1}^{\prime}\) and \(\rho_{2}\sim\rho_{2}^{\prime}\), that is \(\rho_{1}^{\prime}(p)\in\rho_{1}(p)S_{2}^{l_{1}}\) and \(\rho_{2}^{\prime}(q)\in\rho_{2}(q)S_{3}^{l_{2}}\) for all \(p\in H_{1}\) and \(q\in H_{2}\). Let \(k_{2}^{\prime}\) be a witness that \(\rho_{2}^{\prime}\in\operatorname{Mor}(f_{2},f_{3})\), that is \(\rho_{2}^{\prime}\colon H_{2}\to H_{3}:S_{3}^{k_{2}^{\prime}}\) and \(\rho_{2}^{\prime}(f_{2}(g))\in f_{3}(g)S_{3}^{k_{2}^{\prime}}\), and let \(n_{l_{1}}\) be the number from Remark 4.2(1) obtained for \(\rho_{2}^{\prime}\). Then, for every \(p\in H_{1}\) we have
\[\rho_{2}^{\prime}(\rho_{1}^{\prime}(p))\in\rho_{2}^{\prime}[\rho_{1}(p)S_{2}^ {l_{1}}]\subseteq\rho_{2}^{\prime}(\rho_{1}(p))\rho_{2}^{\prime}[S_{2}^{l_{1} }]S_{3}^{k_{2}^{\prime}}\subseteq\rho_{2}(\rho_{1}(p))S_{3}^{l_{2}}S_{3}^{k_{ 2}^{\prime}n_{l_{1}}}S_{3}^{k_{2}^{\prime}}=\rho_{2}(\rho_{1}(p))S_{3}^{k_{2}^ {\prime}n_{l_{1}}+l_{2}+k_{2}^{\prime}}.\]
Thus, for \(\rho_{1}\in\operatorname{Mor}(f_{1},f_{2})\) and \(\rho_{2}\in\operatorname{Mor}(f_{2},f_{3})\) we have a well-defined
\[\rho_{1}/\sim\circ\rho_{2}/\sim:=(\rho_{1}\circ\rho_{2})/\sim.\]
So it is clear that the all generalized definable locally compact models of \(X\) with morphisms modulo \(\sim\) form a category.
By Theorems 3.25, 4.6, and 4.8, we get the main result of this section.
**Corollary 4.12**.: _The generalized definable locally compact model \(f\colon G\to u\mathcal{M}/H(u\mathcal{M}):C\) from Theorem 3.25 is the initial object in the category from the last proposition._
We finish with some natural questions which arise in the special case of Theorem 4.6 when \(h:=f\), where \(f\colon G\to u\mathcal{M}/H(u\mathcal{M}):C\) is from Theorem 3.25. In this special case, the construction described in the second paragraph of Theorem 4.6 yields non uniquely determined functions \(\bar{f}\colon S_{G,M}(N)\to u\mathcal{M}/H(u\mathcal{M})\) and \(\tilde{f}\colon u\mathcal{M}/H(u\mathcal{M})\to u\mathcal{M}/H(u\mathcal{M})\) such that \(\tilde{f}\in\operatorname{Mor}(f,f)\). On the other hand, clearly \(\operatorname{id}\in\operatorname{Mor}(f,f)\). This leads to
**Question 4.13**.:
1. _Can we choose_ \(\tilde{f}\) _by the construction in Theorem_ 4.6 _so that_ \(\tilde{f}=\operatorname{id}\)_?_
2. _Can we choose_ \(\bar{f}\) _by the construction in Theorem_ 4.6 _so that_ \(\tilde{f}|_{u\mathcal{M}}\colon u\mathcal{M}\to u\mathcal{M}/H(u\mathcal{M})\) _is the quotient map?_
3. _Can we choose_ \(\bar{f}\) _by the construction in Theorem_ 4.6 _so that_ \(\bar{f}(p)=\hat{f}(p):=upu/H(u\mathcal{M})\) _for all_ \(p\in S_{G,M}(N)\)_?_
By how \(\tilde{f}\) is obtained from \(\bar{f}\), we see that a positive answer to (2) implies a positive answer to (1). Since \(\hat{f}(p)=p/H(u\mathcal{M})\) for all \(p\in u\mathcal{M}\), we get that a positive answer to (3) implies a positive answer to (2).
In the next section, the example with \(X\) being a definable, generic, symmetric subset of the universal cover \(\widetilde{\operatorname{SL}_{2}(\mathbb{R})}\) of \(\operatorname{SL}_{2}(\mathbb{R})\) will yield a negative answer to (3), but not to (2).
## 5. Compact case
In this section, we focus on the special case when the definable approximate subgroup \(X\) generates a group in finitely many steps. This is equivalent to \(G:=\langle X\rangle\) being a definable group in which \(X\) is a definable, generic, symmetric set (\(X\) being _generic_ in \(G\) means that finitely many left translates of \(X\) cover \(G\)). Thus, we will consider just this case or, slightly more generally, the case of a definable generic subset \(X\) of a definable group \(G\) (notice that then \(\langle X\rangle\) has finite index in \(G\)), which is fundamental in model theory.
In the case when \(G=\langle X\rangle\), the group \(u\mathcal{M}/H(u\mathcal{M})\) in the generalized definable locally compact model \(f\colon G\to u\mathcal{M}/H(u\mathcal{M})\) from Theorem 3.25 is compact, which follows from the last paragraph of the proof of Proposition 3.21, since \(u\mathcal{M}\subseteq S_{X^{n},M}(M)\) for some \(n\in\mathbb{N}\); in fact, in this case, all the topological dynamics developed in Subsection 3.1 boils down to the classical topological dynamics of the compact flow \(S_{G,M}(N)\), so \(u\mathcal{M}/H(u\mathcal{M})\) is compact.
In the more general context of \(X\) being a definable, generic, symmetric subset of a definable group \(G\), one can also use the compact group \(u\mathcal{M}/H(u\mathcal{M})\) computed for the compact \(G\)-flow \(S_{G,M}(N)\) and adapting (and even simplifying some parts of) the arguments from Subsection 3.2, we conclude with
**Theorem 5.1**.: _The function \(f\colon G\to u\mathcal{M}/H(u\mathcal{M})\) given by \(f(g):=ugu/H(u\mathcal{M})\) has the following properties._
1. \(f\) _is a quasi-homomorphism with compact, normal, symmetric error set_ \(C:=\mathrm{cl}_{\tau}(\tilde{F})\cup\mathrm{cl}_{\tau}(\tilde{F})^{-1}\)_, where:_ \[F_{n} :=\{x_{1}y_{1}^{-1}\ldots x_{n}y_{n}^{-1}:x_{i},y_{i}\in\bar{G}\text { and }x_{i}\equiv_{M}y_{i}\text{ for all }i\leqslant n\},\] \[\tilde{F}_{n} :=\{\mathrm{tp}(a/N)\in S_{G,M}(N):a\in F_{n}\},\] \[\tilde{F} :=((\tilde{F}_{7}\cap u\mathcal{M})/H(u\mathcal{M}))^{u\mathcal{ M}/H(u\mathcal{M})}.\] _Moreover,_ \((\tilde{F}_{3}\cap u\mathcal{M})/H(u\mathcal{M})\) _is an error set of_ \(f\)_._
2. \(f^{-1}[C]\subseteq X^{30}\)_._
3. _There is a compact neighborhood_ \(U\) _of the neutral element in_ \(u\mathcal{M}/H(u\mathcal{M})\) _such that_ \(f^{-1}[U]\subseteq X^{14}\) _and_ \(f^{-1}[UC]\subseteq X^{34}\)_._
4. _For any closed_ \(Z,Y\subseteq u\mathcal{M}/H(u\mathcal{M})\) _with_ \(C^{2}Y\cap C^{2}Z=\emptyset\) _the preimages_ \(f^{-1}[Y]\) _and_ \(f^{-1}[Z]\) _can be separated by a definable set._
Note that if \(X\) is definable, generic, but not symmetric, then replacing it by \(XX^{-1}\), we get a definable, generic, and symmetric set, and it is clear how to modify items (2) and (3) in this context. So the assumption that \(X\) is symmetric is rather minor.
The proof of Fact 3.4(1) adapts to
_Remark 5.2_.: For every neighborhood \(U\) of \(u/H(u\mathcal{M})\) the preimage \(f^{-1}[UC]\) is generic in \(G\), that is the preimage under \(f\) of any neighborhood of \(C\) is generic in \(G\).
Theorem 5.1(3) and Remark 5.2 can be thought of as a structural result on definable generic subsets of an arbitrary definable group \(G\). In concrete examples, this can lead to more precise information on generics.
In Subsection 5.1, we will illustrate it by the universal cover \(\widehat{\mathrm{SL}_{2}(\mathbb{R})}\) of \(\mathrm{SL}_{2}(\mathbb{R})\). Our analysis of \(\widehat{\mathrm{SL}_{2}(\mathbb{R})}\) also yields a negative answer to item (3) of Question 4.13, and a positive answer to item (2) in the special case of definable generics in \(\widehat{\mathrm{SL}_{2}(\mathbb{R})}\). Moreover, our analysis confirms a certain weakening of Newelski's conjecture (that we have had in mind for a while) in the special case of \(\widehat{\mathrm{SL}_{2}(\mathbb{R})}\). So we take the opportunity and state this weakened conjecture below.
Let \(G\) a group definable in a structure \(M\). Let \(N>M\) be \(|N|^{+}\)-saturated, and \(\mathfrak{C}>N\) a monster model. By \(\bar{G}\) we denote the interpretation of \(G\) in \(\mathfrak{C}\). Let \(u\mathcal{M}\) be the Ellis group of the flow \((G,S_{G,M}(N))\), and let \(\bar{G}^{00}_{M}\) be the smallest type-definable over \(M\) subgroup of \(\bar{G}\) which has bounded index. Newelski's conjecture says that the group epimorphism \(\theta\colon u\mathcal{M}\to\bar{G}/\bar{G}^{00}_{M}\) given by \(\theta(\mathrm{tp}(a/N)):=a/\bar{G}^{00}_{M}\) is an isomorphism under suitable assumptions on tameness of the ambient theory [20]. In [10], the conjecture was refuted for \(G:=\mathrm{SL}_{2}(\mathbb{R})\) treated as a group definable in \(M:=(\mathbb{R},+,\cdot)\), where the Ellis group turned out to be \(\mathbb{Z}_{2}\) while \(\bar{G}/\bar{G}^{00}_{M}\) is trivial. On the other hand, the conjecture was confirmed in [11] for definably amenable groups definable in NIP theories. In [11], we refined Newelski's epimorphism \(\theta\) obtaining a sequence of epimorphisms
\[u\mathcal{M}\to u\mathcal{M}/H(u\mathcal{M})\to\bar{G}/\bar{G}^{000}_{M}\to \bar{G}/\bar{G}^{00}_{M},\]
where \(\bar{G}^{000}_{M}\) is the smallest bounded index subgroup of \(\bar{G}\) which is invariant under \(\mathrm{Aut}(\mathfrak{C}/M)\). This leads to many counter-examples to Newelski's conjecture. Namely, whenever \(\bar{G}^{000}_{M}\neq\bar{G}^{00}_{M}\), then Newelski's conjecture fails; in fact, we proved that then even \(u\mathcal{M}/H(u\mathcal{M})\to\bar{G}/\bar{G}^{000}_{M}\) is
not an isomorphism. The first example where \(\bar{G}_{M}^{000}\neq\bar{G}_{M}^{00}\) was found in [12]: \(G:=\widehat{\mathrm{SL}_{2}(\mathbb{R})}\) treated as a group definable in the two-sorted structure \(M:=((\mathbb{R},+,\cdot),(\mathbb{Z},+))\) has this property. Many other examples were then found in [10], e.g. the non-abelian free groups equipped with the full structure. Another situation in which Newelski's conjecture fails is when \(H(u\mathcal{M})\) is nontrivial, equivalently when \(u\mathcal{M}\) is not Hausdorff in the \(\tau\)-topology. While in general we are able to find examples in which \(u\mathcal{M}\) is not Hausdorff, we have not found any such example with NIP. This leads to the following weakening of Newelski's conjecture.
**Conjecture 5.3**.: _If \(M\) has NIP, then \(u\mathcal{M}\) is Hausdorff._
From the above discussion, this is true for definably amenable groups definable in NIP theories. It is also true whenever \(u\mathcal{M}\) is finite, as the \(\tau\)-topology is \(T_{1}\) and so Hausdorff when \(u\mathcal{M}\) is finite. In Subsection 5.1, we will confirm it for \(G:=\widehat{\mathrm{SL}_{2}(\mathbb{R})}\) treated as a group definable in the two-sorted structure \(M:=((\mathbb{R},+,\cdot),(\mathbb{Z},+))\) which clearly has NIP; more precisely, the Ellis group in this case will turn out to be topologically isomorphic to the profinite completion \(\widehat{\mathbb{Z}}\) of \(\mathbb{Z}\).
### Case study of \(\widehat{\mathrm{SL}_{2}(\mathbb{R})}\)
From now on, \(M\) is the 2-sorted structure with the sorts \((\mathbb{R},+,\cdot)\) and \((\mathbb{Z},+)\), \(G:=\widehat{\mathrm{SL}_{2}(\mathbb{R})}\), and \(\tilde{G}:=\widehat{\mathrm{SL}_{2}(\mathbb{R})}\). So now \(\tilde{G}\) will play the role of \(G\) from the above discussion.
Recall that \(\tilde{G}\) can be written as \(\mathrm{SL}_{2}(\mathbb{R})\times\mathbb{Z}\) with the group operation given by \((a_{1},b_{1})(a_{2},b_{2}):=(a_{1}a_{2},b_{1}+b_{2}+h(b_{1},b_{2}))\), where \(h\colon G\times G\to\mathbb{Z}\) is the 2-cocycle defined as follows. For \(c,d\in\mathbb{R}\) put
\[c(d):=\left\{\begin{array}{ll}c,&\text{if }c\neq 0\\ d,&\text{if }c=0.\end{array}\right.\]
Then for any \(\left(\begin{array}{cc}a_{1}&b_{1}\\ c_{1}&d_{1}\end{array}\right),\left(\begin{array}{cc}a_{2}&b_{2}\\ c_{2}&d_{2}\end{array}\right)\in\mathrm{SL}_{2}(\mathbb{R})\), writing \(\left(\begin{array}{cc}a_{1}&b_{1}\\ c_{1}&d_{1}\end{array}\right)\,\cdot\,\left(\begin{array}{cc}a_{2}&b_{2}\\ c_{2}&d_{2}\end{array}\right)=\left(\begin{array}{cc}a_{3}&b_{3}\\ c_{3}&d_{3}\end{array}\right)\), we have
\[h\left(\left(\begin{array}{cc}a_{1}&b_{1}\\ c_{1}&d_{1}\end{array}\right),\left(\begin{array}{cc}a_{2}&b_{2}\\ c_{2}&d_{2}\end{array}\right)\right):=\left\{\begin{array}{ll}1,&\text{if }c_{1}(d_{1})>0,c_{2}(d_{2})>0,c_{3}(d_{3})<0, \\ -1,&\text{if }c_{1}(d_{1})<0,c_{2}(d_{2})<0,c_{3}(d_{3})>0,\\ 0,&\text{otherwise.}\end{array}\right.\]
From this formula, we see that \(h\) is definable in \(M\), and so \(\tilde{G}\) is definable in \(M\). It is clear that each set of the form \(G\times k\mathbb{Z}\) is a definable generic subset of \(\tilde{G}\). Using Theorem 5.1, we will deduce a weak converse.
**Proposition 5.4**.: _For every definable, generic, symmetric subset \(X\) of \(\tilde{G}\) there exists a nonzero \(k\in\mathbb{N}\) such that \(G\times k\mathbb{Z}\subseteq X^{696}\)._
Besides Theorem 5.1, we will need a few other ingredients, some of which will be also used in the proof of Proposition 5.13 below. The proof of Proposition 5.4 is given after the proof of Lemma 5.12.
By [12, Theorem 3.2], we know that \(\tilde{G}\) does not have any definable subgroups of finite index, so for every definable generic subset \(X\) of \(\tilde{G}\) we have that \(\langle X\rangle=\tilde{G}\). Hence, in this situation, Theorem 5.1 is a particular case of Theorem 3.25.
One of the ingredients will be _12-connectedness_ of \(\mathrm{SL}_{2}(\mathbb{R})\) which follows from [10, Theorem 6.5], which we will briefly discuss. By a _thick_ subset of a group \(H\) definable in a structure \(N\) we mean a definable symmetric subset \(Y\) of \(H\) for which there exists a positive \(m\in\mathbb{N}\) such that for every \(g_{1},\ldots,g_{m}\in H\) there are \(i<j\) with \(g_{i}^{-1}g_{j}\in Y\). Note that when \(Y\) is a definable generic, then \(Y^{-1}Y\) is thick. We will say that \(H\) is _\(n\)-connected_ if for every (definable) thick subset \(Y\) of \(H\) we have \(Y^{n}=H\). This is equivalent to saying that for \(P\) being the intersection
of all \(N\)-definable thick subsets of \(\bar{H}:=H(\mathfrak{C})\) (where \(\mathfrak{C}\succ N\) is a monster model) we have \(P^{n}=\bar{H}\). The next fact is a particular case of [10, Theorem 6.5].
**Fact 5.5**.: \(\mathrm{SL}_{2}(\mathbb{R})\) _is \(12\)-connected in any structure in which \(\mathrm{SL}_{2}(\mathbb{R})\) is definable, in particular in \(M\)._
Everywhere below \(\mathfrak{C}\succ M\) is a monster model, and bars are used to denote the interpretations of various objects in \(\mathfrak{C}\).
**Corollary 5.6**.: _Let \(X\) be a definable, generic, symmetric subset of \(\tilde{G}\). Then for every \(g\in\bar{G}\) there exists \(n\in\{-12,\ldots,0,\ldots,12\}\) such that \((g,n)\in\bar{X}^{24}\)._
Proof.: By Fact 5.5, \(\bar{G}=P^{12}\), where \(P\) is the intersection of all \(M\)-definable thick subsets of \(\bar{G}\). Hence, by [10, Lemma 1.3(1)], \(g=\prod_{i=1}^{12}a_{i}^{-1}b_{i}\) for some \(a_{i},b_{i}\in\bar{G}\) such that \(a_{i}\Theta_{M}b_{i}\), meaning that \((a_{i},b_{i})\) starts an infinite \(M\)-indiscernible sequence. Then clearly \(((a_{i},0),(b_{i},0))\) starts an infinite \(M\)-indiscernible sequence of pairs, i.e.
\[(*)\qquad(a_{i},0)\Theta_{M}(b_{i},0).\]
The corresponding entries of matrices \(a_{i}\) and \(b_{i}\) have the same sign (because they have the same type), so, using the explicit definition of \(h\) recalled above and the formula \(\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)^{-1}=\left(\begin{array}{cc}d&-b\\ -c&a\end{array}\right)\) for matrices in \(\mathrm{SL}_{2}(\mathbb{R})\), one easily checks that \(h(a_{i}^{-1},b_{i})=h(a_{i},a_{i}^{-1})\). Hence,
\[(a_{i},0)^{-1}(b_{i},0)=(a_{i}^{-1},-h(a_{i},a_{i}^{-1}))(b_{i},0)=(a_{i}^{-1} b_{i},h(a_{i}^{-1},b_{i})-h(a_{i},a_{i}^{-1}))=(a_{i}^{-1}b_{i},0).\]
As \(\mathrm{Im}(h)=\{-1,0,1\}\), the last thing implies that
\[\prod_{i=1}^{12}(a_{i},0)^{-1}(b_{i},0)\in\left\{\prod_{i=1}^{12}a_{i}^{-1}b_ {i}\right\}\times\{-12,\ldots,0,\ldots,12\}.\]
On the other hand, by thickness of \(\bar{X}^{2}\), \((*)\), and [10, Lemma 1.3(1)], we get that \(\prod_{i=1}^{12}(a_{i},0)^{-1}(b_{i},0)\in\bar{X}^{24}\). So there exists \(n\in\{-12,\ldots,0,\ldots,12\}\) such that \((g,n)\in\bar{X}^{24}\).
The topological dynamics of the \(G\)-flow \(S_{G}(\mathbb{R})\) was worked out in [1], including the computation of the Ellis group which turns out to be \(\mathbb{Z}_{2}\). We will also need the topological dynamics of the \(\tilde{G}\)-flow \(S_{\tilde{G}}(M)\) studied in [1, Section 5]. The results below which are stated without references can be found in [1, Section 5].
First of all, it is well-known that all types in \(S(M)\) are definable, because this is true for all types in \(S(\mathbb{R})\) and in \(S(\mathbb{Z})\) and there is no interaction between the two sorts of \(M\). So \(S_{G}(M)\) and \(S_{\tilde{G}}(M)\) coincide with \(S_{G,\mathrm{ext}}(M)\) and \(S_{\tilde{G},\mathrm{ext}}(M)\), respectively, and hence the Ellis semigroup operation on these sets is given by \(p*q=\mathrm{tp}(ab/M)\) for some/any \(b\models q\) and \(a\models p\) such that \(\mathrm{tp}(a/M,b)\) is a coheir over \(M\).
The Ellis group of the flow \(S_{G}(M)\) consists of two types \(q_{0},q_{1}\), where \(q_{0}:=\mathrm{tp}(A/M)\) and \(q_{1}:=\mathrm{tp}(-A/M)\) for
\[A:=\left(\begin{array}{cc}(1-x)b&(1-x)c-yb^{-1}\\ yb&yc+(1-x)b^{-1}\end{array}\right)\]
where \(b>\mathbb{R}\), \(c>\mathrm{dcl}(\mathbb{R},b)\), \(x\) positive infinitesimal, \(y\) positive with \((1-x)^{2}+y^{2}=1\), and \(\mathrm{tp}(x,y/M,b,c)\) coheir over \(M\) (which implies that \(x,y\) are greater than all infinitesimals in \(\mathrm{dcl}(\mathbb{R},b,c)\)). Then \(q_{0}\) is the neutral element of the Ellis group \(\{q_{0},q_{1}\}\), so an idempotent in a minimal left ideal of \(S_{G}(M)\), and hence we will denote \(q_{0}\) by \(u_{G}\).
The space \(S_{\tilde{G}}(M)\) is naturally homeomorphic with \(S_{G}(\mathbb{R})\times S_{\mathbb{Z}}(\mathbb{Z})\), and the induced semigroup operation is given by
\[(p,q)*(p^{\prime},q^{\prime})=(p*p^{\prime},q+q^{\prime}+h(p,p^{\prime})),\]
where \(h(p,p^{\prime}):=h(a,a^{\prime})\) for some/any \(a\models p\) and \(a^{\prime}\models p^{\prime}\) such that \(\operatorname{tp}(a/M,a^{\prime})\) is a coheir over \(M\), and \(+\) denotes the semigroup operation on \(S_{\mathbb{Z}}(\mathbb{Z})\) (which is indeed commutative). From now on, \((S_{\tilde{G}}(M),*)\) will be identified with \((S_{G}(\mathbb{R})\times S_{\mathbb{Z}}(\mathbb{Z}),*)\). Since \(*\) uses \(h\), we will denote this semigroup as \(S_{G}(\mathbb{R})\times_{h}S_{\mathbb{Z}}(\mathbb{Z})\). As to the semigroup \(S_{\mathbb{Z}}(\mathbb{Z})\), we will interchangeably use additive and multiplicative notation.
As \((\mathbb{Z},+)\) is stable, there is a unique minimal left ideal \(\mathcal{M}_{\mathbb{Z}}\) and it consists of the generic types. There is also a unique idempotent \(u_{\mathbb{Z}}\) in \(\mathcal{M}_{\mathbb{Z}}\) which the generic type concentrated on the component \(\bar{\mathbb{Z}}^{0}\) (the intersection of all definable subgroups of \(\bar{\mathbb{Z}}\) of finite index), and \(u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}=\mathcal{M}_{\mathbb{Z}}\).
By the explicit formula for \(h\) and the idempotency of \(u_{G}\), we get \(h(u_{G},u_{G})=0\). So [2, Proposition 5.6] yields
**Fact 5.7**.: _Let \(\mathcal{M}_{G}\ni u_{G}\) be a minimal left ideal of \(S_{G}(\mathbb{R})\). Then:_
1. \(\mathcal{M}_{\tilde{G}}:=\mathcal{M}_{G}\times\mathcal{M}_{\mathbb{Z}}\) _is a minimal left ideal of_ \(S_{\tilde{G}}(M)\)_;_
2. \(u_{\tilde{G}}:=(u_{G},u_{\mathbb{Z}})\) _is an idempotent in_ \(\mathcal{M}_{\tilde{G}}\)_;_
3. _The Ellis group_ \(u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) _equals_ \(u_{G}\mathcal{M}_{G}\times_{h}\mathcal{M}_{\mathbb{Z}}=u_{G}\mathcal{M}_{G} \times_{h}u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\)_._
\(f_{1}\colon u_{G}\mathcal{M}_{G}\to\mathbb{Z}_{2}\) given by \(q_{i}\mapsto i\) is clearly an isomorphism. Since \((\mathbb{Z},+)\) is stable, it is well-known that the natural map \(f_{2}\colon u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\to\bar{\mathbb{Z}}/\bar{ \mathbb{Z}}^{0}\cong\hat{\mathbb{Z}}\) given by \(\operatorname{tp}(a/\mathbb{Z})\mapsto a/\bar{\mathbb{Z}}^{0}\) is an isomorphism. Thus, the following corollary is deduced in [2, Example 5.7].
**Corollary 5.8**.: _The map \((f_{1},f_{2})\colon u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\to\mathbb{Z}_{2} \times\hat{\mathbb{Z}}\) is an isomorphism, with the group operation on \(\mathbb{Z}_{2}\times\hat{\mathbb{Z}}\) given by \((x,n)(x^{\prime},n^{\prime}):=(x+_{2}x^{\prime},n+n^{\prime}-xx^{\prime})\). The target group is moreover topologically isomorphic to \(\hat{\mathbb{Z}}\) via the map \((x,n)\mapsto x-2n\)._
Our next goal is to show that the isomorphism in Corollary 5.8 is topological. This implies that \(u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) is topologically isomorphic to \(\hat{\mathbb{Z}}\), so Hausdorff which confirms Conjecture 5.3 for \(\widetilde{\operatorname{SL}_{2}(\mathbb{R})}\).
As the \(\tau\)-topology on \(u_{G}\mathcal{M}_{G}\) is \(T_{1}\), it is discrete, and so \(f_{1}\) is a topological isomorphism. Since \(f_{2}\) is an isomorphism which is continuous by [17, Theorem 0.1], we get that it is a topological isomorphism (with \(u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) equipped with the \(\tau\)-topology).
The fact that the isomorphism \((f_{1},f_{2})\) from Corollary 5.8 is topological follows immediately from the above paragraph and the next proposition.
**Proposition 5.9**.: _The \(\tau\)-topology on \(u_{G}\mathcal{M}_{G}\times_{h}u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) is the product of the \(\tau\)-topologies on \(u_{G}\mathcal{M}_{G}\) and \(u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\)._
Before the proof, let us show a few properties of \(h\) which will be used also in the proof of Proposition 5.13.
**Lemma 5.10**.:
1. \(h(p,u_{G})=0\) _for all_ \(p\in S_{G}(M)\)_._
2. \(h(u_{G},\operatorname{tp}(g/M))=0\) _for all_ \(g\in G\)_._
3. \(h(u_{G},gu_{G})=0\) _for all_ \(g\in G\)_._
Proof.: (1) Present \(u_{G}\) as \(\operatorname{tp}(A/M)\) for \(A:=\left(\begin{array}{cc}(1-x)b&(1-x)c-yb^{-1}\\ yb&yc+(1-x)b^{-1}\end{array}\right),\) where \(x,y,b,c\) are as above. Write \(p\) as \(\operatorname{tp}(B/M)\) for \(B=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)\) so that \(\operatorname{tp}(B/M,x,y,b,c)\) is a coheir over \(M\). Then \(pu_{G}=\operatorname{tp}(BA/M)\) and \(BA=\left(\begin{array}{cc}c_{11}&c_{12}\\ c_{21}&c_{22}\end{array}\right)\) with \(c_{21}:=\gamma(1-x)b+\delta yb\). Since \(yb>0\), by the explicit formula for \(h\), the only possibility for \(h(p,u_{G})\neq 0\) would be the case when \(\gamma(\delta)>0\) and \(c_{21}(c_{22})<0\). We will show that it never happens, namely \(c_{21}>0\) whenever \(\gamma(\delta)>0\). So assume that \(\gamma(\delta)>0\).
If \(\gamma=0\), then \(\delta>0\), and so \(c_{21}=\delta yb>0\). So assume that \(\gamma>0\). Suppose for a contradiction that \(c_{21}\leqslant 0\). Then \(\frac{\delta}{\gamma}\leqslant\frac{x-1}{y}<\mathbb{R}\) which contradicts the assumption that \(\operatorname{tp}(\gamma,\delta/\mathbb{R},x,y)\) is finitely satisfiable in \(\mathbb{R}\).
(2) The proof is similar and left to the reader.
(3) By the 2-cocycle formula, we have \(h(u_{G},g)+h(u_{G}g,u_{G})=h(u_{G},gu_{G})+h(g,u_{G})\). So the conclusion follows from (1) and (2).
Proof of Proposition 5.9.: Denote by \(\mathcal{T}\) the product of the \(\tau\)-topologies on \(u_{G}\mathcal{M}_{G}\) and \(u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\). Our goal is to prove that \(\tau=\mathcal{T}\).
(\(\supseteq\)) It is enough to show that any subbasic closed set of the form \(F\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) or \(u_{G}\mathcal{M}_{G}\times E\) (where \(F\subseteq u_{G}\mathcal{M}_{G}\) and \(E\subseteq u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) are \(\tau\)-closed) is closed in \(\tau\).
First, consider \(F\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\). Take any \(a\in\operatorname{cl}_{\tau}(F\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}})\). Then \(a=\lim(g_{i},n_{i})(f_{i},a_{i})\), where \(((g_{i},n_{i}))_{i}\) is a net from \(\tilde{G}\) converging to \(u_{\tilde{G}}=(u_{G},u_{\mathbb{Z}})\) and \((f_{i},a_{i})\in F\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\). As \((g_{i},n_{i})(f_{i},a_{i})=(g_{i}f_{i},n_{i}+a_{i}+h(g_{i},f_{i}))\in\{g_{i}f_{ i}\}\times\mathcal{M}_{\mathbb{Z}}\), we get that \(a\in\operatorname{cl}_{\tau}(F)\times\mathcal{M}_{\mathbb{Z}}=F\times u_{ \mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\), as required.
Now, consider \(u_{G}\mathcal{M}_{G}\times E\). Take any \(a\in\operatorname{cl}_{\tau}(u_{G}\mathcal{M}_{G}\times E)\). Then \(a=\lim(g_{i},n_{i})(f_{i},a_{i})\), where \(((g_{i},n_{i}))_{i}\) is a net from \(\tilde{G}\) converging to \(u_{\tilde{G}}=(u_{G},u_{\mathbb{Z}})\) and \((f_{i},a_{i})\in u_{G}\mathcal{M}_{G}\times E\).
**Claim 1**.: \(h(g_{i},f_{i})=0\) _for \(i\) large enough._
Proof.: Since \(\lim g_{i}=u_{G}\) is the type over \(M\) of a matrix with positive left bottom entry, there is \(i_{0}\) such that the left bottom entry of \(g_{i}\) is positive for all \(i>i_{0}\). Consider any \(i>i_{0}\). If \(f_{i}=u_{G}\), then \(h(g_{i},f_{i})=0\) by Lemma 5.10(1). If \(f_{i}=q_{1}\), then the left bottom entry of any matrix realizing \(f_{i}\) is negative, so \(h(g_{i},f_{i})=0\) by the explicit formula for \(h\).
By this claim, \((g_{i},n_{i})(f_{i},a_{i})=(g_{i}f_{i},n_{i}+a_{i}+h(g_{i},f_{i}))=(g_{i}f_{i}, n_{i}+a_{i})\). So \(a\in\operatorname{cl}_{\tau}(u_{G}\mathcal{M}_{G})\times\operatorname{cl}_{ \tau}(E)=u_{G}\mathcal{M}_{G}\times E\), as required.
(\(\subseteq\)) Consider a \(\tau\)-closed \(A\subseteq u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\). We need to show that it is closed in \(\mathcal{T}\). So take any \(a=(a_{1},a_{2})\in\operatorname{cl}_{\mathcal{T}}(A)\). There are are nets \((a_{1,i})_{i}\subseteq u_{G}\mathcal{M}_{G}\) and \((a_{2,i})_{i}\subseteq u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\)\(\tau\)-converging to \(a_{1}\) and \(a_{2}\), respectively, with \((a_{1,i},a_{2,i})\in A\) for all \(i\). Passing to subnets, we can assume that the nets \((a_{1,i})_{i}\) and \((a_{2,i})_{i}\) converge in the usual topologies on \(S_{G}(M)\) and \(S_{\mathbb{Z}}(M)\) to some \(b_{1}\) and \(b_{2}\), respectively. By Lemma 3.19 and Hausdorffness of \(u_{G}\mathcal{M}_{G}\) and \(u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\), we get \(u_{G}b_{1}=a_{1}\) and \(u_{\mathbb{Z}}+b_{2}=a_{2}\). Approximating \(u_{G}\) by elements of \(G\) and \(u_{\mathbb{Z}}\) by elements of \(\mathbb{Z}\), using left continuity of the semigroup operations and the fact that the actions of \(G\) on \(S_{G}(M)\) and of \(\mathbb{Z}\) on \(S_{\mathbb{Z}}(M)\) are continuous, passing to subnets, we can assume that there are nets \((g_{i})_{i}\) in \(G\) and \((n_{i})_{i}\) in \(\mathbb{Z}\) converging to \(u_{G}\) and \(u_{\mathbb{Z}}\), respectively, and such that \(\lim g_{i}a_{1,i}=a_{1}\) and \(\lim n_{i}+a_{2,i}=a_{2}\) (in the usual topology on type spaces). Then \(\lim(g_{i},n_{i})=(u_{G},u_{\mathbb{Z}})\). On the other hand, by Claim 1, \(h(g_{i},a_{1,i})=0\) for sufficiently large \(i\)'s, and so
\[(g_{i},n_{i})(a_{1,i},a_{2,i})=(g_{i}a_{1,i},n_{i}+a_{2,i}+h(g_{i},a_{1,i}))=(g_ {i}a_{1,i},n_{i}+a_{2,i})\]
for sufficiently large \(i\)'s. Hence, \(\lim(g_{i},n_{i})(a_{1,i},a_{2,i})=(a_{1},a_{2})\). Therefore, \(a\in\operatorname{cl}_{\tau}(A)=A\).
The next lemma follows by an elementary matrix computation.
**Lemma 5.11**.: _For every \(\operatorname{tp}(B/M)\in S_{G}(M)\) with \(B=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right),\) writing \(u_{G}=\operatorname{tp}(A/M)=\operatorname{tp}(A^{\prime}/M)\) for \(A:=\left(\begin{array}{cc}(1-x)b&(1-x)c-yb^{-1}\\ yb&yc+(1-x)b^{-1}\end{array}\right),\)\(A^{\prime}:=\left(\begin{array}{cc}(1-x^{\prime})b^{\prime}&(1-x^{\prime})c^{ \prime}-y^{\prime}b^{\prime-1}\\ y^{\prime}b^{\prime}&y^{\prime}c^{\prime}+(1-x^{\prime})b^{\prime-1}\end{array}\right)\) with the elements satisfying the requirements described before and such that \(\operatorname{tp}(B/M,A)\) and \(\operatorname{tp}(A^{\prime}/M,B,A)\) are coheirs over \(M\), we have that \(u_{G}\operatorname{tp}(B/M)u_{G}=\operatorname{tp}(C/M)\) with the left bottom entry of \(C\) equal to \(y^{\prime}b^{\prime}(\alpha(1-x)b+\beta yb)+(y^{\prime}c^{\prime}+(1-x^{\prime})b^{ \prime-1})(\gamma(1-x)b+\delta yb)\)._
**Lemma 5.12**.:
1. _For_ \(B=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)\in G\) _we have that_ \(u_{G}\operatorname{tp}(B/M)u_{G}=u_{G}=q_{0}\) _if_ \(\gamma>0\)_, and_ \(u_{G}\operatorname{tp}(B/M)u_{G}=q_{1}\) _if_ \(\gamma<0\)
_._
2. \(u_{G}\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)u_{G}=u_{G}\)_._
3. \(u_{G}\operatorname{tp}\left(\left(\begin{array}{cc}-1&0\\ \gamma&-1\end{array}\right)/M\right)u_{G}=q_{1}\) _for all positive infinitesimals_ \(\gamma\)_._
Proof.: First, let \(B=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)\in\bar{G}\) with \(\gamma>0\). Pick \(x,y,b,c,x^{\prime},y^{\prime},b^{\prime},c^{\prime}\) and matrices \(A\) and \(A^{\prime}\) as in Lemma 5.11, satisfying additionally that \(\operatorname{tp}(B/M,x,y,b,c)\) and \(\operatorname{tp}(x^{\prime},y^{\prime},b^{\prime},c^{\prime}/M,x,y,b,c,\alpha,\beta,\gamma,\delta)\) are coheirs over \(M\). Observe that \(\gamma(1-x)b+\beta yb>0\). Indeed, it is clear if \(\delta\geq 0\). If \(\delta<0\), then it is equivalent to \(-\frac{\gamma}{\delta}>\frac{y}{1-x}\) which is true as \(\frac{y}{1-x}\) is a positive infinitesimal, \(-\frac{\gamma}{\delta}>0\), and \(\operatorname{tp}(\gamma,\delta/M,x,y)\) is a coheir over \(M\). Let \(d\) be the left bottom entry of \(A^{\prime}BA\). Hence, by Lemma 5.11, we conclude that \(d>0\) if \(\alpha(1-x)b+\beta yb\geq 0\). In the case when \(\alpha(1-x)b+\beta yb<0\), we have that \(d>0\) if and only if \(\frac{y^{\prime}}{c^{\prime}}<(1+\frac{1-x^{\prime}}{y^{\prime}b^{\prime}c^{ \prime}})(-\frac{\gamma(1-x)+\delta y}{\alpha(1-x)+\beta y})=(1+\frac{1-x^{ \prime}}{y^{\prime}b^{\prime}c^{\prime}})(-\frac{\gamma+\delta\frac{y}{1-x}}{ \alpha+\beta\frac{1-x}{1-x}})=:\zeta\).
(1) Since \(u_{G}=\operatorname{tp}(A/M)\) and \(q_{1}=\operatorname{tp}(-A/M)\), replacing \(B\) by \(-B\), we see that it is enough to consider the case when \(\gamma>0\) and to show that then \(d>0\). By the above consideration, this boils down to showing that \(\frac{b^{\prime}}{c^{\prime}}<\zeta\) if \(\alpha(1-x)b+\beta yb<0\). By the assumption of (1), \(\alpha,\beta,\gamma,\delta\in\mathbb{R}\). Thus, \(\alpha(1-x)b+\beta yb<0\) implies that \(\alpha<\beta\frac{y}{x-1}\) which is infinitesimal, so \(\alpha\leq 0\). Now, if \(\alpha=0\), then \(\zeta>\mathbb{R}\), so \(\zeta>\frac{b^{\prime}}{c^{\prime}}\) (as \(\frac{b^{\prime}}{c^{\prime}}\) is infinitesimal). If \(\alpha<0\), then \(\zeta>-\frac{\gamma}{2\alpha}>\frac{b^{\prime}}{c^{\prime}}\), as \(-\frac{\gamma}{2\alpha}\) is a positive real number.
(2) is a particular case of (1).
(3) It is enough to see that \(\zeta\leq\frac{b^{\prime}}{c^{\prime}}\). We have \(\zeta=(1+\frac{1-x^{\prime}}{y^{\prime}b^{\prime}c^{\prime}})(\gamma-\frac{y} {1-x})<2(\gamma-\frac{y}{1-x})\) which is a positive infinitesimal (as \(\frac{1-x^{\prime}}{y^{\prime}b^{\prime}c^{\prime}}\), \(\gamma\) and \(\frac{y}{1-x}\) are positive infinitesimals, and \(\operatorname{tp}(\gamma/M,x,y)\) is a coheir over \(M\)). The conclusion follows from the fact that \(\frac{b^{\prime}}{c^{\prime}}\) is a positive infinitesimal and \(\operatorname{tp}(b^{\prime},c^{\prime}/M,x,y,\gamma)\) is a coheir over \(M\).
We have now all the tools to prove Proposition 5.4.
Proof of Proposition 5.4.: Let \(B:=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)\). Then \(B^{2}=-I\) and \(B^{4}=I\). So, by the explicit formula for the 2-cocycle \(h\), we have \(h(B,B)=1\) and \(h(B^{2},B^{2})=-1\). Hence, working in \(\tilde{G}\), we get
\[(*)\qquad(B,0)^{4}=(I,2h(B,B)+h(B^{2},B^{2}))=(I,1).\]
As observed after Corollary 5.8, \(u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\cong\hat{\mathbb{Z}}\) is Hausdorff, so \(H(u_{\tilde{G}}\mathcal{M}_{\tilde{G}})\) is trivial. We also have that \(f_{2}(u_{\mathbb{Z}}+n+u_{\mathbb{Z}})=n\) for \(n\in\mathbb{Z}\). Take \(f\colon\tilde{G}\to u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) from Theorem 5.1. Using Lemma 5.10,
\[f((g,n)):=(u_{G},u_{\mathbb{Z}})(g,n)(u_{G},u_{\mathbb{Z}})=(u_{ G}gu_{G},u_{\mathbb{Z}}+n+u_{\mathbb{Z}}+h(g,u_{G})+h(u_{G},gu_{G}))=\] \[(u_{G}gu_{G},u_{\mathbb{Z}}+n+u_{\mathbb{Z}}).\]
Hence, by Theorem 5.1(3), there is a positive \(k\in\mathbb{N}\) such that \(\{g\in G:u_{G}gu_{G}=u_{G}\}\times k\mathbb{Z}\subseteq X^{14}\). Thus, by Lemma 5.12(2), \(\{B\}\times k\mathbb{Z}\subseteq X^{14}\). Therefore, using \((*)\), we conclude that \(\{I\}\times(1+k\mathbb{Z})\subseteq X^{14\cdot 4}=X^{56}\), and so \(I\times(\{-12,\ldots,0,\ldots,12\}+k\mathbb{Z})\subseteq X^{56\cdot 12}=X^{672}\). Using Corollary 5.6, this implies that \(G\times k\mathbb{Z}\subseteq X^{672+24}=X^{696}\).
One could also prove more directly (without using topological dynamics) a version of Proposition 5.4 with a bigger number in place of 696, but we will not do that, as our point was to illustrate by a non-trivial example how Theorem 5.1 leads to a better understanding of generics in definable groups.
Finally, we give a negative answer to Question 4.13(3), and a positive answer to Question 4.13(2) in the particular case when \(X\) is a definable, generic, symmetric subset of \(\tilde{G}\). First,
let us describe the context. Let \(X\) be a definable, generic, symmetric subset of \(\tilde{G}\). Then \(X\) is a definable in \(M\) approximate subgroup and, as discussed after Proposition 5.4, \(\tilde{G}=\langle X\rangle\). By definability of types in \(S(M)\), the flows \(S_{\tilde{G},M}(N)\) and \(S_{\tilde{G}}(M)\) are identified (as discussed above). We have already proved that \(H(u_{\tilde{G}}\mathcal{M}_{\tilde{G}})\) is trivial. Let \(f\colon\tilde{G}\to u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) be the generalized definable locally compact model from Theorem 3.25 and let \(\bar{f}=f_{M}\colon S_{\tilde{G}}(M)\to u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) be as discussed before Question 4.13. Note that \(\bar{f}\) extends \(f\). On the other hand, we have the function \(\hat{f}\colon S_{\tilde{G}}(M)\to u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) given by \(\hat{f}(p):=u_{\tilde{G}}pu_{\tilde{G}}\) which also extends \(f\).
**Proposition 5.13**.: _The function \(\bar{f}\) is uniquely determined by the construction from Theorem 4.6 and continuous, whereas \(\hat{f}\) is not continuous, and so \(\bar{f}\neq\hat{f}\). However, \(\bar{f}|_{u_{\tilde{G}}\mathcal{M}_{\tilde{G}}}=\hat{f}|_{u_{\tilde{G}} \mathcal{M}_{\tilde{G}}}=\operatorname{id}\)._
Proof.: Identifying \(u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) with \(\hat{\mathbb{Z}}\) via \(f_{2}\), the second displayed computation in the proof of Proposition 5.4 yields
\[f((g,n))=(u_{G}gu_{G},n).\]
It follows from Lemma 5.11 and definability of types in \(S(M)\) that the sets \(\{g\in G:u_{G}gu_{G}=u_{G}\}\) and \(\{g\in G:u_{G}gu_{G}\neq u_{G}\}\) are both definable. On the other hand, the function \(\mathbb{Z}\to\hat{\mathbb{Z}}\) given by \(n\mapsto n\) is definable in the sense that the preimages of any two disjoint closed subsets of \(\hat{\mathbb{Z}}\) can be separated by a definable set.
All of this together with Proposition 5.9 implies that \(f\colon\tilde{G}\to u_{\tilde{G}}\mathcal{M}_{\tilde{G}}\) is a definable map. Therefore, by Lemma 3.2 of [1] and its proof, \(\bar{f}=f_{M}\) is uniquely determined by the construction from Theorem 4.6 and continuous.
Pick a positive infinitesimal \(\gamma\). Let \(B:=\left(\begin{array}{cc}-1&0\\ \gamma&-1\end{array}\right).\) Choose any net \((g_{i})_{i}\) of elements of \(G\) converging to \(p:=\operatorname{tp}(B/M)\). Then the left bottom entries of the matrices \(g_{i}\) are positive for all \(i>i_{0}\) for some \(i_{0}\). So, by Lemma 5.12(1), \(u_{G}g_{i}u_{G}=u_{G}\) for all \(i>i_{0}\). On the other hand, by Lemma 5.12(3), \(u_{G}pu_{G}=q_{1}\). Therefore, \(\hat{f}((p,0))\in\{u_{G}pu_{G}\}\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}= \{q_{1}\}\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) and \(\hat{f}((g_{i},0))\in\{u_{G}g_{i}u_{G}\}\times u_{\mathbb{Z}}\mathcal{M}_{ \mathbb{Z}}=\{u_{G}\}\times u_{\mathbb{Z}}\mathcal{M}_{\mathbb{Z}}\) for all \(i>i_{0}\). Since the net \(((g_{i},0))_{i}\) tends to \((p,0)\) and \(q_{1}\neq u_{G}\), we conclude that \(\hat{f}\) is not continuous at \((p,0)\). Hence, \(\bar{f}\neq\hat{f}\) by continuity of \(\bar{f}\). More precisely, since \(\bar{f}|_{\tilde{G}}=\hat{f}|_{\tilde{G}}\), we get that \(\bar{f}(p)\neq\hat{f}(p)\).
It remains to show that \(\bar{f}|_{u_{\tilde{G}}\mathcal{M}_{\tilde{G}}}=\operatorname{id}\), as directly from the definition of \(\hat{f}\) we have \(\hat{f}|_{u_{\tilde{G}}\mathcal{M}_{\tilde{G}}}=\operatorname{id}\). Consider any \(n\in\hat{\mathbb{Z}}\). Our goal is to show that \(\bar{f}((u_{G},n))=(u_{G},n)\) and \(\bar{f}((-u_{G},n))=(-u_{G},n)\). We will prove the first equality; the second one can be proved analogously.
Choose any net \(((g_{i},n_{i}))_{i}\) from \(\hat{G}\) converging to \((u_{G},n)\). Then the left bottom entry of \(g_{i}\) is positive for all \(i>i_{0}\) for some \(i_{0}\). By Lemma 5.12(1), \(u_{G}g_{i}u_{G}=u_{G}\) for all \(i>i_{0}\). Therefore, using Lemma 5.10, we get
\[\bar{f}((g_{i},n_{i}))=(u_{G}g_{i}u_{G},n_{i}+h(g_{i},u_{G})+h(u_{G},g_{i}u_{G }))=(u_{G},n_{i})\]
for all \(i>i_{0}\), so it clearly tends to \((u_{G},n)\). Since the net \(((g_{i},n_{i}))_{i}\) tends to \((u_{G},n)\), by continuity of \(\bar{f}\), we conclude that \(\bar{f}((u_{G},n))=(u_{G},n)\).
|
2309.05548 | Distance-Aware eXplanation Based Learning | eXplanation Based Learning (XBL) is an interactive learning approach that
provides a transparent method of training deep learning models by interacting
with their explanations. XBL augments loss functions to penalize a model based
on deviation of its explanations from user annotation of image features. The
literature on XBL mostly depends on the intersection of visual model
explanations and image feature annotations. We present a method to add a
distance-aware explanation loss to categorical losses that trains a learner to
focus on important regions of a training dataset. Distance is an appropriate
approach for calculating explanation loss since visual model explanations such
as Gradient-weighted Class Activation Mapping (Grad-CAMs) are not strictly
bounded as annotations and their intersections may not provide complete
information on the deviation of a model's focus from relevant image regions. In
addition to assessing our model using existing metrics, we propose an
interpretability metric for evaluating visual feature-attribution based model
explanations that is more informative of the model's performance than existing
metrics. We demonstrate performance of our proposed method on three image
classification tasks. | Misgina Tsighe Hagos, Niamh Belton, Kathleen M. Curran, Brian Mac Namee | 2023-09-11T15:33:00Z | http://arxiv.org/abs/2309.05548v1 | # Distance-Aware eXplanation Based Learning
###### Abstract
eXplanation Based Learning (XBL) is an interactive learning approach that provides a transparent method of training deep learning models by interacting with their explanations. XBL augments loss functions to penalize a model based on deviation of its explanations from user annotation of image features. The literature on XBL mostly depends on the intersection of visual model explanations and image feature annotations. We present a method to add a distance-aware explanation loss to categorical losses that trains a learner to focus on important regions of a training dataset. Distance is an appropriate approach for calculating explanation loss since visual model explanations such as Gradient-weighted Class Activation Mapping (Grad-CAMs) are not strictly bounded as annotations and their intersections may not provide complete information on the deviation of a model's focus from relevant image regions. In addition to assessing our model using existing metrics, we propose an interpretability metric for evaluating visual feature-attribution based model explanations that is more informative of the model's performance than existing metrics. We demonstrate performance of our proposed method on three image classification tasks.
eXplanation Based Learning, Interactive Machine Learning, eXplainable AI
## I Introduction
Research on model transparency in deep learning is dominated by studies on dataset bias [1], model interpretability, and explainability [2]. Another field of study that aims to improve model transparency, Interactive Machine Learning (IML), hits two birds with one stone [3, 4]. First, it provides transparency through engagement by allowing user interaction in the model training process. Second, it improves model performance by collecting expert knowledge directly from users. IML usually considers users as _dumb partners_ with the sole responsibility of categorizing training instances into one of a set of pre-selected categories as opposed to _clever partners_ who can clarify their feedback in addition to categorizing instances. However, advances in model explanation research opens the door for a more detailed and richer interaction between models and users during training.
### _eXplanation based learning_
While model explanation methods have been proposed and continue to be used to tackle the "_black-box_" nature of deep learning models [5, 6], they can also be used in an interactive learning approach to promote a more transparent model training process [7, 8]. This is known as eXplanation Based Learning (XBL)1, which collects user feedback on model explanations and uses the feedback to train, debug, or refine a trained model.
Footnote 1: Different terms such as _explanatory debugging_[8], _explanatory interactive learning_[9], _explanatory guided learning_[10] are used in the literature. We choose to use the term _eXplanation Based Learning_ because we believe it generalizes all of them.
In applications such as medical image classifications, deep learning models have been observed to focus on non-relevant or confounding parts of medical images such as artifacts for their classification or prediction outputs [11, 12]. In addition to promoting transparent learning process, XBL has the potential to unlearn such wrong correlations, which are termed as confounding regions, confounders, or spurious correlations (used interchangeably in this paper) [12, 13]; confounding regions are parts of training instances that are not correlated with a category, but incorrectly assumed to be so by a learner.
As is displayed in Fig. 1, XBL is generally made up of four steps. The first is traditional model training which often uses a categorical loss. The next step is generating model explanations. Feature attribution based local explanations [9] or surrogate model explanations [10] can be used for this. We limit the scope of this work to a saliency based local explanation, Gradient-weighted Class Activation Mapping (Grad-CAM) [14]. In the third step, explanations are presented to users and feedback is collected. For method development and experiment purposes confounding regions can be added to a dataset and their masks used as user feedback for XBL. Finally, the collected feedback is used to calculate an explanation loss, which in turn is used to augment the initial categorical loss, and refine the original model using XBL training [7].
The training process in XBL augments loss functions to include an explanation loss, which can be based on either or both of: (1) a model's deviation from user annotated feedback that shows objects of interest; and (2) a model's focus on user annotation of non-salient or confounding image regions. This explanation loss is usually based on the intersection of the user annotation of image features and a model's visual explanation. Loss functions are generally augmented as follows:
\[L_{expl}=\sum_{i=1}^{N}e(expl_{i,c},M_{i,c}) \tag{1}\]
\[L_{CE}=-\sum_{i=1}^{N}e(\hat{y}_{i},Y_{i}) \tag{2}\]
\[L=L_{expl}+L_{CE}+\lambda\sum_{i=1}\theta_{i} \tag{3}\]
The term, \(L_{expl}\) in (1) is the explanation loss calculated as the error, \(e\), between the model's explanation, \(expl_{i,c}\) for input \(i\) with category \(c\), and the ground truth annotation, \(M_{i,c}\), where \(M=1\) for relevant regions and \(M=0\) for non-salient regions. The term, \(L_{CE}\), in (2) is the traditional cross entropy loss which is calculated based on the error, \(e\), between the model's prediction \(\hat{y}_{i}\) and ground-truth label \(Y_{i}\) for instance \(i\). While \(Y_{i}\) only holds category label, \(M_{i,c}\) holds a mask annotation of relevant objects in an input \(i\). Finally, XBL consists of the sum of \(L_{expl}\), \(L_{CE}\) and a regularization term, \(\lambda\) where \(\theta_{i}\) is the network parameters.
Most XBL loss function augmentations in the literature fail to consider two scenarios: (1) focus of a model's attention may get closer to- and gradually shift to the relevant regions of training instances; for this reason, we need to penalize the learner less as the explanations (the model's attention) starts to improve. This means there is a need to make loss functions positively related to the distance of a model's wrong attention from the relevant regions; and (2) model activations that usually make up model explanations are not as strictly bounded as user annotations and we need to relax the training penalization as we get closer to the relevant parts of training images. In order to address these shortcomings of existing XBL methods and assuming model explanations correctly highlight the reasoning behind a model's output, in this paper, we address the following research question: "_Can we augment XBL loss functions in way that is sensitive to distances between explanations and user annotations of relevant image regions for better classification and explanation performance?_"
Another aspect of XBL that is often overlooked is using coefficients that weigh and balance impact of explanation losses and classification losses and optimizing them. We also consider these coefficients as hyper-parameters and tune them to find their optimal values before starting model training with XBL.
### _Evaluation of model explanations_
While subjective evaluations of explanations that involve humans would give a user-centric assessment of model generated explanations [15], objective evaluations are often used for a speedy assessment and comparison in the development of model explainability methods [16]. Most of the existing evaluation methods give weight to the generated explanations over the ground truth feature annotations. This can result in over-confident evaluation results. In addition to using an existing evaluation method, to address this issue we propose an interpretability metric that assesses how much of the ground truth feature annotation has been identified as relevant by model explanations. We restrict our work in this paper to objective evaluations.
The main contributions of this paper are:
1. Decoyed versions of image classification datasets are created for XBL experiments.
2. A new XBL method, Distance-Aware eXplanation Based Learning (XBL-D), is proposed and evaluated.
3. A saliency map explanation interpretability metric, Activation Recall, is proposed and demonstrated.
4. Our experiments demonstrate that incorporating distance-aware learning into XBL performs better than baseline algorithms in classification tasks, and generates more accurate explanations. Furthermore, Code and links to download the datasets are shared online2.
Fig. 1: The eXplanation Based Learning (XBL) loop. The user feedback, which is expected to be an annotation mask of the confounding image region in the lower left corner (highlighted by the saliency map), is portrayed here as a red circle for easier visualization.
## II Related work
In this section, we present a review of relevant literature on XBL and model explanation evaluation metrics.
### _eXplanation based learning_
XBL methods can be generally categorized into two categories: (1) augmenting loss functions with explanation losses; and (2) using user feedback to augment training datasets by removing confounding or spurious regions identified by users.
#### Ii-A1 Augmenting loss functions
The model explanation method used has a huge impact on an interactive learning process, not only because it is directly used to compute explanation loss (\(expl_{i,c}\) as in (1)), but also because it can impact user experience and feedback quality. Right for the Right Reasons (RRR) [17] penalises a model with high input gradient model explanations on the wrong image regions annotated by a user. RRR uses
\[L_{expl}=\sum_{n}^{N}(M_{n}\frac{\partial}{\partial x_{n}}(\sum_{k=1}^{K}\log \hat{y}_{nk}))^{2} \tag{4}\]
for a function \(f(X|\theta)=\hat{y}\in R^{N\times K}\) trained on images \(x_{n}\) of size \(N\) with \(K\) categories, where \(M_{n}\in\ \{0,\ 1\}\) is user annotation of image regions that should be avoided by the model.
A Grad-CAM model explanation was used instead of input gradients in RRR-G by Schramowski _et al._[18] using the following loss function:
\[L_{expl}=\sum_{n}^{N}M_{n}GradCAM(x_{n}) \tag{5}\]
Similarly, Right for Better Reasons (RBR) [19] uses Influence Functions (IF) in place of input gradients to correct a model's behavior. Contextual Decomposition Explanation Penalization (CDEP) [20] penalizes features and feature interactions.
User feedback in XBL experiments can be one or both of: (1) telling the model to ignore non-salient image regions; and (2) instructing the model to focus on important image regions in a training dataset [21]. While the XBL methods presented above refine a model by using the first feedback type, Human Importance-aware Network Tuning (HINT) does the opposite by teaching a model to focus on important image parts using Grad-CAM model explanations [22].
Most of the literature on XBL focuses on using feature attribution based saliency maps such as input gradients and Grad-CAMs as model explanations. Prototype based explanations have also been utilized in Bontempelli _et al._[23] to debug Part-Prototype networks at concept level.
#### Ii-B2 Augmenting training dataset
Instead of augmenting loss functions, XBL can be implemented by relabeling, augmenting existing instances, or adding new training instances based on user feedback. Instance relabeling has been deployed to clean label noise in a training dataset that is identified using example based explanations [24]. Counter-Examples (CE), which are variants of training instances with added modifications using user feedback can be generated to augment dataset for model re-training [9]. Simpler surrogate models have also been used as global explanations to elicit feedback in the form of new training instances [10].
### _Evaluating feature attribution based explanations_
Feature attribution based explanations can be evaluated intrinsically and/or extrinsically [25]. Intrinsic evaluation involves only the model and the generated explanations themselves [26], while extrinsic evaluation involves subjective human evaluation [27] or objective usage of ground-truth annotation data.
Objective evaluation of model explanations provides an easier and quicker way of assessing interpretability by comparing model explanations to ground-truth data. Overlap of visual explanations and feature annotations can be used to compute localization ability of a model's explanations; to avoid explanations with high false positive rates which cover wide area of an image, thereby scoring a high overlap with annotations, penalized versions of overlap: Penalized Localization Accuracy (PLA) was proposed [28]. Activation Precision (AP) is another approach that computes how many of the pixels predicted as relevant by a model are actually relevant [29]. AP is presented in (6), where \(A_{obj_{n}}\) is a mask of relevant image regions in input image \(x_{n}\) and \(T_{r}\) is a threshold function that finds the (100-\(r\)) percentile and sets elements of the explanation, \(expl_{\theta}\), below this value to zero and the remaining elements to one. AP usually requires a low \(r\) value or high threshold so we can avoid explanations with high false positive rates.
\[AP=\frac{1}{N}\sum_{n}^{N}\frac{T_{r}(expl_{\theta}(x_{n}))*A_{obj_{n}}}{T_{r}( expl_{\theta}(x_{n}))} \tag{6}\]
There is a trade-off between selecting a higher threshold and accurately assessing model explanations. While increasing the threshold would mean focusing on smaller areas of an explanation and avoiding high false positive rates, it also means parts of an explanation would be masked before they are assessed, which could result in overconfidence in model explanations.
## III Distance-aware explanation based learning
We view the training images as instances made up of three parts: (1) the relevant regions, masked by \(A_{obj}\), that are considered important for category classification; (2) the confounding regions, masked by annotation \(A_{con}\), which are not correlated with any category but can trick the learner into learning that they are; and (3) the remaining image parts that are usually easily ignored by a learner as background image regions.
Our explanation loss penalizes a learner based on the amount of wrong attention it gives to \(A_{con}\), with due consideration of this wrong attention's distance from \(A_{obj}\). For
example, in Fig. 2, a Grad-CAM explanation shows a model giving attention to a confounder located on the lower left corner of an input image. Distance of the attention to the confounder is illustrated with a Viridis color-map showing largest distances as dark purple. In this case, this would result in the highest penalty. As the model's (wrong) focus starts to get closer to \(A_{obj}\), the explanation loss would decrease. We used Grad-CAM because it was found to be more sensitive to training label reshuffling and model parameter randomization [26] than other saliency based explanations.
Equations (7) and (8) underpin how we propose to integrate explanation and classification losses. Algorithm 1 shows how this combined loss function is integrated into the overall XBL-D approach. Here, \(G_{n}\) is the center of gravity of objects of interest in input images that are masked with \(A_{obj}\), \(expl_{\theta}(x_{n})\) is a Grad-CAM explanation of input \(x_{n}\) to model \(F\), and \(A_{con}\) is the annotation of a confounding region in \(x_{n}\). A model's incorrect focus on a confounding region is detected using the intersection \(I_{\theta}(x_{n})\) between \(expl_{\theta}(x_{n})\) and \(A_{con}\). The distance between a model's wrong attention to a confounding region and center of \(A_{obj}\) or \(G_{n}\) is then approximated by calculating average of the minimum and maximum euclidean distances, \(d\), between points in \(I_{\theta}(x_{n})\) and \(G_{n}\). This gives us a measure of how far a model's incorrect attention is from the relevant image regions. In (8), \(L_{CE}\) represents the cross entropy loss and \(\lambda\sum_{i=1}\theta_{i}\) is a weight (\(\theta\)) regularization term.
\[L_{expl} =\sum_{n}^{N}d(G_{n},I_{\theta}(x_{n})) \tag{7}\] \[L =\lambda_{1}L_{CE}+\lambda_{2}L_{expl}+\lambda\sum_{i=1}\theta_{i} \tag{8}\]
**Input**: confounded training dataset \(\hat{X}\) and ground-truth category \(Y\), feature annotation of object(s) of interest in \(\hat{X}\): \(A_{obj}\), feature annotation of confounders in \(\hat{X}\): \(A_{con}\).
**Parameters**: classification loss coefficient: \(\lambda_{1}\), explanation loss coefficient: \(\lambda_{2}\), regularization term: \(\lambda\), network parameters: \(\theta\)
**Output**: refined function \(F\)
```
1:\(F\)\(\leftarrow\) Fit function using \(\hat{X}\)
2:repeat
3:\(G\)\(\leftarrow\) center of gravity of objects of interest in \(A_{obj}\).
4:\(expl_{\theta}\)\(\leftarrow\) saliency map explanations of \(\hat{X}\) generated using Grad-CAM.
5:\(I_{\theta}\)\(\leftarrow\) set of intersections between \(expl_{\theta}\) and \(A_{con}\)
6:\(L_{expl}\)\(\leftarrow\) explanation loss as average of the minimum and maximum euclidean distances between points in \(I_{\theta}\) and \(G\)
7:\(L_{CE}\)\(\leftarrow\) classification loss between \(Y\) and \(F(\hat{X})\)
8: Total loss, \(L\)\(\leftarrow\)\(\lambda_{1}\) * \(L_{CE}\) + \(\lambda_{2}\) * \(L_{expl}\) + \(\lambda\sum_{i=1}\theta_{i}\)
9: update \(F\) using \(L\)
10:until\(L\leq\sigma\), where \(\sigma\) is a tolerable total loss
11:return\(F\)
```
**Algorithm 1** Distance-aware eXplanation Based Learning (XBL-D)
### _Activation recall_
In addition to using AP, we propose Activation Recall (AR) to assess visual explanations, such as Grad-CAMs, generated by a trained model. AR measures how much of the relevant parts of test images are considered relevant by a model. This is presented in 9, where (similarly to AP) \(T_{r}\) is a threshold function that finds the (100-\(r\)) percentile and sets elements of \(expl_{\theta}(x_{n})\) below this value to zero and the remaining elements to one.
\[AR=\frac{1}{N}\sum_{n}^{N}\frac{T_{r}(expl_{\theta}(x_{n}))*A_{obj_{n}}}{A_{ obj_{n}}} \tag{9}\]
Instead of selecting a single threshold to assess generated explanations, we compute AP and AR at different thresholds to show impacts of choosing threshold on the evaluation metrics. This also gives us an insight into how explanation evaluation can be misleading without the full information, i.e thresholding.
## IV Experiments
In this section, we describe the datasets, model architectures, and training details used in our experiments to evaluate the performance of XBL-D.
### _Dataset_
In order to validate performance of XBL-D, locations of the confounding regions needs to be known beforehand. For this reason, we used a publicly available decoyed dataset and
Fig. 3: Sample images from MS COCO with confounding regions added to random corners and their corresponding object masks
Fig. 2: [Best viewed in color.] Illustration of a distance-aware explanation loss calculation for an input image (left), Grad-CAM (middle). Distance is represented using Viridis color-map in the right figure. Yellow is for the smallest distance and dark purple for the largest. In this case, the confounding region (in the lower left image region) that is wrongly found relevant is as far as it can be from the important region. Pixel intensity of Grad-CAM on the confounding region is exaggerated for presentation purposes.
created two new decoyed versions of existing datasets for our experiments:
1. Decoy Fashion MNIST3. This was created by Teso and Kersting [9]. 4x4 pixel confounders with random pixel intensities were added to random corners of images from the Fashion MNIST training dataset [30]. The 10,000 images from the test dataset were left clean.
2. Decoy CIFAR-104. We created this dataset by adding 4x4 pixel confounders with random pixel intensities to random corners of the training set of CIFAR-10 dataset. The CIFAR-10 dataset contains a training set of 50000 and test set of 10000 32x32 RGB images categorized into 10 classes. Similar to the Decoy Fashion MNIST, the test set of this dataset was also left clean for evaluation purposes.
3. Decoyed subset of MS-COCO. We extracted a total of 2000 images for training and 600 image for testing, from the _Train_ and _Zebra_ categories of the MS-COCO dataset [31]. We then added 16x16 confounding pixels with random pixel intensities to random corners of the training images, which are of size 224x224. Images in the test set were left clean. We selected the _Train_ and _Zebra_ categories based on the low intersection of objects from both categories. Sample images from this dataset are shown in Fig. 3. We refer to this dataset as Decoy MS-COCO\({}_{(2)}\).
Footnote 3: We collected this dataset at [https://codeocean.com/capsule/7818629/tree/v1](https://codeocean.com/capsule/7818629/tree/v1)
### _Architecture selection and training_
We performed all of our experiments using Tensorflow and Keras5. For all our datasets, we searched for the best model architectures and hyper-parameters using HyperBand algorithm [32] in Keras tuner6. We considered and optimized the hyper-parameters: number and size of convolutional layers, number of pooling layers, number and size of fully connected layers, and learning rate. A Convolutional Neural Network (CNN) with one convolutional layer containing 160 filters and two fully connected consecutive layers of sizes 992 and 800 nodes, and a learning rate = 1.158e-04 was found to perform best for the Decoy Fashion MNIST dataset. For the Decoy CIFAR-10, a CNN with two convolutional layers of filters 250 and 300 followed by one fully connected layer with 912 nodes, and a learning rate = 1.267e-04 was selected. Similarly, we found that a CNN with four convolutional layers (containing 160, 352, 416, and 224 consecutive filters) each followed by a max-pooling layer, one fully connected layer of size 480 nodes, and a learning rate = 1.789e-05 performed best for the Decoy MS-COCO\({}_{(2)}\) dataset.
Footnote 4: [https://osf.io/w5f7y/view_only=abb75f55bfc48fb8c891838f699c0d3](https://osf.io/w5f7y/view_only=abb75f55bfc48fb8c891838f699c0d3)
Footnote 5: [https://www.tensorflow.org/api_docs/python/tf/keras](https://www.tensorflow.org/api_docs/python/tf/keras)
Footnote 6: [https://keras.io/keras_tuner/](https://keras.io/keras_tuner/)
To start with, the selected model architectures are fitted on the corresponding dataset using categorical cross-entropy loss and the Adam optimizer. We refer to the resulting models as _Unrefined_. All the models are then refined using XBL-D. For the Decoy MS-COCO\({}_{(2)}\) dataset, we run 20 epochs of refinement where each epoch took an average of 15 minutes, while for each of the Decoy CIFAR-10 and Decoy Fashion MNIST datasets, we run 50 epochs of refinement each taking averages of 7 and 5 minutes, respectively. Model training was performed on a machine with NVIDIA RTX A5000 graphics card.
Before starting the model refinement using XBL-D, we searched for optimal values of the coefficients of the categorical cross entropy loss (\(\lambda_{1}\)) and explanation loss (\(\lambda_{2}\)) using HyperBand in Keras tuner and we ended up with \(\lambda_{1}=2.7\) and \(\lambda_{2}=0.1\). We searched all hyper-parameters for each of the datasets separately. However, since \(\lambda_{1}\) and \(\lambda_{2}\) influence how XBL-D works, we decided to find one set that should work for the other domains for domain transferability purposes. Hence, the hyper-parameter search of \(\lambda_{1}\) and \(\lambda_{2}\) was performed on the most challenging task among the 3 datasets, which is the decoy MS-COCO\({}_{(2)}\) that contains large RGB images.
## V Results
In this section, we present classification and explanation performance results of our proposed method and compare them against baseline methods.
### _Classification_
Table I presents classification accuracy performance of XBL-D and comparison against baseline methods. On the original test set of Fashion MNIST dataset, our proposed method achieves classification performance of 0.904 surpassing previous XBL methods [33]. The second best performing model was RRR with a classification accuracy of 0.894. None of the available baseline methods were implemented for our Decoy CIFAR-10 and Decoy MS-COCO\({}_{(2)}\). For this reason, we trained a model using the best performing method, RRR, on the decoyed CIFAR-10 and MS-COCO\({}_{(2)}\) datasets for comparison purposes. Again, compared to RRR and Unrefined models, XBL-D achieved superior classification performance on the original test sets of CIFAR-10 and MS-COCO\({}_{(2)}\) achieving accuracies of 0.843 and 0.938, respectively, as is summarized in Table I.
### _Explanation performance_
While AR and AP evaluations, on the original test sets of the Fashion MNIST, MS-COCO\({}_{(2)}\) and CIFAR-10, across thresholds ranging from 40% to 95% with step size = 5 are presented in Figures 4, 5, and 6, Table II presents a summary of evaluations of explanations.
#### V-B1 Fashion MNIST
Our proposed method scores higher than both RRR and Unrefined models using both metrics. At threshold = 40%, XBL-D scored highest values of AR = 0.557 and AP = 0.663 (Table II). Given that higher threshold means considering smaller areas of Grad-CAM, AR values decrease with increasing threshold (see Fig. 4). However, even though AP seemed to decrease with increasing threshold values, it starts to increase at threshold above 90% (we accredit this to the Gray-Scale nature of the decoy Fashion MNIST dataset).
#### V-B2 Cifar-10
Similar to the Fashion MNIST, our proposed method performs higher than both RRR and Unrefined models using both metrics. At threshold = 40%, XBL-D scored highest values of AR = 0.516 (Table II) and at threshold = 95%, XBL-D scored AP = 0.342, outperforming both methods. While AR naturally decreases with increasing threshold, AP increases given the RGB nature of CIFAR-10.
#### V-B3 Ms-Coo\({}_{(2)}\)
Our method scored better AR at lower thresholds and performed comparable to RRR at other thresholds (at threshold = 40%, XBL-D scored AR = 0.860, Table II). Similar to the other datasets, we also found that low threshold values led to higher AR values (see Fig. 6). However, unlike the models trained on the Fashion MNIST dataset but similar to the CIFAR-10, AP values increase with increasing threshold (at threshold=95%, RRR scored highest AP of 0.761, Table II). We accredit this to the RGB nature of the MS-COCO\({}_{(2)}\) dataset.
Sample Grad-CAM outputs of input images from both categories are displayed in Fig. 7. We show sample explanation outputs for the MS-COCO\({}_{(2)}\) images because their high resolution makes them well suited for presentation. While the clean test sets were used in computing AR and AP explanation evaluations, sample of the decoyed images from training set of the MS-COCO\({}_{(2)}\) are shown in Fig. 7 to demonstrate the ability of XBL-D in avoiding confounding regions and to compare it against RRR and the Unrefined model. As is displayed in the sample outputs, our proposed method was able to produce accurate explanations that focus on relevant parts of objects in input images and successfully ignores confounders.
## VI Discussion
In addition to explaining a model's classification output, XBL facilitates a more transparent machine learning process by providing a rich user interaction mechanism. As opposed to the traditional interactive machine learning that is usually performed through instance category labeling, a user would be able to get involved at a deeper level by interacting with model explanations in the machine learning process. In XBL, a user would be able to teach a learner model by observing and commenting on the reasoning (i.e correcting model explanations) behind its predictions. This kind of user engagement has the potential to circumvent the _black-box_ public image of deep learning models since it aims to build a rapport with users by providing a transparent way of interaction with an opportunity to refine the models.
When compared against baseline methods, XBL-D achieved superior performance in classifying all three datasets. We believe this is because it unlearns confounding regions, which were wrongly found relevant by a model, based on their locations and distances from the user annotated relevant regions. As shown in the sample outputs in Fig. 7, a model's focus,
Fig. 4: AR and AP evaluations of explanations generated for a clean Fashion MNIST test dataset using a model trained on the Decoy Fashion MNIST. The evaluations are performed at threshold values ranging from 40% to 95% with step size = 5
Fig. 5: AR and AP evaluations of explanations generated for a clean CIFAR-10 test dataset using a model trained on the Decoy CIFAR-10. The evaluations are performed at threshold values ranging from 40% to 95% with step size = 5
Fig. 6: AR and AP evaluations of explanations generated for a clean MS-COCO test dataset using a model trained on the Decoy MS COCO\({}_{(2)}\). The evaluations are performed at threshold values ranging from 40% to 95% with step size = 5
shown with visual explanations is not strictly bounded and, however good it is, there is always a good chance it might exceed boundaries of relevant region(s). Based on this fact, XBL-D instructs a learner that it is not only acceptable to focus on the user annotated parts but also around it as long as it keeps a distance from the confounding region. Had the explanation loss been based on intersection of generated explanations with the confounding regions, it would penalize the model whenever it focuses on the confounders without consideration for the confounders' locations.
In addition to XBL-D, we observe that the Unrefined model performed better than most of the other XBL models in classifying the decoy Fashion MNIST. We attribute this to the accuracy-interpretability trade-off in deep learning. Although the existence of this trade-off is debated [34, 35], deep learning models that are refined with an explanation based learning could lose performance if the refinement is not performed using a fitting approach such as our proposed method, XBL-D.
We also proposed an interpretability metric, Activation Recall (AR). AR measures how much of the user annotated relevant image regions were actually considered relevant by a trained model. It circumvents a possible over-confidence that may result from mainly focusing on explanations (saliency maps in this case) during explanation evaluation. By redirecting the focus from explanations to ground-truth annotations, AR provides a reliable metric for explanation evaluation. We recommend AR should be used in conjunction with AP for a reliable assessment of model explanations.
Objective evaluations of generated explanations of test images of employed datasets across different thresholds also show that XBL-D performs better than RRR and Unrefined models in generating accurate explanations. Threshold selection is important in computing AR and AP of generated explanations. In all the datasets, we observed that low threshold values lead to higher AR while the opposite is true for AP. This is because parts of Grad-CAM considered for AR calculation increase with decreasing threshold. We also note that the Gray-Scale nature of Fashion MNIST affects AP values and it plummets with increasing threshold, but recovers after threshold = 90%.
In addition to performing better at objective evaluations, XBL-D also outputs visually accurate saliency maps compared to the RRR and Unrefined models as can be seen in Fig. 7. We were able to observe that XBL-D is better than RRR and the Unrefined models at localizing objects of interest in input images.
## VII Conclusion
In this paper we proposed and demonstrated superior performance of XBL-D, a distance-aware explanation loss for XBL loss function augmentation. This introduces a new direction for XBL research: the consideration of the distance of a model's wrong attention from relevant regions. XBL-D was able to achieve superior classification and interpretability performance compared to baseline methods on three different datasets. This assures that our proposed method generalizes across different datasets.
Fig. 7: Sample Grad-CAM Outputs. Original size of all Grad-CAM images was 14x14; They are up-sampled to 224x224 for easier comparison against input images.
## Acknowledgment
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
2309.12034 | A detection analysis for temporal memory patterns at different
time-scales | This paper introduces a novel methodology that utilizes latency to unveil
time-series dependence patterns. A customized statistical test detects memory
dependence in event sequences by analyzing their inter-event time
distributions. Synthetic experiments based on the renewal-aging property assess
the impact of observer latency on the renewal property. Our test uncovers
memory patterns across diverse time scales, emphasizing the event sequence's
probability structure beyond correlations. The time series analysis produces a
statistical test and graphical plots which helps to detect dependence patterns
among events at different time-scales if any. Furthermore, the test evaluates
the renewal assumption through aging experiments, offering valuable
applications in time-series analysis within economics. | Fabio Vanni, David Lambert | 2023-09-21T12:56:16Z | http://arxiv.org/abs/2309.12034v1 | # A detection analysis for temporal memory patterns at different time-scales
###### Abstract
This paper introduces a novel methodology that utilizes latency to unveil time-series dependence patterns. A customized statistical test detects memory dependence in event sequences by analyzing their inter-event time distributions. Synthetic experiments based on the renewal-aging property assess the impact of observer latency on the renewal property. Our test uncovers memory patterns across diverse time scales, emphasizing the event sequence's probability structure beyond correlations. The time series analysis produces a statistical test and graphical plots which helps to detect dependence patterns among events at different time-scales if any. Furthermore, the test evaluates the renewal assumption through aging experiments, offering valuable applications in time-series analysis within economics.
**Keywords:** Time-scales memory, Statistical test, Renewal processes, Bursty events, Machine learning technique, Econometrics
## 1 Introduction
Latency in counting the events so to have delayed processes and performing a exchangeability test using time-windows of observations which are period between the initiation of something and the occurrence. Bursty renewal patterns in evolving systems can be studied using a temporal-scale perspective of the inter-arrival event time series, possibly, revealing blocks of memory in events. Renewal theory as been deeply discussed by the seminal works of Feller (1968, 1991); Cox (1967) and it began with the study of stochastic systems whose evolution through time was interspersed with renewals or regeneration times when, in a statistical sense, the process began anew. The importance of searching for recurrent patterns is due to the fact that the existence of repetitive scheme makes it always possible to discuss essential features of a sequence of random variables in spite of the laws governing such sequence could be so intricate to preclude a complete analysis. For example, the study of recurrent patterns can circumvent the impossibility of a straightforward analysis of a possible non-markovian behavior of some stochastic processes (Smith, 1958).
Renewal and regenerative processes are models of stochastic phenomena in which an event (or combination of events) occurs repeatedly over time, and the times between occurrences are independent. The theory does not need to specify the meaning or effect of single events, and this is reason why renewal processes are at the core of many stochastic problems found throughout all fields of science.
The field of complex systems can be used as a common framework where many heterogeneous interacting agents produce a systemic bursty dynamics with non-ordinary statistics. In particular, temporal networks represent a cross-road for many disciplines towards a common understanding of the backbone core of any natural systems. In particular the analysis we proposed will be applied to some model of networks to test its validity.
A critical point for processes with renewal patterns is the intrinsic difficulty to assess if a real world process entails the presence of such recurrent events over its evolution. consequently, it is of great importance the development of a statistical tool which may detect the presence of renewal events. In literature, this is a challenging issue has been assessed in different ways. A standard statistical tool to detect the presence of renewal events has been determined through a statistical test directly derived by the property of ergodic processes with finite moments of distribution of the inter-arrival times. The authors (Wang and Coit, 2005; Bain, 1991) define different hypothesis tests to determine whether and how the pattern of events are significantly renewal by analyzing both homogeneous and non-homogeneous Poisson processes.
Another popular and well known tool is often used in terms of correlation analysis between inter-events time intervals (Perkel et al., 1967a,b; Avila-Akerberg and Chacron, 2011). An important statistics, which quantifies the correlations among events is the serial correlation coefficient (SCC).
We, in alternatively, propose a statistical test for renewal processes based on the property of aging of such systems when we observe events a later times of observation. Typical Poisson-types process does not show any aging, on the contrary fat-tails inter-events's distributions show such aging property which typically makes the previous described tool for renewal assessment useless.
The present aging-based renewal test can also provide deep insights renewal and not-renewal properties of the process for different time-scales so contributing to a better understanding of processes with mixed type of events or the presence of process which behaves differently for different scales, or the presence of truncations in finite-size systems.
We will also devote our analysis to the case where a renewal event might be masked by a cloud of secondary events, of Poisson nature, generating the wrong impression that the process is not renewal, and that its memory is a property of the individual trajectories.
In the paragraph 2, we will provide an overview of the importance of the study of renewal process in economics and other sciences, making a review of the key features of renewal process as well as of the aging properties of those.
In paragraph 3, we develop the statistical tool describing the steps needed to test the significant presence of renewal property in the observed processes. We start with synthetic time series whose renewal nature is theoretically known, in order to validate our statistical test. In paragraph 4, we apply our test analysis to real world time series providing variation of the statistical test in the case of data with low number of samples from big data up to single realizations.
## 2 Memory between events and the aging experiment
Let us consider a counting process \(N(t)\) that counts the number of some type of events occurring during a time interval \([0,t]\) and let us suppose \(0\leq t_{1}\leq t_{2}\leq\ldots\)are finite random times at which a certain event occurs. The number of the times \(t_{n}\) in the interval \((0,t]\) is:
\[N(t)=\sum_{n=1}^{\infty}\mathbf{1}(t_{n}\leq t),\quad t\geq 0 \tag{1}\]
we will consider \(t_{n}\) as points (or locations) in \(\mathbb{R}^{+}\) with a certain property, and \(N(t)\) is the number of points in \([0,t]\). The process \(\{N(t):t\geq 0\}\), denoted by \(N(t)\), is a point process on \(\mathbb{R}^{+}\). The \(t_{n}\) are its occurrence times (or point locations)1. The time elapsed between consecutive events are random variables represent the inter-occurrence times \(\tau_{n}=t_{n}-t_{n-1},\text{ for }n\geq 1\). The \(t_{n}\) are called renewal times, and \(\tau_{n}\) are the inter-renewal times (or waiting times), and \(N(t)\) is the number of renewal events in \([0,t]\).
Footnote 1: The point process N (t) is simple if its occurrence times are distinct: \(0<t_{1}<t_{2}<\cdots\) a.s. (there is at most one occurrence at any instant).
The epoch of the \(n\)th occurrence is given by the sum:
\[S_{n}=\tau_{1}+\cdots+\tau_{n} \tag{2}\]
As an example. a simple point process \(N(t)\) is a renewal process if the inter-occurrence times \(\tau_{n}=t_{n}-t_{n-1},\text{ for }n\geq 1\), are independent with a common distribution \(\psi\), where \(\psi(0)=0\) and \(t_{0}=0\).
Those waiting time random variables are called exchangeable if their distribution function is symmetric, so that event series has serial dependence if the value at some time \(t\) in the series is statistically equivalent to any other event at another time.
A finite sequence of random variables \((\tau_{1},\ldots,\tau_{n})\) is called exchangeable if \(\forall n\geq 2\),
\[(\tau_{1},\tau_{2},\ldots,\tau_{n})\stackrel{{ D}}{{=}}(\tau_{\pi( 1)},\tau_{\pi(2)},\ldots,\tau_{\pi(n)})\qquad\forall\pi\in S(n) \tag{3}\]
where \(S(n)\) is the group of permutations of \((1,2,\ldots,n)\)(Aldous, 1985; Niepert and Domingos, 2014). This clearly implies (assuming existence) that means and variances are constant (stationarity). Clearly, independent identically distributed variables are also exchangeable but the opposite is not true in general, for example using the de Finetti's theorem, a exchangeable infinite sequence can be expressed as a mixture of underlying iid sequences (Finetti, 1982; Shanbhag, 2001; Kallenberg, 2005), so exchangeability is meant to capture symmetry in a problem, symmetry in a sense that does not require independence. Exchangeability generalises the notion of a sequence of random variables being iid and in frequentist approach to statistics obsereved data is assumed to be generated by a series of iid RVs with distribution parameterised by some unknown \(p\) which, on the contrary from a Bayesian perspective, it has some prior distribution, so the random variables which give the data are no longer independent. As regard with point processes (Huang, 1990)
Similarly, a time series has serial correlation if the condition holds that some pair of values are correlated rather than the condition of statistical dependence. In particular, a sequence of random variables is independent and identically distributed (iid) if each random variable has the same probability distribution as the others and all are mutually independent, i.e. for \(n\) random variables s \((\tau_{1},\ldots,\tau_{n})\) we have:
\[P(\tau_{1},\tau_{2},\ldots,\tau_{n})=\prod_{i=0}^{n}P(\tau_{i}) \tag{4}\]
which is the basic property required in the definition of renewal processes. However, generally, statistics and in particular machine learning have the purpose of discovering statistical dependencies in data, and the use of those dependencies to perform predictions using the fact that future observations of a sequence behave like earlier observations. A formalization of the notion of "the future predictable by past experience" is the exchangeability of random variables.
Finally, we apply our _XA_ test to a sequence of events which is not iid but it has the property of exchangeability. We replicate a classic process to generate exchangeable binary sequence: the Polya urn model (Hill et al., 1987).
## 3 Statistical aging experiment
To turn the theoretical prediction that would make it possible to establish renewal aging though an ensemble observation, we have to find a way to establish renewal aging observing a single sequence. Fig. 1 illustrates how to make the renewal aging assessment using a single realization. We move a window of size \(t_{a}\) along the time series, locating the left size of the window on the time of occurrence of an event. The window size prevents us from assessing if there are or not events before the end of the window. We record the time distance between the end of the window and the occurrence time of the first event that we can perceive. The moving window serves the important purpose of mimicking the use of a very large number of identical systems. In fact, if non-stationarity is not due to changing with time rules, the exact moment when an event occurs can be selected as time origin of the observation process. Beginning our observation process at a distance \(t_{a}\) from the occurrence of an event can be done with the events of the time series under study. This is the purpose of the moving window of Fig. (1).
Using the intermittence jargon we call _laminar region_ the time interval between the occurrence of two consecutive events. It is evident that the times that we record are portions of the original laminar regions.
In this case the aging experiment illustrated by Fig. (1), generating only fractions of the original laminar region, has the effect favoring the long-time laminar regions, because cutting a very large laminar region may have the effect of leaving very extended also the laminar region produced by the delayed observation. The short-time laminar regions are affected much more from the delayed observation.
## 4 Renewal-Aging test
In order to asses a statistical measure of the renewal patterns in a sequence of events, we build our hypothesis upon a well define aging experiment in the previous paragraph.
We finally consider the problem of multiple testing of a single hypothesis, with a standard goal of combining a number of p-values without making any assumptions about their dependence structure. We will use a combined probability test to combine the results from several independent tests bearing upon the same overall hypothesis.
1. \(\forall t_{a}\) latency, a point-wise significance test analysis using the aged distributions and the reshuffled version: 1. perform the two/sample test techinque (i.e. Kolmogorov-Smirnov or Permutation test) to verify the hypothesis that the the original aged sequence and a shuffled aged one have the same distribution (null hypothesis) 2. Check if the distribution of the \(p\)-values obtained by the test are uniformly distributed and compute Fisher's combined \(p\)-value as age-wise significance test.
2. Perform a \(p\)-value boxplot over different ages \(t_{a}\) (latency) for a qualitative overview of the renewal property for each \(t_{a}\). We also computes the geometric means of \(p\)-values for every age \(t_{a}\)
3. As a global statistical evidence one can test the property of the behavior of geometric-mean for each box respect to the expected distribution under the hypothesis the process is renewal.
First we discuss the test for a given age \(t_{a}\) for which an aging experiment has been carried out.
### Two samples tests
We want to statistic quantifies a distance between the empirical distribution functions of the two kind of aged-samples derived from the aging procedure previously explained: the original aged sequence of inter-arrival times and the reshuffled aged sequence one.
One of the central goals of data analysis is to measure and model the statistical dependence among random variables. Empirical distribution functions have been used for studying the serial independence of random variables at least since Hoeffding [62].
The discussion about two-sample tests also applies to the problem of testing whether two random variables are independent. The reason is that testing for independence really amounts to tasting whether two distribution are the same, namely, the joint distribution and the product distribution.
Figure 1: Illustration of the aging experiment to establish the renewal nature of the process. _I_) we age the sequence of events using the observation time \(t_{a}\) from which the observer is ready to detect the next coming event, then the observer cannot detect any other events before another time \(t_{a}\) is passed by, so registering the new aged inter-arrival times \(\tau_{i}\) if the figure. As consequence, the rate of the observation of the process could have impact on the statistical property of the inter-arrival intervals. _II_) we collect the new (aged) time intervals which is the aged experimental distribution \(\Psi_{t_{a}}^{(exp)}(\tau)\). _III_) is the last step of the aging experiment, we just reshuffle the aged time-intervals so having a distribution \(\Psi_{t_{a}}^{(rem)}(\tau)\) which has to be equal to the experimental distribution if the process is renewal.
There are many statistical tools which can provide such hypothesis testing, and we will focus the attention to the well know Kolmogorov-Smirnov test (K-S) which is a non-parametric distribution free statistical hypothesis procedure for determining if two samples of data are from the same distribution (Kolmogorov, 1933; Smirnov, 1933)2
Footnote 2: There are many alternatives which also improve the K-S test, for example the Anderson-Darling test or The Cramer-von Mises test, but we use the K-S as main reference of our Renewal hypothesis as a standard and well assessed procedure in the statistical literature.
The combination of many K-S test is performed through a such prescription can be developed for independent observations under the same hypothesis, so it can be applied for artificial data from model simulations and from big data analysis where many independent measurements has been performed over the same process, as different neurons during their spiking activities.
Let us indicate the aged time interval sample as \(m\) i.i.d. random variables \((\tau_{\iota}^{(1)},\ldots,\tau_{\iota_{\iota}}^{(m)})\) and let us indicate another independent sequence obtained where the aged time interval have been shuffled as \(n\) i.i.d. random variables \((s_{\iota_{\iota}}^{(1)},\ldots,s_{\iota_{\iota}}^{(n)})\). Let \(\mathcal{T}_{m}(z)\) and \(\mathcal{S}_{n}(z)\) the corresponding empirical distribution functions and the new random variable \(D_{m,n}\) by
\[D_{m,n}=\sup_{z}|\mathcal{T}_{m}(z)-\mathcal{S}_{n}(z)| \tag{5}\]
and using the Glivenko-Cantelli theorem (Van der Vaart, 2000; Gibbons, 2011) which guarantees that the two empirical distributions have samples made up from the same distribution, the statistic \(D_{m,n}\) almost surely converges to zero. Such test statistic is appropriate for a general two sided hypothesis test:
\[\mathcal{H}_{0} :\psi_{\tau}(z)=\psi_{s}(z)\qquad\text{for all }z\qquad(\text{\emph{exchangeable events}}) \tag{6}\] \[\mathcal{H}_{1} :\psi_{\tau}(z)\neq\psi_{s}(z)\qquad\text{for some }z\quad(\text{\emph{not-exchange events}}) \tag{7}\]
The p-value for statistic \(D_{m,n}\) may be obtained by evaluating the asymptotic limiting distribution \(Q(t)\) as:
\[p=\Pr(D_{o}\leq D_{m,n}|\mathcal{H}_{0}) \tag{8}\]
where \(D_{o}\) is the observed value of the two-sample K-S test statistic, as consequence we obtain the value that is the probability that the observed statistic occurred by chance alone, assuming that the null hypothesis is true.
In order to get the value of eq.(8) one could use the analytical approach used in (Feller, 2015) where:
\[\lim_{m,n\rightarrow\infty}\Pr\left(D_{m,n}\sqrt{\frac{n+m}{nm}}\leq z\right) =1-2\sum_{i=1}^{\infty}(-1)^{i-1}e^{-2i^{2}z^{2}}=:Q(z) \tag{9}\]
where \(Q(z)\) is the c.d.f of Kolmogorov-Smirnov distribution and so \(D_{m,n}\) serves as a consistent test statistic for our hypothesis test.
Following the approach in Stephens (1970), we can numerically determine the \(p\)-value as \(p\simeq 1-Q(\lambda)\) where \(\lambda=D_{o}\cdot\left(\sqrt{(n+m)/nm}+0.12+0.11/(\sqrt{(n+m)/nm}\right)\) which becomes asymptotically accurate as \(nm/(n+m)\geq 4\)3.
Footnote 3: This is a numerical procedure used in most of K-S algorithm in the main scientific programming language based on the Press et al. (2007, ch.14) in C, Matlab, STATA, R and many others. An alternative approach is by comparing the test statistic \(D_{m,n}\) with a critical value \(c_{\alpha}\) where \(\Pr(D_{m,n}\geq c_{\alpha}|\mathcal{H}_{0})\leq\alpha\), so obtaining the rejection decision if \(D_{o}>c_{\alpha}\sqrt{(n+m)/nm}\) where the critical values \(c_{\alpha}\) can be obtained from tables.
Let us notice that we will also make use of computational approaches to testing statistical hypotheses such as two sample permutation test especially useful when the assumptions of K-S test are violated: for example K-S test is exact only for continuous variables. but it is conservative for discrete variables, so in the case of small samples the non-continuous variables have a significant effect on the test, in alternative we will make use of computational statistical tests as the permutation test approach.
Another violation of K-S family tests is the case when the two samples are not mutually independent or when the sample are not completely random. In those cases we will devote a discussion in how to detect and try to avoid or minimize this sort of artifact dependence among data.
The K-S test is originally used to asses if a single observation can fit with the hypothesis of a renewal sequence in a single realization. However, in the case we have many independent sequences we can perform multiple hypothesis testing for the renewal assumption. This is the case when the sequences are synthetic realizations derived from models so it is always possible to perform as many tests we want so to have a more reliable outcome of the \(p\) values about
the renewal property of the underlying process. Another situation in which we can perform multiple testings is in the presence of big amount of data made up of independent observations of the same (or at least equivalent) process where. for example, we can ran a statistical test on each gene in an organism, or on demographics within each of hundreds of counties keeping the tests independent among them.
In those cases the challenge would be to find a suitable procedure to combine the results from several independent tests bearing upon the same overall hypothesis (renewal assumption)4.
Footnote 4: Such research question would be distinguished from another type of multiple hypothesis testing of statistical comparison of many competing hypotheses in order to discover hidden processes underlying observed patterns of data (called data dredging or p-hacking).
### Meta-analysis
Once we have chosen the statistical test to assess the equivalence between the two distributions derived from the aged-sequence and a reshuffled one, one can produce many two-sample comparisons producing many \(p\)-values obtained through a chosen two-sample test (K-S in this case). Combining p-values from independent statistical tests is a popular approach to meta-analysis, in particular we will introduce a procedure of combining the information in the \(p\)-values from different renewal statistical tests in order to obtain a single overall test under the assumption that the tests are statistically independent. There are many methods for combining \(p\)-values in a single test of common hypothesis as extensively shown by Loughin (2004).
Basically, our analysis is based on the approach in Fisher (Fisher, 1932) about a consistent way to combine \(p\)-values coming from independent repeated tests over the same null hypothesis.
Consider a set of \(N\) independent hypothesis tests, each of these to test a certain null hypothesis \(\mathcal{H}_{0i}\), \(i=\{1,2,\ldots,N\}\). For each test, a significance level \(p_{i}\) (p-value) is obtained. All these \(p\)-values can be combined into a joint test whether there is a global effect, i.e., if a global null hypothesis \(\mathcal{H}_{0}\) can be rejected. The test is based on the fact that the probability of rejecting the global null hypothesis is related to intersection of the probabilities of each individual test. If the underlying test statistics \(D_{1},\ldots,D_{N}\) have absolutely continuous probability distributions under their corresponding null hypotheses, the joint null hypothesis for the \(p\)-values is \(\mathcal{H}_{0}:p_{i}\sim U[0,1]\): so the several \(p\)-values are considered as random variable which is uniformly distributed when the global null hypothesis is true.
Let us to stress here that the geometric mean of a set of \(p\)-values is \(g_{p}=\left(\prod_{i=1}^{N}p_{i}\right)^{1/N}\), no matter how alike or different between the individual elements, so the geometric mean is not technically a combined \(p\)-value but it is the "best" average for the \(p\)-value (Vovk and Wang, 2018).
We will use the Fisher's approach for a qualitative and quantitative statistical clarity of our renewal hypothesis testing, having in mind that each test is performed for different ages \(t_{a}\), so that the overall test is spread over all the possible \(t_{a}\), where a pure renewal processes would always accept the null renewal hypothesis for any \(t_{a}\).
In order to consider an combined measure of many independent \(p_{i}\) p values, a test based on the geometric mean is a preferable since it is consistent, in the sense that it can not fail to reject the overall test null hypothesis although the result of one of the partial tests is extremely significant.
Under the null hypothesis of renewal assumption, let us call \(N\) the number of \(p_{i}\)-values from \(N\) independent K-S tests, under the null, the geometric mean of uniformly distributed \(p\)-values has a probability density function as
\[\rho_{N}(g_{p})=\frac{N}{\Gamma(N)}(-Ng_{p}\log g_{p})^{N-1}\mathbb{I}_{(0,1) }(g_{p}) \tag{10}\]
So, under the null hypothesis, it is expected that the geometric mean variable has the following mean and variance:
\[E[g_{p}] =\mu_{0}=\left(1+1/N\right)^{-N} \tag{11}\] \[\to e^{-1}\quad\text{ for }N\to\infty\] \[\text{Var}[g_{p}] =\sigma_{N}^{2}=\left(1+2/N\right)^{-N}-\left(1+1/N\right)^{-2N}\] (12) \[\sim e^{-2}/N+O(1/N^{2})\to 0\quad\text{ for }N\to\infty\]
### Overall time-scales test : XA plots
In the previous paragraphs we first defined the tools to compare two aged distributions coming from ordinary and shuffled inter-arrival time intervals. We performed many repeated independent tests obtaining many \(p\) values for each repetition of the K-S test which we combine in unique average \(p\) value for each time-scale (age) \(t_{a}\).
Finally, in the last step, we perform the same repeated K-S tests for different length of observation time \(t_{a}\) viewing at the geometric mean \(g_{p}(t_{a})\) for different temporal-scales of agings.
At this purpose we can construct the Renewal-Aging (R-A) plots which shows the geometric mean points of p-values over different ages, then a stripe is shown which indicates a 95% confidence interval around the expected geometric mean. If the computed \(g_{p}(t_{a})\)'s are statistically compatible with the renewal assumption those points stays within the stripe with the correspondent expected geometric mean. Moreover light gray bars are shown for each \(t_{a}\), those are the blox plots showing the distribution of p-values over \(N\) different K-S test for each particular \(t_{a}\). In Fig.3 we show the typical R-A plot for 20 ages \(t_{a}\). Under the null hypothesis one expect to see a uniform distributied box-plot where for each age \(t_{a}\) one sees a uniform distribution of p-values for serveral K-S tests, and the geometric mean should be around its expected value within a certain expected deviation.
A process is intended to be "pure" renewal or "pure" not-renewal if it exhibits always the same inter-events dynamics for every time scales \(t_{a}\), as in the case of exponential and power-law distribution presented here. In such cases it is possible to derive a final unique index of significance of the global renewal test for every ages \(t_{a}\).
Despite the R-A test is quite simple it is important to discuss the meaning of some parameters in the test as reported in Table 1. First of all we can fix a smallest and largest ages which gives the minimal scale of memory and the maximal one in the sequence of events. In particular the minimal \(t_{a}^{min}\) is fixed so that we have observation rates which can acctualy can aged the observed sequence of events. The maximum \(t_{a}\) instead can be considered as
Figure 2: Meta-analysis of \(X\!A\) test per age. Per each observation rate \(t_{a}\), in step a) we can generate \(N\) pairs of realizations of the process (synthetic ones or observations from data), \(A_{i},B_{i}\) which are independent. In step b) the aging experiment of the two inter-arrival times is performed, in one case (i.e. the sequence). In step c) A two sample statistical significance is performed (e.g. K-S test) and so computing the \(p\)value of the \(i\)-th comparison test. After collecting a vector of \(p\)-values \(\{p_{1},\ldots,p_{N}\}\) which have to be uniform distributed under the null hypothesis there is no memory in the system (the process is renewal). The procedure will be repeated for different time scales \(t_{a}\) so aging the observed distributions
proportional to length of the sequence multiplied by the average rate of the waiting times between two consecutive events. This is done in order to have enough samples to perform the two-sample comparison test at large ages. The temporal resolution \(T_{a}\) is the number of ages \(t_{a}\) and it is connected to the resolution of temporal scales of the _XA_ plots since it is connected with the increments between two consecutive ages. There is no limit to such parameters since it always increase the number of geometric means for which we make the _XA_ test. Finally the number of K-S test for each \(t_{a}\) is kept constant for all the ages and it sets the precision of the _XA_ test since increasing \(N\) we have that, under the null, the standard error of geometric means tends to zero, so that the amount of chance fluctuation we can expect in sample estimates will reduce. This number \(N\) represents also the degree of freedom in the Fisher's combined test so we decided to keep this name, and its only constraint for such parameter is the computational speed of the test.
A global test statistics is the standard core of the sample mean of \(\{g_{p}(t_{a})\}\):
\[Z_{g}=\frac{\overline{g_{p}}-\mu_{0}}{\sigma_{n}}=\frac{\overline{g_{p}}-1/e}{ \sqrt{e^{-2}/N}}=\sqrt{N}\left(e\,\overline{g_{p}}-1\right) \tag{13}\]
and \(Z_{g}\to 0\) for \(n\rightarrow\infty\) under the null hypothesis, otherwise, under the alternative hypothesis, \(Z_{g}\) diverges.
For large samples, the test statistic is approximately distributed as a standard normal distribution according to the central limit theorem. Therefore, using a lower-tailed test we can reject the null hypothesis of a renewal process if \(Z_{g}<z_{-\alpha}\) where \(\alpha\) is a given confidence level accepting the alternative hypothesis of memory between the events, whenever \(Z_{g}\leq\mu_{0}+z_{1-\alpha}\sigma_{n}\). Otherwise, a upper-tailed test given \(Z_{g}>z_{\alpha}\) can detect positive dependence in the samples we used in the test which has nothing to do with a possible correlation in the sequence of events. Such positive dependence,for example can arise if one uses not independent samples in the K-S tests or a poor performance of K-S procedure if the sample size is low (discrete samples when the continuous sample approximation fails). However this
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(t_{a}^{max}\) & \(T_{a}\) & \(N\) \\ \hline \hline \(\sim L/\langle\tau\rangle\) & (_free_) & _dof_ (_free_) \\ \hline largest & _temporal resolution_ & _statistical precision_ \\ memory block & test’s sample size & population sample \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of the \(R\)-\(A\) test. Where the length \(L\) is the number of events in the data sequence \(\{\tau_{i}\}_{i=1,-L}\). Then, \(T_{a}\) is the number of \(t_{a}\)’s used in the repeated tests so that \(\{t_{a}\}_{i=1,-T_{a}}\). Moreover, \(T_{a}\) also represents the number of geometric means calculated in the whole statistical procedure, and \(T_{a}\) is considered the sample size of the test We can also notice that another parameter stricly related to \(T_{a}\) is the step-size \(\delta_{t_{a}}=t_{a}^{max}/T_{a}\) which represents the sampling interval between two consecutive geometric means in the \(R\)-\(A\) plots. On the other side, \(S\) is related to the variability of the test’s sample, as consequence increasing \(N\) we reduce the variance of the geometric mean variables so increasing the statistical precision of the test.
Figure 3: R-A plot for a renewal exponential inter-arrival times. Since the process is renewal each time-scale \(t_{a}\) is the age at which the repeated p-values are evaluated in the \(N=100\) Kolmogorov-Smirnov tests. One can see the boxplot for each \(t_{a}\) which is a uniform distribution so that the geometric mean is a random variable which stays within the 95%confidence interval in the gray stripe around its mean value \(1/e\), the gray horizonatd dotted line. As results the test cannot reject the null hypothesis of renewal events so not revealing significant presence of memory between events at all the time-scales of the process.
case indicates an artifact in the test which has to be taken in account and the presence of such spurious artifact should be discussed in details separately.
It is important to evaluate the effect of the test parameter \(N\) (number of trials in repeated two-sample statistical tests) and \(T_{a}\) (number of time scales \(t_{a}\) ages), since the have an impact on the power of the test in accepting the presence of memory in the event sequence when the renewal hypothesis is false. The power of a lower-tailed z-test is:
Power \[=Pr(Z_{g}\leq\mu_{0}+z_{1-\alpha}\,\sigma_{n}|H_{1})\] \[=1-Pr\left(\frac{Z_{g}-\mu_{1}}{\sigma_{n}}\geq\frac{\mu_{0}-\mu _{1}}{\sigma_{n}}+z_{1-\alpha}\Big{|}H_{1}\right)\] \[=1-\Phi\left(\frac{\mu_{0}-\mu_{1}}{\sigma_{n}}+z_{1-\alpha}\right)\]
where \(\Phi\) is the cdf of a normal distribution where we used \(z_{\alpha}=-z_{1-\alpha}\), and \(H_{1}\) is the alternative hypothesis of non-renewal process with the consequent presence of memory between the events.
It is worth to point out how \(N\) (p-value trials on repeated two-sample tests) and \(T_{a}\) (the sample size of the geometric mean i.e. number of time-scales \(t_{a}\)) are two free parameters which can be chosen in order to get a desired spatial and temporal resolution respectively. If they are increased also the power of the test increases since it increase the ability to reject the null hypothesis when the null hypothesis is false, so revealing the presence of memory when the assumption of lack of memory is actually false.
In Fig.4 we plot the power of the z-test for different values of the parameters \(N\) and \(T_{a}\) revealing that both increases the power of the test, so in principle one should prefer to use large values for those parameters in order to detect the presence of memory.
However, the _XA_ plots can reveal also an heterogeneous behavior of the process at different time-scales of the observer's rate, so a straight Z-test is not suggest without first considering the _XA_ plot in its complete version, before proceeding with a final test on the overall hypothesis.
Another useful test for assessing the overall renewal hypothesis is the Maximum Likelihood estimation of our sample of geometric means fitted respect to a normal distribution. So we can set a change of variable to transform the geoemtric mean \(\rho_{N}(x)\) distribution into a normal distribution \(G(y)\) with a given mean \(\mu_{y}\) and variance \(\sigma_{y}^{2}\). We can find a transformation \(y=h(x)\) with the Jacobian factor so that the transformation is \(G(y)=\rho_{N}(x)\frac{1}{|\rho_{N}^{2}(x)|}\).
The final solution of such transformation is:
\[h(x)=\sqrt{2\pi\sigma_{y}^{2}}\;\mathrm{erf}^{-1}\left[\pm\frac{2\Gamma(N,-N \log(x))}{\Gamma(N)}\right]+\mu_{y} \tag{14}\]
Figure 4: Power of lower one-tailed Z test of XA plots at 5% confidence level.The probability of rejecting the renewal hypothesis given that the alternative non-renewal hypothesis is true since memory between events is significantly present. In the figure (a) we plot the power of the z-test for different number of total number of repetition of K-S test for single time-scale \(t_{a}\) with a total number of \(T_{a}=100\) ages. In fig (b) we plot, instead, the power of the test in the case we increase the number of time-scales \(T_{a}\) for a given \(N=100\).
Using the new Gaussian random variables of the geometric means we can perform a fit to the expected theoretical normal distribution, see Fig.5 where the transformation is computed in order to have a normal distribution with the same mean and variance of the original geometric mean distribution.
However whatever one wants to use, z-test or MLE-normal fit, it is always strongly suggested to not use those tests by itself but it has to be always associated to the graphical inspection of the _XA_ plots in order to detect memory in events at different time scales.
Figure 5: Distribution of the geometric mean variable \(g_{p}\) and its transformed gaussian variable under the function \(h(\cdot)\) for different values of repeated trials \(N\). As shown in (b), for \(N\rightarrow\infty\) the expected value is \(e^{-1}\) (the dotted vertical line) and zero variance.
## 5 Validation of the _Xa_ test
In this section we will validate the R-A test on synthetic realizations of events' sequences derived from models from which we know to be renewal or not-renewal by the mathematical property of the process. We wil apply the statistical technique to renewal, not-renewal and mixed processes, to highlight the effectiveness and usefulness of R-A plots as general tools to detect memory between events.
We, so, provide a couple of example where the sequence of inter-event times are dependent but not correlated.
First, let us consider a simple, auto-correlated volatility structure which can generate a sequence of samples which mimics dependent time-intervals which are not-correlated.
Let \(z_{t}\) be a i.i.d. normally distributed random variables, \(z_{t}\sim\mathcal{N}(0,1)\), and let \(\sigma_{t}\) follow an \(AR(1)\) process \(\sigma_{t}=\beta\sigma_{t-1}+s\,\epsilon_{t}\) where \(\epsilon\sim\mathcal{N}(0,1)\). Finally, let us define the time-intervals as:
\[\Delta_{t}=\exp\{\,z_{t}\,\sigma_{t}\} \tag{15}\]
where \(\Delta_{t}\) and \(\Delta_{t-1}\) are uncorrelated but clearly dependent.
If series values are independent, then nonlinear instantaneous transformations such as logarithms, exponential, absolute values, or squaring preserve independence.
However, the same is not true of correlation, as correlation is only a measure of linear dependence. For example, in financial time series analysis, higher-order serial dependence structure in data can be explored by studying the autocorrelation structure of the absolute returns (of lesser sampling variability with less mathematical tractability) or that of the squared returns (of greater sampling variability but with more manageability in terms of statistical theory). If the returns are independently and identically distributed, then so are their transformations. Hence, if the absolute or squared returns admit some significant autocorrelations, then these autocorrelations provide evidence against the hypothesis that the returns are independently and identically distributed.
### Synthetic Renewal Processes
As preliminary example, we use an homogeneous Poisson process whose inter-arrival times are exponential distributed and the events are renewal. The inter-arrival times has distribution \(\psi(\tau)=\lambda e^{-\lambda\tau}\) and the aged distribution \(\psi_{t_{a}}(\tau)=\lambda e^{\lambda t_{a}}e^{-\lambda\tau}\), since it is a renewal point process we expected to not reject the null hypothesis for all the \(t_{a}\) ages. In Fig. 7 we plot the aging renewal hypothesis testing on a process with characteristic time \(\lambda=1\) performed
Figure 6: Statistical test for memory of inter-events time intervals from eq.(15) with \(b=0.97,s=0.89\) in terms of dependency (a) and in terms of correlation only (b)
over \(N=100\) K-S independent tests. The boxplot represents the distribution of the \(p_{i}\) values which are uniformly distributed for every \(t_{a}\).
In practice, the statistic requires a relatively large number of data points to properly reject the null hypothesis (Conover, 1999, ch.6). As a consequence, in setting the domain of \(t_{a}\) one have to take in account the length of the observed time intervals sequence \(T\) in such a way tht the number samples of the two distributions in the K-S tests should have \(nm/(n+m)>4\) and \(\min\{n,m\}>30\) in order to make the K-S test to work properly. In our case we have a total number of events for the samples is \(n\approx\lambda T=3\cdot 10^{3}\), so the maximum age is \(t_{a}^{max}=n/30=100\)
Other then poisson processes with exponential inter-events time distributions, we now consider non-poisson renewal processes with power-law inter-events time distribution. For this purpose we could use the Manneville map approach (Aquino et al., 2001) which produces renewal events whose waiting times are distributed exactly as in eq.(39) as a Pareto-like distribution, we show in Fig.8 the results of the _XA_ test applied to events distributed with \(\mu=2.1\) and \(\mu=1.5\). Clearly the test reveal no significant memory between the events, so acpeting the renewal hypothesis of the underlying process even when the distribution does not have some or any finite moments.
Also in this case it is important to address the domain for \(t_{a}\), but as regard with the case of the power law coefficient \(1<\mu<2\) we do not have a finite mean-time of the waiting times, so we should always check the l0w-sample situation numerically.
### Synthetic Non-Renewal Processes
We will generate surrogate sequences with a marginal distribution of correlated inter-event intervals in order to obtain a surrogate process which is not renewal (Farkhooi et al., 2009). A typical history-dependent process can be modeled by an autoregressive AR process within the limits of stationarity and ergodicity conditions (Brockwell and Davis, 2013) and a general form of the autoregressive process with serial dependence up to a finite lag \(L\) reads:
\[X_{s}=c+\beta_{1}X_{s-1}+\beta_{2}X_{s-2}+\ldots+\beta_{L}X_{s-L}+\epsilon_{s} \tag{16}\]
where \(\epsilon_{s}\) is assumed to be iid variable with the specific mean and finite variance, \(\beta_{i}\) are the correlation parameters for each specific lag, and \(c\) a constant. For our purpose of generating a surrogate non-renewal process, we will only take \(\beta_{1}=\beta\neq 0\) in the stationary case of \(|\beta|<1\) so we have the AR(1) process as:
\[X_{s}=c+\beta X_{s-1}+\epsilon_{s} \tag{17}\]
Figure 7: Renewal hypotheis testing via aging experiment for a poissonian point process with exponential inter-arrival time intervals with rate 1. we can notice that the renewal property is never rejected
where \(\epsilon_{s}\) is taken normally distributed with zero mean and unit variance.
At this point, as an example, we can mimic the inter-arrival time periods in two different ways, a linear transformation of AR model and an exponential transformation of \(X_{s}\) in the case \(c=0\). The correlation structure dies off geometrically as the lag increases.
In the first case the inter-event times intervals could be taken as:
\[\Lambda_{s}=\left|X_{s}-E(X_{s})\right| \tag{18}\]
so that the waiting times are positive and \(E[\Lambda_{s}]=\sqrt{2\sigma_{X}^{2}/\pi}\) where \(\sigma_{X}^{2}=\sigma_{\epsilon}^{2}/(1-\beta^{2})\).
The _XA_ plots applied to linear auto-regressive waiting times is then plotted in Fig,9, where it is possible clearly see that
In the other case of exponential transformation (Granger and Newbold, 1976), we define the inter-arrivals time intervals as:
\[\Delta_{s}=e^{X_{s}}=e^{\beta X_{s-1}+\epsilon_{s}} \tag{19}\]
where \(\Delta_{s}\) is the series of correlated intervals, \(\beta\) describes the negative serial dependence of the series \(X_{s}\) and \(\epsilon_{s}\) is an iid normal variable with zero mean and unit variance. The resulting log-normal distribution of \(\Delta\) has mean and variance as:
\[E[\Delta_{s}] =e^{\frac{1}{2(1-\beta^{2})}} \tag{20}\] \[\text{Var}[\Delta_{s}] =e^{\frac{1}{2(1-\beta^{2})}}\left(e^{\frac{1}{2(1-\beta^{2})}}-1\right) \tag{21}\]
In such process the rate of events is \(\lambda=1/E[\Delta_{s}]\) so the maximum ages where we can perform the test is \(t_{a}^{max}=\lambda T/30\) under the condition \(\lambda>1/\sqrt{e}\). Let us now perform our renewal statistical test on such kind of non renewal test, in the specific choice of the AR model parameter of \(\beta=0.674\) so that \(\lambda=0.4\), and a simulation length of \(T=10^{4}\), we obtain a \(t_{a}^{max}\), in Fig.10 we recover the evidence against the null hypothesis that the process is renewal since all the ages, the geometric mean points are always outside and below the confidence stripe of the null hypothesis: this confirms the presence of intense memory in the event process.
At this point, we apply our renewal test to a class of non-homogeneous poisson process where the instantaneous event rate is modulated by the past occurrence of events, breaking any renewal property in the process. In particular, among the possible self-exciting models. we select an Hawkes process with exponential kernel 5 so that the event rate
Figure 8:
is defined as:
\[\lambda(t)=\lambda_{0}+\sum_{j:t_{j}\sim\alpha}\alpha e^{-\beta(t-t_{j})} \tag{22}\]
so that each arrival of an event in the system increases the arrival intensity by the factor \(\alpha\), after the event, the arrival's influence decays at rate \(\beta\).
The process is stationary if \(\alpha<\beta\) and we have that \(\overline{\lambda}=\frac{\beta}{\beta-\alpha}\lambda_{0}\) is the average rate of events6. Choosing \(T=4e3\) and \(\lambda_{0}=0.75\), \(\alpha=0.2\), \(\beta=0.4\), the maximum age we have \(t_{a}^{max}=\overline{\lambda}T/30\approx 10^{2}\). We plot in 11 the _XA_ test, in which we can see two an initial not-renewal feature of the system for short ages up to the order of the exponential decay in the memory kernel of Hawkes process. While as global test one have to reject the renewal assumption, the _XA_ plot allows to check the renewal conditions a different temporal scales: in this cases, a short time scale we detect memory between events, but, after a transition, we see how at large time scale the events looks without any memory.
Footnote 6: Notice that in the case of \(\alpha>\beta\), no mean rate (\(\overline{\lambda}\)) of events is defined, and we should use the same procedure as in the power-law inter-event case, where we numerically check the low-sample condition.
Figure 9:
### Superposition of events
The case of the Hawkes process described above, is a typical case of spurious process with mixed behavior with renewal and not renewal patterns at different time scales.
However, one can go beyond a single process which produces events of different memory scales. One can in principle have a series of events generated by different underlying processes. There are, in fact, many ways to produce generalized renewal processes (Cox, 1965, ch.9), for our purpose we select the specific case of processes' superposition (Cox and Smith, 1954; Cinlar and Agnew, 1968; Teresalam and Lehoczky, 1991). It consists in considering the case where there is a number of independent sources at each of which events occur from time to time.
Let \(\{A_{n}\}\) and \(\{B_{n}\}\) be two independent point processes in general with different interrenewal distributions. The
Figure 11: Surrogate of a stationary Hawkes process with exponential kernel. \(T=4e3\) and \(\lambda_{0}=0.75\), \(\alpha=0.2\), \(\beta=0.4\). We can observe that the geometric mean points have a transition from a intense memory to a renewal condition passing through the the values from which the exponential kernel decays.
Figure 10: Surrogate not-Renewal exponential AR process so that the rate of events is \(\lambda=0.25\). The renewal hypothesis testing via aging experiment reject the renewal assumption as an overall result of the test.The gray circles are the geometric means for each age, and they clearly do not fluctuate around the expected value (dashed line) under the null hypothesis.
pooled process has a number of events as:
\[N(t):=N_{A}(t)+N_{B}(t)=\sum_{n=1}^{\infty}\left[1_{\{0,t\}}(A_{n})+1_{\{0,t\}}(B _{n})\right].\]
In general, the correspondent point process is renewal for only particular situations (Ferreira, 2000). For example the superposition of two poisson renewal processes produce a pooled process that is renewal and with the sum of the rates of the original exponential inter-arrival distributions.
We take a particular case in order to produce a given pattern of events' sequence not present in the previous examples. For that reason, let us take two independent point processes: one is renewal and the other is not renewal, the resulting superposition of the twos, as show in Fig.12 is a process formed by pooling the two types of events.
In particular, let us take process \(A\) as a renewal poisson process and the process \(B\) as a not renewal process (as in the auto-regressive case). We, in particular, consider the case when the rates of the two sources of events are \(\lambda_{A}>\lambda_{B}\) and in a particular case the renewal process has a rate \(\lambda_{A}=8\) and \(\lambda_{B}=0.75\), so the two different scale are at least one order of magnitude different.
The results of \(X\!A\) test is shown in Fig.13 where it is clear that, or that particular case, the XA plots show a clear two time-scales at which the process have events without memory at small ages, and instead it shows renewal events for larger ages. The transition in memory happens ate ages closer to the average inter-arrival times of the not-renewal processes where its rate dominate the higher rates renewal events. However it is important to notice that in general we cannot infer from XA plots the properties original sources of the pooled sequence. One can have many types of superpositions. The only inference one could make using the \(X\!A\) test is to possibly detecting time-scales for which the events show memory. Further, the response of the test as in Fig.12
Superposition of point processes is an important class of stochastic processes for its wide range of possible applications where sequences of activities arrive at a central collector of events from a number of independent sources. For example, in production networks, during an industrial stage, several machines operate independently in parallel. The sequence of times at which items are produced follow a superposition of processes. In managing the next stage of production one can find useful to know the properties of the pooled sequence of events. Another application of a superposition of renewal processes is used to model the effect of imperfect maintenance (Kallen et al., 2010) and arrival processes applied to model queue behaviors (Albin, 1984).
## 6 Single sequence of events: approximated \(X\!A\) test
In many practical case, the researcher has only single (or few) observation of the process, in that case our exact renewal test is not applicable since it requires the assumption of many independent samples. In the worst case scenario, one have data with a single observation of the events and consequently only one realization of the point-process time series.
However, in such situation, it is possible to set up a statistical technique based on the available data which extend the use of our exact \(X\!A\) test to an approximated test valid for single realizations of events' sequences.
Figure 12: Superposition of two independent sources of events. The process \(A\) generates renewal events, the process \(B\) generates not-renewal events. The pooled process is a spurious sequence of events.
In the case that the data consist only in one sequence it can be considered as a single realization of the process generating the events. Moreover, we also consider the worst case scenario where the observation is made up of few events implying so a low statistics of the sample. The most crucial statistical problem in applying the exact _XA_ test is that one has to guarantee the independence among each pair of samples in the two-sample confide nce test (for example in the Kolmogorov-Smirnov test).
In this case, we propose two combined resampling techniques which try to minimize the dependence in the data due to the fact that the hypothesis test has to be performed on a single observation of the process.
Essentially, we use split the original sequence making as many as independent randomized sub-samples from to original realization and then perform two-samples test without using the K-S test.
There are several popular resampling techniques which are often used in computational statistics and machine learning (Good, 2004) and which can be used to build an approximated version of the the _XA_ test. We have focuses our study on two resampling methods for single observation test: one method (_bootstrapping_) is used to create independent samples and another randomization technique (_permutation test_) will be used to perform the statistical test replacing the Kolmogorov-Smirnov test which suffers of small, not independent and discrete samples. The only assumption made up by resampling approaches is that the observed data are a representative sample from the underlying population but no assumptions are made on the population distribution and parameters.
The difference between the exact _XA_ test and its approximated version is focused on the two-sample significance test as in Fig.2. Since we cannot generate other sequences of the inter-arrival times under the null hypothesis we can infer the behavior of the population by the only observation we have, bootstrapping the distribution of the inter-events times from the observed events, so obtaining estimates of \(p\)-values from the two-samples tests and finally the geometric means variable \(\tilde{g}_{p}(t_{a})\).
Surrogate events from BootstrappingLet us call \(L\) the data size of the sample set (i.e. the length of events' time series), \(\{\tau_{1},\tau_{2},\ldots,\tau_{L}\}\). We split the sample set in \(t_{w}\) not-overlapping windows to create replica of the process with sub-samplings of the original dataset. In such way, \(N=L/t_{w}\) sub-samples are considered as independent realizations of the waiting times's sequence of events. Since we use the two-sample significance test, we need independent samples for the shuffled (no-memory) waiting times. Without knowledge of real distribution of the observed \(\tau_{i}\)'s, we use the empirical cumulative density function \(F_{n}(\tau)\) as an estimate of the original cdf \(F(\tau)\). Subsequently, we sample from empirical density function using a random generator of events which is equivalent to sampling with replacement from originally observed sequence. Sampling from \(F_{n}(\tau)\) is equivalent to sampling with replacement from originally
Figure 13: Superposition of two independent sources of events. The process \(A\) generates renewal events, the process \(B\) generates not-renewal events. The pooled process is a spurious sequence of events. In the case (a) the renewal process generates events with average inter-event intervals \(\langle\tau\rangle_{A}=1/\lambda_{A}=0.125\) and the not-renewal events have inter-arrival intervals \(\langle\tau\rangle_{B}=1/\lambda_{B}=1.\bar{3}\). In the case (b) the renewal process generates events with average inter-event intervals \(\langle\tau\rangle_{A}=1/\lambda_{A}=0.5\) and the not-renewal events have a mean inter-arrival time interval \(\langle\tau\rangle_{B}=1/\lambda_{B}=0.125\).
observed inter-events times. The aging experiments and the two sample tests will be computed comparing the samples in one of the \(N\) windows with other samples of the same size \(t_{w}\) drawn from the bootstrapped distribution. In this way we can guarantee independence among the two distributions in the two-sample test, i.e. the samples within scrolling windows versus the bootstrapped samples.7.
Footnote 7: In alternative we also used \(k\)-fold Cross-Validation resampling technique: the observed dataset is partitioned into \(k\) groups, where each group is given the opportunity of being used as a held out test set leaving the remaining groups as the training set. As results the bootstrapping technique produce a more extended age \(t_{a}\) allowing larger time-scales to be explored.
Two-sample Permutation tests.As statistical significance test the distribution of the test statistic \(D_{m,n}\) in eq.(5) under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points. In the specific of the two-sample problem we will replace the non-parametric Kolmogorov-Smirnov test with a permutation test which can be used without any assumptions on the distribution of data. In fact, a permutation test gives a simple way to compute the sampling distribution for any test statistic under the null hypothesis. The statistical significance of the permutation test, as expressed in a \(p\)-value in eq.(8), is calculated as the fraction of permutation values that are at least as extreme as the original statistic, which was derived from non-permuted data. If the null hypothesis is true the shuffled (randomized) data sets should look like the observed data (within the time windows), otherwise they should look different from the real data. Despite that the permutation test is an exact test, it usually could be extremely costly in terms of computational resources. In particular, choosing the same number of samples in each sequence, there are exactly \(s=\binom{2t_{w}}{t_{w}}\) ways of randomly allocating \(t_{w}\) of the observed time-intervals and the remaining \(t_{w}\) of the bootstrapped intervals. Since the exact permutation test can be computationally intensive, we also allow the use of an empirical method directly couples both the minimal obtainable \(p\)-value and the resolution of the \(p\)-value to the number of permutations. Thereby, one can impose a maximum number of permutations, i.e. \(s=1000\), we have that \(p=1/s=0.001\) is the smallest possible \(p\)-value. However if the length \(L\) of the single realization of the events is very large the sample-size can be enough to perform the Komlogorov-Smirnov test or any parametric two-sample test instead of the very expensive permutation test.
Single realization XA plots.The approximated \(XA\) plot for single realization is obtained in the same way as in the exact \(XA\) test, expect that in Fig.2 one should take \(A_{i}\) as sample of size \(t_{w}\) in a given time interval of the entire sequence, and the sample \(B_{i}\) is the sequence of events generated by the bootstrapped distribution. Moreover the two-sample test could be performed using a permutation test other than the non parmametric Kolmogorov-Smirnov test.
Another important difference in performing the approximated \(XA\) test is given by the presence of the new parameter \(t_{w}\) which determines a series of consequences summarized in Table 2; the fact that we split the entire unique sequence in many pieces introduces more constraints in the test which are not present in the exact \(XA\) test as described in the caption.
The main limitations of the approximated \(XA\) test for single realizations, are due to a short range of the time-scale one could explore, low maximal age \(t_{a}^{max}\) and a less power of the test related mostly to the limited spatial resolution \(N\) which determines the precision of our confidence about memory in the process.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(t_{w}\) & \(t_{a}^{min}\) & \(t_{a}^{max}\) & \(T_{a}\) & \(N\) \\ \hline \hline _accuracy_ & \(\min\{\tau_{i}\}\) & \(\sim\min\{L/\langle\tau\rangle\,,\,t_{w}\}\) & \(\lesssim 1/\min\{\tau_{i}\}\) & \(\mathit{dof}\approx L/t_{w}\) \\ \hline & smallest & largest & _temporal resolution_ & \\ & memory block & memory block & test’s sample size & statistical precision \\ \hline \end{tabular}
\end{table}
Table 2: Parameters of the single realization \(XA\) test. One one have a single observation, \(t_{w}\) is a new parameter which derives from the fact that we split the unique observed sequence in non-overlapping intervals and \(t_{w}\) indicates the number of events per window \(t_{w}=L/N\gg 1\). Then, \(T_{a}\) is the number of \(t_{a}\)’s used in the repeated tests representing the sample size of geometric means in the whole statistical procedure. But in the case of single realization \(XA\) test the step-size \(\delta_{t_{a}}=(t_{a}^{max}-t_{a}^{min})/T_{a}\) should be chosen in such a way to keep the aged-sequences independent so \(\delta_{t_{a}}\gg\min\{\tau_{i}\}\). On the other side, also \(N\) is constrained to the data sample size \(L\) as trade-off between precision of the test when \(N\) is large and the maximum time scale we can explore \(t_{a}^{max}\) which is related to our choice of \(t_{w}\). When one increases \(t_{w}\) it is possible to explore the memory on a larger time scale but with the effect of poor precision since \(N\) becomes smaller. In the other way around, increasing the precision mean reducing \(t_{w}\) and so the time-scale of the memory blocks.
At this purpose, we re-compute some of the synthetic case in the exact _XA_ test in the single-realization case. The approximated _XA_ plots in Fig.25 clearly shows the ability of the test in detecting memory in the synthetic sequence of inter-arrival times.
As last resort of correlated events, one should take in account which the meta-observation we reconstruct are not fully independent for many reasons, it is not correct use the Fisher's approach to multiple comparison of p-value. In such situation all the tests have something in common and are considered as a family of tests In such a case the adjustment methods try to ensure that the chances of a Type I error are maintained below the claimed size of the test. In such a corrected Bonferroni method of multiple p-values will be more reliable since the method will not claim significance unless some individual tests do.
## 7 Conclusions
The main advantage is that one does not have to worry about distributional assumptions of classical testing procedures; the disadvantage is the amount of computer time required to actually perform a large number of permutations, each one being followed by re-computation of the test statistic. Despite In terms of future applications on supercomputers and high performance computing, the combined use of speed processors, parallel techniques and GPU accelerations would allow users to perform any computational statistical test using the permutation method.
|
2309.03139 | Using Multiple Vector Channels Improves E(n)-Equivariant Graph Neural
Networks | We present a natural extension to E(n)-equivariant graph neural networks that
uses multiple equivariant vectors per node. We formulate the extension and show
that it improves performance across different physical systems benchmark tasks,
with minimal differences in runtime or number of parameters. The proposed
multichannel EGNN outperforms the standard singlechannel EGNN on N-body charged
particle dynamics, molecular property predictions, and predicting the
trajectories of solar system bodies. Given the additional benefits and minimal
additional cost of multi-channel EGNN, we suggest that this extension may be of
practical use to researchers working in machine learning for the physical
sciences | Daniel Levy, Sékou-Oumar Kaba, Carmelo Gonzales, Santiago Miret, Siamak Ravanbakhsh | 2023-09-06T16:24:26Z | http://arxiv.org/abs/2309.03139v1 | # Using Multiple Vector Channels Improves E(n)-Equivariant
###### Abstract
We present a natural extension to E(\(n\))-equivariant graph neural networks that uses multiple equivariant vectors per node. We formulate the extension and show that it improves performance across different physical systems benchmark tasks, with minimal differences in runtime or number of parameters. The proposed multi-channel EGNN outperforms the standard single-channel EGNN on N-body charged particle dynamics, molecular property predictions, and predicting the trajectories of solar system bodies. Given the additional benefits and minimal additional cost of multi-channel EGNN, we suggest that this extension may be of practical use to researchers working in machine learning for the physical sciences.
Machine Learning, ICML
## 1 Introduction
Designing neural network architectures that correctly account for the symmetry of physical laws is an important requirement for applications of artificial intelligence in science. In particular, for dynamical systems in physics, the relevant properties transform invariantly or equivariantly under Euclidean transformations. This is also the case when modelling particles or atomistic systems, for which machine learning simulators are already widely used.
There now exist a wide variety of equivariant neural network architectures leveraging a diverse set of mathematical formulations, among which we highlight two important classes. First, some architectures apply spherical harmonics mappings to incorporate directional information in an equivariant way (Thomas et al., 2018; Fuchs et al., 2020). These architectures have the advantage of being highly expressive but are also computationally expensive and challenging to implement. Second, we highlight models that fall under the equivariant multilayer perceptron (E-MLP) paradigm (Finzi et al., 2021). The idea behind this paradigm is to simply generalize standard multilayer perceptrons by composing equivariant linear layers with appropriate non-linear functions. These architectures are much simpler to work with and more computationally efficient than those using spherical harmonics, but in principle, require lifting input quantities to high-order tensors to achieve high expressivity (Finkelstein et al., 2022). Prior works, however, show that one can achieve satisfactory modelling performance without requiring higher-order representations. One such example is the Vector Neurons (Deng et al., 2021) model, which can be seen as an equivariant multilayer perceptron with order-1 vector features. This architecture also leverages the fact that the number of vector channels (neurons) in each layer can be arbitrary to increase expressivity.
The E(\(n\))-equivariant Graph Neural Network (EGNN) model by Satorras et al. (2021) is an example of a model that does not clearly fit into one of the categories above. Nevertheless, EGNN has become widely applied mainly due to efficiency and simple model design. EGNN uses the message-passing framework, which captures the inductive bias that sparse interactions between entities should lead to better generalization. EGNN also has the advantage of separating equivariant features into a separate channel that only follows equivariant operations. The work of (Brandstetter et al., 2022) extends EGNN by using ideas inspired by spherical-harmonics-type architectures. Their Steerable E(\(n\))-Equivariant Graph Neural Network (SEGNN) achieves better performance across some benchmarks but suffers from similar conceptual shortcomings in addition to increased computational complexity.
In this paper, we explore the direction of generalizing EGNN by drawing from E-MLP-type architectures. EGNN only updates a single vector for each node in the graph over each layer. A natural way to increase the expressivity of this model is to make the number of vector channels arbitrary. In our experiments, we show that this change alone
leads to an important increase in performance for some physical modelling tasks. This multi-channel extension also retains the simplicity and computational efficiency of the original architecture and makes intuitive physical sense: the network may use the different channels to store additional physical quantities relevant to the prediction task.
We note that GMN (Huang et al., 2022) proposes to use multiple channels as part of a generalized EGNN-like model, as does GVP-GNN (Jing et al., 2021). However, since this is one contribution amongst several others in GMN, and GVP-GNN the advantage of using multiple channels is not clear. Here we show that we can obtain significant benefits only with the additional channels. In this short paper, we highlight that simply adding multiple channels to EGNN can lead to a significant performance increase compared to much more expensive methods such as SEGNN. We believe this result should be of use to practitioners looking to preserve the advantages of EGNN.
## 2 Method
Following the formulation of EGNN, we assume that the model operates on graphs embedded in \(n\)-dimensional Euclidean space (typically \(n=3\)). To each node is associated coordinates \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) and node features \(\mathbf{h}_{i}\in\mathbb{R}^{d_{h}}\). Edge features \(\mathbf{a}_{i,j}\in\mathbb{R}^{d_{e}}\) can also be considered. The original EGNN layer is defined by the following equations:
\[\mathbf{x}_{ij} =\mathbf{x}_{i}-\mathbf{x}_{j} \tag{1}\] \[\mathbf{m}_{ij} =\mathbf{\phi}_{e}\left(\mathbf{h}_{i},\mathbf{h}_{j},||\mathbf{x}_{ij}||^{2}, \mathbf{a}_{ij}\right)\] (2) \[\mathbf{x}_{i}^{t+1} =\mathbf{x}_{i}^{t}+C\sum_{j\in\mathcal{N}(i)}\mathbf{x}_{ij}^{t}\phi_{x} (\mathbf{m}_{ij})\] (3) \[\mathbf{h}_{i}^{t+1} =\mathbf{\phi}_{h}(\mathbf{h}_{i}^{t},\sum_{j\in\mathcal{N}(i)}\mathbf{m}_{ ij}) \tag{4}\]
where \(\mathbf{\phi}_{e}:\mathbb{R}^{d_{h}+d_{h}+1+d_{e}}\rightarrow\mathbb{R}^{d}\), \(\phi_{x}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) and \(\mathbf{\phi}_{h}:\mathbb{R}^{d_{h}+d}\rightarrow\mathbb{R}^{d_{h}}\) are multilayer perceptrons (MLPs) and \(\mathcal{N}(i)\) is the neighborhood of node \(i\).
We define the Multi-Channel E(\(n\))-Equivariant Graph Neural Network (MC-EGNN) by replacing \(\mathbf{x}_{i}\) with the matrix \(\mathbf{X}_{i}\in\mathbb{R}^{3\times m}\), where \(m\) is the number of vector channels. Using this, we modify the above equations as follows:
\[\mathbf{X}_{ij} =\mathbf{X}_{i}-\mathbf{X}_{j} \tag{5}\] \[\mathbf{m}_{ij} =\mathbf{\phi}_{e}\left(\mathbf{h}_{i},\mathbf{h}_{j},||\mathbf{X}_{ij}||_{c}^{2},\mathbf{a}_{ij}\right)\] (6) \[\mathbf{X}_{i}^{t+1} =\mathbf{X}_{i}^{t}+C\sum_{j\in\mathcal{N}(i)}\mathbf{X}_{ij}^{t}\Phi_{x }(\mathbf{m}_{ij}) \tag{7}\]
for MLPs \(\mathbf{\phi}_{\mathbb{F}}:\mathbb{R}^{d_{h}+d_{h}+m+d_{e}}\rightarrow\mathbb{R}^ {d}\) and \(\mathbf{\Phi}_{x}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m\times m}\), where \(m^{\prime}\) is the output channel dimension. \(||\mathbf{X}_{ij}||_{c}\) denotes the channel-wise Euclidean norm. Equation 4 stays the same. We set \(m=1\) for the first and last layer to use a single vector for each node's inputs and outputs. The modification to EGNN does not affect the equivariance properties of the architecture since the Euclidean and permutation groups do not act over the channels dimension.
## 3 Experiments
### Solar System Predictions
#### 3.1.1 Dataset
To investigate the necessity of different number of vector channels, we perform experiments on prediction of dynamics on an N-body system based on the solar system. We look at a dataset of 30 years of real observations of the 31 highest-mass bodies in the solar system (including the sun, 8 planets, and 22 moons) curated by Lemos et al. (2022) and sourced originally from NASA Horizons 1. We set up
Figure 1: Visualization of the Solar Systems dynamics dataset. Solid circles represent initial positions of celestial bodies and traces their training trajectories.
the task of simultaneously predicting the positions of each of the bodies 60 Earth-days into the future, given each of their positions, velocities, and log-masses.
We constructed a train/validation/test split using three subsequent years of data, with one year of data for each partition. We provide a visualization of sample training set trajectories at Figure 1. We trained our models on a fully connected graph of the solar system bodies using a mean squared error loss normalized by the true distance between the initial and final positions of each bodies. This is done to account for the broad ranges in velocities in the system.
#### 3.1.2 Results
Most of the predictions made in this task involve moons orbiting planets while also orbiting the sun, and so we hypothesized that 3 vectors would be needed to approximate their dynamics efficiently: to keep track of the coordinates, the angular momentum around the sun, and the angular momentum around the planet. Note that velocity is already considered since we use a variant of the multi-channel EGNN described in Appendix A.
We first conducted a hyperparameter search using a one vector-channel EGNN to maximize its performance on the validation set, and then used those hyperparameters when testing the EGNN models with 2, 3, and 5 vector channels.
Our results, shown in Table 1, validate our hypothesis. While using 2 vector channels improves over using 1, it takes 3 vector channels for the model to achieve its highest performance. Note that the difference in performance between using 3 and 5 channels is not statistically significant, it is therefore not crucial to tune this parameter to an exact value. Figure 2, shows clearly that original single channel EGNN model is not able to provide an accurate estimate of the future and is widely off the trajectory.
### Charged Particles System
We also compare againt other models using a widely used benchmark. In the Charged Particles N-body experiment (Kipf et al., 2018), the task is to predict the positions of charged particles several timesteps into the future, given their charges, positions, and velocities.
We use the variant created by (Satorras et al., 2021), which consists of 3,000 training samples, each consisting of a system of 5 particles with their charges and 3d coordinates and velocities given, and we train our network on a loss of the mean squared error of the particle's position after 1000 timesteps. We also use the velocity version of the multi-channel EGNN in this experiment. For this experiment, we use the implementation and hyperparameters of the EGNN used by (Satorras et al., 2021). The only modification we make is to incorporate multiple vector channels. Specific architecture details and hyperparameters are listed in Appendix C.1.
\begin{table}
\begin{tabular}{l l} \hline \hline \# of Vectors & Normalized MSE \\ \hline
1 & 0.109 \(\pm\) 0.051 \\
2 & 0.082 \(\pm\) 0.047 \\
**3** & **0.024 \(\pm\) 0.007** \\
5 & 0.030 \(\pm\) 0.008 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance on the solar system prediction task using differing number of vector channels. Performance is shown averaged across all 31 solar system bodies, and normalized by true distance.
\begin{table}
\begin{tabular}{l l} \hline \hline Method & MSE \\ \hline SE(3)-Tr (Fuchs et al., 2020) & 0.0244 \\ TFN (Thomas et al., 2018) & 0.0155 \\ NMP (Gilmer et al., 2017) & 0.107 \\ Radial Field (Köhler et al., 2019) & 0.0104 \\ CN-GNN (Kaba et al., 2022) & 0.0043 \(\pm\) 0.0001 \\ FA-GNN (Puny et al., 2022) & 0.0057 \(\pm\) 0.0002 \\ SEGNN (Brandstetter et al., 2022) & 0.0043 \(\pm\) 0.0002 \\ \hline EGNN (Satorras et al., 2021) & 0.0070 \(\pm\) 0.0005 \\
**MC-EGNN-2** & **0.0041 \(\pm\) 0.0006** \\ MC-EGNN-5 & 0.0043 \(\pm\) 0.0003 \\ MC-EGNN-10 & 0.0044 \(\pm\) 0.0005 \\ MC-EGNN-25 & 0.0048 \(\pm\) 0.0005 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test set MSE for the N-body experiment. Our results are averaged over 5 random seeds.
Figure 2: Predictions of the position of Venus for different numbers of vectors channels. The cross indicates the ground truth position.
negligible added cost: Table 4 (Appendix B.1) shows that for a small number of vector channels, the forward time and the model's number of parameters are largely unaffected.
### QM9 - Molecular Property Prediction
Lastly, we applied the EGNN the task of predicting chemical properties of small molecules. The QM9 dataset consists of 100,000 training samples of molecules, each described by the atom type and positions of their constituent atoms (Ramakrishnan et al., 2014). We use the same training setup as EGNN which facilitates comparison. The hyperparameters are detailed in Appendix C.2.
We predict 12 chemical properties using the multi-channel EGNN. In theory, the properties are entirely determined by the atom types and positions. Unlike in the N-body experiment or the solar system experiment, the predicted properties are coordinate-invariant. The output is therefore obtained by pooling the invariant node embeddings. The vector channels are only used in the intermediate layers, but as we report hereafter, they still contribute to increased performance on all targets compared to the standard EGNN. The results, shown in Table 3, demonstrate that performance is also comparable to SEGNN when using 8 vector channels.
## 4 Conclusion
We show here that adding multiple channels to the EGNN model leads to performance improvements in prediction tasks on physical systems, sometimes matching more complicated architectures. This is achieved without a significant increase in the forward runtime of the model because only a small number of vector channels are needed to obtain improvements.
This generalization could also be useful for tasks where the number of input vectors attached to each node or to predict is arbitrary (for example, when quantities such as angular velocity, spin, or polarization are included). Translationally invariant predictions can also be produced simply by removing the residual connection in the positions update equation.
We plan to investigate further whether there is a particular semantic meaning to the different vectors computed by the multi-channel model that makes it helpful. One possible downside we noticed with the multi-channel model was that training could be less stable when more vector channels were used. In practice, we found that gradient clipping could be used to help address this issue, but this was not used in our experiments. Analyzing the learned vectors could lead to better architecture design and hyperparameter selection.
We further plan to apply this model to larger datasets where we may have many more interactions than in any of the experiments tested in this paper. We believe that the relative computational efficiency of the method proposed here may allow it to prove useful in applications that were previously unattainable for E(\(n\)) equivariant neural networks.
|
2310.12157 | Desynchronization of large-scale neural networks by stabilizing unknown
unstable incoherent equilibrium states | In large-scale neural networks, coherent limit cycle oscillations usually
coexist with unstable incoherent equilibrium states, which are not observed
experimentally. We implement a first-order dynamic controller to stabilize
unknown equilibrium states and suppress coherent oscillations. The
stabilization of incoherent equilibria associated with unstable focus and
saddle is considered. The algorithm is demonstrated for networks composed of
quadratic integrate-and-fire (QIF) neurons and Hindmarsh-Rose neurons. The
microscopic equations of an infinitely large QIF neural network can be reduced
to an exact low-dimensional system of mean-field equations, which makes it
possible to study the control problem analytically. | Tatjana Pyragiene, Kestutis Pyragas | 2023-09-15T12:00:17Z | http://arxiv.org/abs/2310.12157v1 | Desynchronization of large-scale neural networks by stabilizing unknown unstable incoherent equilibrium states
###### Abstract
In large-scale neural networks, coherent limit cycle oscillations usually coexist with unstable incoherent equilibrium states, which are not observed experimentally. We implement a first-order dynamic controller to stabilize unknown equilibrium states and suppress coherent oscillations. The stabilization of incoherent equilibria associated with unstable focus and saddle is considered. The algorithm is demonstrated for networks composed of quadratic integrate-and-fire (QIF) neurons and Hindmarsh-Rose neurons. The microscopic equations of an infinitely large QIF neural network can be reduced to an exact low-dimensional system of mean-field equations, which makes it possible to study the control problem analytically.
keywords: Neural network; Mean-field equations; Synchronization control; Quadratic integrate-and-fire neurons; Hindmarsh-Rose neurons
## 1 Introduction
Synchronization studies in large populations of coupled oscillatory or excitable elements are relevant in fields ranging from physics to neuroscience [1; 2; 3; 4]. The role of synchronization in neural systems can be twofold. In a healthy state, it is responsible for learning and cognition [5; 6], however, excessive synchronization can cause a variety of neurological conditions such as Parkinson's disease [7], epilepsy [8; 9], tinnitus [10], and others. High-frequency (HF) deep brain stimulation (DBS) is a standard procedure for the treatment of neurological disorders [11; 12]. The mechanisms of DBS are not yet well understood [13; 14]. Simple models show that the HF DBS effect can be explained either as the result of stabilizing the resting state of individual neurons [15] or as suppressing synchronized oscillations without forcing individual neurons into silence [16]. HF DBS may cause side effects and its therapeutic effect may decrease over time, so there is a significant clinical need for less invasive and more effective stimulation methods [17]. In open loop control systems such as HF DBS, adverse effects on neural tissue can be reduced by optimizing the waveform of the stimulus signal [18; 19].
However, a number of theoretical works show that the desynchronization of coherent oscillations is especially effective with the help of closed-loop (feedback) control algorithms. Various control strategies based on linear [20; 21; 22; 23; 24] and nonlinear [25; 26; 27] time-delayed feedback, linear feedback bandpass filters [28; 29; 30], proportional-integro-differential feedback with a separate stimulation-registration setup [31], act-and-wait time-delayed feedback [32; 33] and others [34; 35; 36] were considered.
Recent advances in the theory of nonlinear dynamical systems have provided the neuroscience community with simple, low-dimensional models of neural networks referred to as next-generation neural mass models [37]. Such models are useful objects for developing, testing, and understanding various synchronization control algorithms. Here we show that these models can naturally explain the desynchronization mechanism of our feedback control algorithm in terms of stabilizing unknown unstable incoherent states. The next-generation models are derived directly from the microscopic dynamics of individual neurons and are accurate in the thermodynamic limit of infinite network size. These models represent a closed system of mean-field equations for biophysically relevant parameters such as mean membrane potential and firing rate. Low-dimensional dynamics in a large population of coupled oscillatory elements was first discovered by Ott and Antonsen [38] in the Kuramoto model [2]. Later, this discovery was successfully applied to derive a low-dimensional system of mean-field equations for a certain class of networks consisting of all-to-all pulse-coupled QIF neurons [39], which are canonical models of class I neurons [40].
In recent years, next-generation models have been obtained for a large number of different modifications of QIF
neural networks [41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. These models make it possible to carry out their detailed bifurcation analysis and reveal synchronization mechanisms. It has been shown that synchronized limit cycle oscillations can arise from various bifurcations, such as the Hopf bifurcation in Refs. [16; 42] or the homoclinic bifurcation in Ref. [42]. However, stable limit cycles are always accompanied by unstable fixed points, which correspond to unstable incoherent equilibrium states of the network. These unstable states are not observed experimentally. Here, we show that a priori unknown unstable incoherent states can be stabilized using the control algorithm proposed in Refs. [52; 53]. Initially, this algorithm was developed and tested to stabilize unknown unstable equilibrium states of low-dimensional dynamical systems, and recently it has been implemented to stabilize unstable pedestrian flows in the collective behavior of large crowds of people [54]. Here, we implement this algorithm to stabilize unstable incoherent states in large-scale neural networks consisting of QIF and Hinmarsh-Rose [55] neurons. We demonstrate effective control of two types of equilibrium states associated with an unstable focus and a saddle point. As far as we know, the control of saddle equilibrium states in neural networks has not been considered in the literature.
The paper is organized as follows. Section 2 describes the control algorithm. In Sec. 3, we apply this algorithm to a population of synaptically coupled excitatory QIF neurons. Here we stabilize incoherent states associated with an unstable focus and a saddle fixed point. The latter is stabilized by an unstable controller. Section 4 is devoted to the control of two interacting populations of excitatory and inhibitory QIF neurons. In Sec. 5, we apply our algorithm to a population of chaotically spiking Hindmarsh-Rose neurons, whose microscopic model equations cannot be reduced to a low-dimensional system. The conclusions are presented in Sec. 6.
## 2 Control algorithm
We consider a large network of coupled neurons generating collective coherent oscillations. We assume that, along with the synchronous mode of coherent oscillations, the network has an unstable equilibrium state characterized by incoherent oscillations of individual neurons. Our goal is to stabilize the incoherent state and transition the network from synchronous to incoherent mode. To achieve this goal, we turn to the algorithm for stabilizing unknown unstable equilibrium points of low-dimensional dynamical systems, developed in Refs. [52; 53]. The algorithm uses a simple first-order dynamic controller based on a low-pass filter (LPF). The block diagram of this algorithm, adapted for neural networks, is shown in Fig. 1.
We assume that the mean membrane potential \(v(t)\) of the entire or some part of the neural population can be measured at the output of the network. In addition, we assume that all or part of the population of neurons can be stimulated by the input current \(I_{c}(t)\). In general, the measured and stimulated subpopulations may differ. The input and output of the network are connected by a feedback loop described by the following equations:
\[\dot{w} = \omega_{c}(v-w), \tag{1a}\] \[I_{c} = k(w-v), \tag{1b}\]
where \(w\) is a dynamic variable of the controller (LPF). The control algorithm has two adjustable parameters: the cut-off frequency \(\omega_{c}\) of the LPF and the control gain \(k\). Let us denote the average membrane potential of the free network in a state of unstable equilibrium as \(v=v^{*}\), which in the thermodynamic limit should be a constant, \(v^{*}=const\). We assume that this value is a priori unknown. The control algorithm is designed in such a way that the equilibrium value of \(v^{*}\) remains unchanged in the stationary state of the closed loop system. Indeed, at \(\dot{w}=0\) the control variable coincides with the mean membrane potential \(w=w^{*}=v^{*}\), and the feedback perturbation vanishes, \(I_{c}=0\). However, feedback perturbation affects the stability of the incoherent state. The examples below show that this state can be stabilized by adjusting control parameters \(\omega_{c}\) and \(k\) accordingly.
This algorithm has a number of advantages. Firstly, it is weakly invasive. Below, we will show that the feedback perturbation \(I_{c}\) decreases according to a power law with increasing network size and vanishes as the network size tends to infinity. Secondly, this algorithm does not require knowledge of the mean membrane potential \(v^{*}\) of an unstable equilibrium state and, thirdly, the algorithm provides tracking of the equilibrium state in the case of slowly varying system parameters [53].
Note that the control algorithm with an ordinary LPF (\(\omega_{c}>0\)) has a limitation. It works well for unstable equilibrium points like focuses but doesn't work for saddles. More precisely, Ref. [52] gives a theorem that a stable controller cannot stabilize unstable equilibrium points with an odd number of real positive eigenvalues. This limitation can be avoided by using an unstable controller in the same way as it is done in the delayed feedback control algorithm [56] when
Figure 1: Block diagram of stabilization of unknown incoherent states in neural networks. The mean membrane potential \(v(t)\) represents the output of the network. The network is stimulated by the input current \(I_{c}(t)\). In a feedback loop, LPF stands for low-pass filter.
stabilizing a certain type of unstable periodic orbits [57]. Here, to stabilize an unstable incoherent state of the saddle type, we will use an unstable LPF with the parameter \(\omega_{c}<0\). An unstable LPF can be implemented using an RC circuit with a negative resistor.
In the following sections, we will demonstrate the performance of this algorithm for three examples of neural networks. The first two examples deal with large populations of synaptically coupled QIF neurons. In the limit of infinite size, microscopic models of these networks can be reduced to exact low-dimensional systems of mean-field equations. In the first example, one population of excitatory neurons is considered, and in the second example, two interacting populations of excitatory and inhibitory neurons are analyzed. The third example is devoted to electrically coupled chaotic Hindmarsh-Rose neurons.
## 3 Controlling a population of synaptically coupled excitatory QIF neurons
First, we apply the algorithm described above to a heterogeneous population of QIF excitatory neurons interacting via finite-width synaptic pulses [42]. The microscopic state of the population is defined by the set of \(N\) neurons' membrane potentials \(\{V_{j}\}_{j=1,\ldots,N}\). They satisfy the following set of equations [40]:
\[\dot{V}_{j} = V_{j}^{2}+\eta_{j}+Js(t)+I_{c}(t), \tag{2}\] \[\mbox{if }V_{j}\geq V_{p}\mbox{ then }V_{j}\gets V_{r}.\]
Here, \(\eta_{j}\) is a heterogeneous excitability parameter that specifies the behavior of individual neurons and the term \(Js(t)\) stands for the synaptic coupling, where \(J\) is the synaptic weight and \(s(t)\) is the normalized mean synaptic current emitted by spiking neurons. The term \(I_{c}(t)\) describes an external current, which we interpret as a control variable. In this model, the membrane time constant of QIF neurons is assumed to be unity. This means that time here is measured in units of the membrane time constant.
For \(J=0\) and \(I_{c}=0\), the neurons with the parameter \(\eta_{j}<0\) are at rest, and the neurons with the parameter \(\eta_{j}>0\) generate spikes. When the potential \(V_{j}\) reaches the threshold value \(V_{p}\), it is instantly reset to the value \(V_{r}\). We choose thresholds in the form \(V_{p}=-V_{r}=\infty\), which allows us to transform QIF neurons into theta neurons and obtain an accurate system of reduced mean-field equations [39]. We consider the case when the heterogeneous parameter \(\eta\) is distributed according to the Lorentzian density function
\[g(\eta)=\frac{1}{\pi}\frac{\Delta}{(\eta-\bar{\eta})^{2}+\Delta^{2}}, \tag{3}\]
where \(\Delta\) is the half-widths and \(\bar{\eta}\) is the center of the distribution. For the Lorentzian heterogeneity, the reduction of microscopic equations is the most efficient. Note that other distributions of the heterogeneous parameter have been considered in recent publications. [49, 50].
Here we use the model of global coupling in which neurons emit synaptic pulses of finite width with the mean synaptic current defined as [42]
\[s(t)=\frac{V_{th}}{N}\sum_{i=1}^{N}H(V_{i}(t)-V_{th}), \tag{4}\]
where \(H(\cdot)\) is the Heaviside step function and \(V_{th}\) is a threshold potential that determines the height and width of synaptic pulses.
In the limit \(N\rightarrow\infty\), the above microscopic model reduces to an exact system of two ordinary differential equations (ODEs) [42]
\[\dot{r} = \Delta/\pi+2rv, \tag{5a}\] \[\dot{v} = \bar{\eta}+v^{2}-\pi^{2}r^{2}+Js(t)+I_{c}(t) \tag{5b}\]
for two biophysically relevant parameters, the mean spiking rate \(r(t)\) and the mean membrane potential \(v(t)\). In the infinite size limit, the mean synaptic current (4) is expressed in terms of the parameters \(r(t)\) and \(v(t)\) as [42]
\[s(t)=\frac{V_{th}}{\pi}\left[\frac{\pi}{2}-\arctan\left(\frac{V_{th}-v(t)}{ \pi r(t)}\right)\right]. \tag{6}\]
This expression closes the system of mean-field Eqs. (5). The bifurcation analysis of these equations without control \(I_{c}(t)=0\) was carried out in Ref. [42]. This analysis showed that synchronous limit cycle oscillations can occur through two types of bifurcations: the Hopf bifurcation and the homoclinic bifurcation. In the first case, the system (5) has a stable focus before the bifurcation. On a microscopic level, this corresponds to a stable equilibrium state of the network with incoherent dynamics of individual neurons. After the bifurcation, the incoherent equilibrium state becomes an unstable focus, and neurons exhibit coherent limit cycle oscillations. Our goal here is to bring back the incoherent dynamics by stabilizing the unstable equilibrium state. In the case of a homoclinic bifurcation, the limit cycle touches the saddle point and becomes a homoclinic orbit. Near this bifurcation, we will suppress coherent oscillations by using an unstable controller to stabilize the incoherent state of the saddle equilibrium.
We begin the application of our control algorithm from the case of limit cycle oscillations arising from the Hopf bifurcation. We use typical system parameters corresponding to this mode [42]: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=2\), and \(J=20\). For these parameters, the only attractor in the two-dimensional phase space \((r,v)\) of the free (\(I_{c}=0\)) system (5) is the limit cycle. Inside this cycle there is an unstable focus with the coordinates \((r^{*},v^{*})\approx(2.1081,-0.0755)\) and two complex-conjugate eigenvalues \(\lambda_{1,2}\approx 0.2621\pm 9.6190\). Let us now estimate how the local properties of this fixed point change in the presence of a control defined by the Eq. (1). Due to the additional variable \(w\), the phase space of the closed loop system is expanded to three dimensions: \((r,v,w)\). The coordinates of the fixed point in the three-dimensional phase space are \((r^{*},v^{*},v^{*})\), i.e. its projection onto the original two-dimensional phase space remains unchanged. However,
the stability properties of this fixed point now depend on the controller parameters \(\omega_{c}\) and \(k\) and are determined by the eigenvalue problem
\[\det(A-\lambda I)=0 \tag{7}\]
of the linearized system of Eqs. (5) and (1). Here
\[A=\begin{pmatrix}a_{11}&a_{12}&0\\ a_{21}&a_{22}-k&k\\ 0&\omega_{c}&-\omega_{c}\end{pmatrix} \tag{8}\]
is the Jacobian matrix of this system, \(a_{ij}\) are the coefficients of the Jacobian matrix of of the system (5) without control evaluated at the fixed point \((r^{*},v^{*})\). Specifically, \(a_{11}=2v^{*}\), \(a_{12}=2r^{*}\), \(a_{21}=-2\pi^{2}r^{*}+JV_{th}(\pi r^{*})^{-2}(V_{th}-v^{*})c^{-1}\) and \(a_{22}=2v^{*}+JV_{th}\pi^{-2}(cr^{*})^{-1}\), where \(c=1+[(V_{th}-v^{*})/(\pi r^{*})]^{2}\). Finally, \(I\) is the identity matrix, and \(\lambda\) is the eigenvalue.
For a given fixed point, the dependence of the solutions of the Eq. (7) on the parameters \(\omega_{c}\) and \(k\) is shown in Fig. 2. The colors encode the values of \(\max[\mathrm{Re}(\lambda)]\). The thick red contour line corresponds to \(\max[\mathrm{Re}(\lambda)]=0\). It separates regions of a stable and unstable fixed point. We see that the control algorithm is robust to the choice of control parameters \(\omega_{c}\) and \(k\). The algorithm provides stabilization of the unstable focus for any \(\omega_{c}>0\) and \(k\gtrapprox 0.55\).
Figure 3 shows the performance of the control algorithm for fixed values of \(\omega_{c}=1\) and \(k=2\). The thick gray curves show the dynamics of the free and controlled neuronal population obtained from the mean-field equations (5). During the time \(t<5\) the control is switched off and the system is in the mode of limit cycle oscillations. The mean membrane potential [Fig. 3(a)] and the mean spiking rate [Fig. 3(b)] show periodic oscillations. At \(t>5\) the control is activated and the oscillations are damped. The system approaches a stabilized equilibrium state. The control perturbation [Fig. 3(d)] experiences transient damped oscillations and vanishes asymptotically.
As a next step, we tested the performance of our algorithm for networks of finite size, described by the microscopic Eqs. (2). Unlike the low-dimensional mean-field Eqs. (5), the microscopic model is defined by a huge number of differential equations. The typical population sizes we model here are \(N\sim 10^{4}\) neurons. There is of course no _a priori_ guarantee whether the control algorithm will work for such high-dimensional systems. Numerical simulation is more convenient after changing variables
\[V_{j}=\tan(\theta_{j}/2), \tag{9}\]
which transforms QIF neurons into theta neurons. The advantage of theta neurons is that they avoid the discontinuy problem. When the membrane potential \(V_{j}\) of the QIF neuron rises to \(+\infty\) and falls to \(-\infty\), the theta neuron simply crosses the phase \(\theta_{j}=\pi\). For theta neurons, the Eqs (2) are transformed to
Figure 3: Suppression of coherent oscillations by stabilization of an unstable focus in a population of synaptically coupled QIF neurons. For \(t<5\), there is no control and the network generates collective coherent oscillations. For \(t>5\), the control is turned on and the system goes into a previously unstable incoherent state. The dynamics of (a) mean membrane potential, (b) mean spiking rate, and (d) control perturbation derived from the mean-field Eqs. (5) are shown as thick gray curves. The thin red curves show the same results derived from the microscopic model (10). (c) Raster plot of 200 randomly selected neurons. The spike moments for each neuron are shown by dots. The neuron numbers are shown on the vertical axis. The parameters of the network are the same as in Fig. 2. Controller parameters: \(\omega_{c}=1\) and \(k=2\). The microscopic model was simulated using \(N=10^{4}\) neurons.
Figure 2: The performance of the control algorithm depending on the control parameters \(\omega_{c}\) and \(k\). The results for an unstable focus in a population of synaptically coupled QIF neurons are presented. The contour lines and colors indicate the maximum real part of the eigenvalues \(\max[\mathrm{Re}(\lambda)]\) obtained from the Eq. (7). The thick red contour line corresponds to \(\max[\mathrm{Re}(\lambda)]=0\). It separates stable and unstable regions. The originally unstable focus is stabilized in the region \(\max[\mathrm{Re}(\lambda)]<0\). Network parameters: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=2\), and \(J=20\).
\[\dot{\theta}_{j} = 1-\cos\left(\theta_{j}\right) \tag{10}\] \[+ \left[1+\cos\left(\theta_{j}\right)\right]\left[\eta_{j}+Js(t)+I_{c }(t)\right].\]
We integrated these equations by the Euler method using a time step of \(dt=10^{-4}\). We have generated the values of the Lorentzian distributed (3) heterogeneous parameter deterministically using \(\eta_{j}=\bar{\eta}+\Delta\tan(\pi/2)(2j-N-1)/(N+1)\)] for \(j=1,\ldots,N\). For more details on modelling the Eqs. (10), see Ref. [42]. From the Eqs. (10), we estimated the Kuramoto order parameter [2]
\[Z=\frac{1}{N}\sum_{j=1}^{N}\exp(i\theta_{j}) \tag{11}\]
and used its relation with the spiking rate \(r\) and the mean membrane potential \(v\)[39]:
\[r=\frac{1}{\pi}\operatorname{Re}\left(\frac{1-Z^{*}}{1+Z^{*}}\right),\quad v= \operatorname{Im}\left(\frac{1-Z^{*}}{1+Z^{*}}\right), \tag{12}\]
where \(Z^{*}\) denotes complex conjugate of \(Z\).
Results derived from the microscopic model (10) for \(N=10^{4}\) neurons are presented in Fig. 3 by thin red curves. They are in good agreement with the results obtained from the reduced mean-field Eqs. (5). Thus, the control algorithm works well for a large population of \(N=10^{4}\) neurons, and the mean-field theory correctly predicts the dynamics of the population in the presence of control. To demonstrate network dynamics at the microscopic level, Fig. 3(c) shows raster plots of 200 randomly selected neurons. Without stimulation (\(t<5\)), most neurons spike coherently. Turning on the control at \(t>5\) destroys the coherent spiking and stabilizes the initially unstable incoherent state.
Although the results of the mean-field equations and the microscopic model are very close, there is a fundamental difference in the asymptotic dynamics of these two models. As \(t\to\infty\), the dynamic variables \((r,v)\) of the mean-field equations approach exactly the unstable fixed point \((r^{*},v^{*})\) of the uncontrolled system, and the control perturbation vanishes \(I_{c}(t)\to 0\). In the microscopic model, the variables \((r,v)\) exhibit small fluctuations around the fixed point \((r^{*},v^{*})\), and the control perturbation \(I_{c}(t)\) fluctuates around zero. Figure 4 shows the dependence of the variance \(\operatorname{Var}(I_{c})\) of the control perturbation in the post-transient regime on the network size \(N\). The variance decreases with increasing \(N\) and vanishes at \(N\to\infty\). This dependence is well described by the power law \(\operatorname{Var}(I_{c})\sim N^{-\gamma}\) with \(\gamma\approx 1.3\).
Let us now consider the control of coherent oscillations near a homoclinic bifurcation. We will use the following set of the parameters: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=-7\), and \(J=21\). For these parameters, the free (\(I_{c}=0\)) system (5) has a stable limit cycle and outside it a saddle point with coordinates \((r^{*},v^{*})\approx(0.4073,-0.3908)\) and two real eigenvalues \(\lambda_{1,2}\approx(2.5306,-3.9255)\). Stabilization of the incoherent state associated with the saddle point cannot be attained with an ordinary LPF and requires the use of an unstable LPF with a negative parameter \(\omega_{c}\). The eigenvalues of the saddle point in presence of the control are determined by the Eqs. (7) and (8). The dependence of the two largest real parts of the eigenvalues on \(k\) for a fixed \(\omega_{c}=-1\) is shown in Fig. 5. The saddle point stabilization mechanism is best understood from the root loci diagram shown in the inset. Here we show the evolution of eigenvalues in the complex plane \(\lambda\) as \(k\) changes from \(0\) to \(\infty\). Two crosses on the real axes determine the location of the eigenvalues at \(k=0\). One of them \(\lambda=2.5306\) corresponds to a free network, and the other \(\lambda=-\omega_{c}=1\) corresponds to a disabled unstable controller. With the increase of \(k\), they approach each other on the real axes, collide and pass to the complex plane. At \(k\approx 15.3\), they cross symmetrically into the left half-plane (Hopf bifurcation). For very large \(k\approx 91.8\), we have a collision on the real axis again, and then one of the roots goes to infinity, while the other approaches the origin. For \(k>15.3\), the closed loop system is stable.
Figure 4: The variance \(\operatorname{Var}(I_{c})\) of the control perturbation in the post–transient regime as a function of the network size \(N\). The asterisks show the result of the numerical simulation, and the dashed line shows the power-law approximation \(\operatorname{Var}(I_{c})=CN^{-\gamma}\) with \(C=1450\) and \(\gamma\approx 1.3\).
Figure 5: Linear stability of a saddle incoherent state of a population of QIF neurons controlled by an unstable controller with a negative parameter \(\omega_{c}=-1\). Dependence of two largest real parts of eigenvalues of the closed loop system on the control gain \(k\). The inset shows the root loci of the characteristic Eq. (7) in the complex plane \(\lambda\) as \(k\) changes from \(0\) to \(\infty\). The crosses on the real axes indicate the location of the eigenvalues at \(k=0\), and the dot at the origin shows the location of one of the eigenvalues at \(k=\infty\). Network parameters: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=-7\), and \(J=21\).
Figure 6 shows the results of stabilization of a saddle incoherent state with unstable controller parameters \(\omega_{c}=-1\) and \(k=20\). As in Fig. 3, the dynamics derived from the mean-field equations are shown as thick gray curves, and the corresponding dynamics derived from the microscopic model of \(10^{4}\) neurons are shown as thin red curves. Again, there is complete agreement between the mean-field theory and the microscopic theory. For \(t<10\), there is no control, and the system is in the limit cycle mode, which is close to a homoclinic bifurcation. For \(t>10\), the control is activated and the system approaches a stabilized incoherent saddle point. In the mean-field theory, the control perturbation vanishes asymptotically, while in the microscopic model it experiences small fluctuations around zero. Note that the steady-state spiking rate in saddle equilibrium is much lower than in focus equilibrium [cp. post-transient dynamics in Figs. 3(b) and 6(b)].
## 4 Controlling two interacting populations of excitatory and inhibitory QIF neurons
Let us now consider the control of a more complex network built from two connected populations of excitatory and inhibitory QIF neurons. We follow the model discussed in Ref. [16] whose network architecture mimics the network architecture used in Parkinson's disease models. Such models are usually based on two interacting neural populations of the subthalamic nucleus (STN) consisting of excitatory neurons and the external segment of the globus pallidus (GPe) consisting of inhibitory neurons (cf.,e.g., Ref. [58]). It was shown in [16] that synchronous oscillations can be very effectively suppressed by HF stimulation of the inhibitory population, while HF stimulation of the excitatory population is ineffective. Here we want to test whether our control algorithm applied to the excitatory population can suppress synchronization.
The microscopic model of the network considered here is determined by the set of \(2N\) neurons' membrane potentials \(\{V_{j}^{(E,I)}\}_{j=1,\ldots,N}\). They satisfy the system of \(2N\) ODEs [16]:
\[\tau_{m}\dot{V}_{j}^{(E,I)} = (V_{j}^{(E,I)})^{2}+\eta_{j}^{(E,I)}+\mathcal{I}_{j}^{(E,I)}, \tag{13}\] \[\text{if}\;\;V_{j}^{(E,I)}\geq V_{p}\;\;\text{then}\;\;V_{j}^{(E, I)}\gets V_{r},\]
where, \(V_{j}^{(E,I)}\) is the membrane potential of neuron \(j\) in the excitatory (E) or the inhibitory (I) population, and \(\tau_{m}\) is the membrane time constant. The threshold potential assumption is the same as in the previous model: \(V_{p}=-V_{r}=\infty\). The heterogeneous parameters \(\eta_{j}^{(E,I)}\) for populations E and I are taken from two independent Lorentzian distributions:
\[g_{E,I}(\eta)=\frac{1}{\pi}\frac{\Delta_{E,I}}{(\eta-\bar{\eta}_{E,I})^{2}+ \Delta_{E,I}^{2}}, \tag{14}\]
where \(\Delta_{E,I}\) and \(\bar{\eta}_{E,I}\) are respectively the width and the center of the distribution for the populations E and I. The last term \(\mathcal{I}_{j}^{(E,I)}\) in Eqs. (2) describes synaptic coupling and external stimulation in the respective populations:
\[\mathcal{I}_{j}^{(E)} = -J_{IE}r_{I}(t)+I_{c}(t), \tag{15a}\] \[\mathcal{I}_{j}^{(I)} = J_{EI}r_{E}(t)-J_{II}r_{I}(t). \tag{15b}\]
Unlike the previous model, here the interaction between neurons is provided by instantaneous pulses. Each time the potential of a given neuron reaches \(\infty\), it resets to \(-\infty\), and the neuron emits a Dirac delta spike, which contributes to the output of the network. The mean synaptic rates of E and I populations are as follows:
\[r_{E,I}(t)=\lim_{\tau_{s}\to 0}\frac{\tau_{m}}{\tau_{s}N}\sum_{i=1}^{N}\sum_{k} \int_{t-\tau_{s}}^{t}\delta(t^{\prime}-(t_{i}^{k})_{E,I})dt^{\prime}, \tag{16}\]
where \(\delta(t)\) is the Dirac delta function and \((t_{i}^{k})_{E,I}\) is the time of the \(k\)th spike of the \(i\)th neuron in E and I population, respectively. Parameters \(J_{EI}\), \(J_{IE}\) and \(J_{II}\) denote synaptic weights. The current \(J_{EI}r_{E}(t)\) excites I neurons due to the synaptic activity of E population and the current \(-J_{IE}r_{I}(t)\) inhibits E neurons due to the synaptic activity of the I population. The current \(-J_{II}r_{I}(t)\) recurrently inhibits neurons in population I. We are considering a stimulation protocol in which only the excitatory population is stimulated, so the control current \(I_{c}(t)\) is only included in the Eq. (15a).
Figure 6: Suppression of coherent oscillations in a population of QIF neurons by stabilization of a saddle incoherent state with an unstable controller at \(\omega_{c}=-1\) and \(k=20\). As in Fig. 3, the dynamics derived from the mean-field equations are shown as thick gray curves, and the corresponding dynamics derived from the microscopic model of \(10^{4}\) neurons are shown as thin red curves. All other designations are the same as in Fig. 3. The control turns on at \(t=10\). The network parameters correspond to Fig. 5.
In the limit \(N\to\infty\), this microscopic model reduces to an exact closed system of four ODEs for four biophysical quantities, mean firing rates \(r_{E,I}\) and mean membrane potentials \(v_{E,I}\) of populations E and I [39, 16]:
\[\tau_{m}\dot{r}_{E} = \Delta_{E}/\pi+2r_{E}v_{E}, \tag{17a}\] \[\tau_{m}\dot{v}_{E} = \bar{\eta}_{E}+v_{E}^{2}-\pi^{2}r_{E}^{2}-J_{IE}r_{I}+I_{c}(t),\] (17b) \[\tau_{m}\dot{r}_{I} = \Delta_{I}/\pi+2r_{I}v_{I},\] (17c) \[\tau_{m}\dot{v}_{I} = \bar{\eta}_{I}+v_{I}^{2}-\pi^{2}r_{I}^{2}+J_{EI}r_{E}-J_{II}r_{I}. \tag{17d}\]
Bifurcation analysis of an uncontrolled (\(I_{c}=0\)) system (17) showed a wide variety of different dynamic modes [16]. Here we focus on the case when the system has a single attractor, the limit cycle. Specifically, we consider the following set of system parameters: \(\Delta_{E}=0.05\), \(\bar{\eta}_{E}=0.5\), \(\Delta_{I}=0.5\), \(\bar{\eta}_{I}=-4\), \(J_{EI}=20\), \(J_{IE}=5\), \(J_{II}=0.5\), and \(\tau_{m}=14\) ms. At these parameters, the system, along with a stable limit cycle, has an unstable fixed point, which is a high dimensional focus with coordinates
\[\left(r_{E}^{*},v_{E}^{*},r_{I}^{*},v_{I}^{*}\right)\approx(0.1319,-0.0603,0. 0663,-1.1990)\]
and two pairs of complex conjugate eigenvalues \(\lambda_{1,2}\approx(0.0448\pm 1.0304i)/\tau_{m}\) and \(\lambda_{3,4}\approx(-2.5634\pm 0.8190i)/\tau_{m}\). Our goal is to stabilize this fixed point using the control algorithm defined by Eqs. (1), with the constraint that the available network output is the mean membrane potential of the excitatory population, \(v=v_{E}\), and the control current \(I_{c}\) is applied only to the excitatory population. Linear stability of the fixed point in the presence of control can be analyzed in a similar way as in the previous model. Now the characteristic equation has five eigenvalues. The dependence of the \(\max[\mathrm{Re}(\lambda)]\) on the control gain \(k\) for three different values of the cutoff frequency \(\omega_{c}\) is shown in Fig. 7. Again we see that the stability condition \(\max[\mathrm{Re}(\lambda)]<0\) is satisfied in a wide range of the control parameters \(k\) and \(\omega_{c}\).
Figure 8 shows the performance of the control algorithm for fixed values of \(\omega_{c}=0.5/\tau_{m}\) and \(k=0.5\). The dynamics of the free (\(t<300\) ms) and controlled (\(t>300\) ms) network, obtained from the mean-field Eqs. (17), are shown as thick gray curves. The control switches the state of the system from coherent limit cycle oscillations to the stabilized incoherent state and the feedback perturbation asymptotically vanishes. These results are consistent with numerical simulations of a microscopic model with \(N=10^{4}\) neurons in each excitatory and inhibitory population (thin red curves). As in the previous case, we changed the variables
\[V_{j}^{(E,I)}=\tan(\theta_{j}^{(E,I)}/2) \tag{18}\]
to rewrite the Eqs. (13) in terms of theta neurons:
\[\tau_{m}\dot{\theta}_{j}^{(E,I)} = 1-\cos\left(\theta_{j}^{(E,I)}\right) \tag{19}\] \[+ \left[1+\cos\left(\theta_{j}^{(E,I)}\right)\right]\left[\eta_{j} ^{(E,I)}+\mathcal{I}_{j}^{(E,I)}\right].\]
We integrated these equations by the Euler method with a time step of \(dt=2\times 10^{-5}\). For the numerical implementation of Eq. (16), we set \(\tau_{s}=5\times 10^{-5}\tau_{m}\). To estimate the variables of the mean-field theory, we calculated the Kuramoto order parameters
\[Z_{E,I}=\frac{1}{N}\sum_{j=1}^{N}\exp(i\theta_{j}^{(E,I)}) \tag{20}\]
for each population and evaluated the mean spiking rates and mean membrane potentials for populations E and I as [39]:
\[r_{E,I}=\frac{1}{\pi}\operatorname{Re}\left(\frac{1-Z_{E,I}^{*}}{1+Z_{E,I}^{* }}\right),\ v_{E,I}=\operatorname{Im}\left(\frac{1-Z_{E,I}^{*}}{1+Z_{E,I}^{* }}\right), \tag{21}\]
where \(Z_{E,I}^{*}\) denotes complex conjugate of \(Z_{E,I}\). Panels (a), (c) and (e) in Fig. 8 show a good agreement of time traces obtained from mean filed equations and microscopic model. Panels (b) and (d) are raster plots of randomly selected 500 neurons in excitatory and inhibitory populations, respectively.
## 5 Controlling a population of chaotically spiking Hindmarsh-Rose neurons
As a final example, consider the control of synchronous oscillations in a heterogeneous population of electrically coupled Hindmarsh-Rose neurons [55]:
\[\dot{v}_{j} = y_{j}-v_{j}^{3}+3v_{j}^{2}-z_{j}+I_{j}+K(v-v_{j})+I_{c}(t), \tag{22a}\] \[\dot{y}_{j} = 1-5v_{j}^{2}-y_{j},\] (22b) \[\dot{z}_{j} = r[\nu(v_{j}-\kappa)-z_{j}],\quad j=1,\ldots,N. \tag{22c}\]
Here, \(v_{j}\), \(y_{j}\) and \(z_{j}\) are the membrane potential, the spiking variable and the adaptation current of the \(j\)th neuron, respectively. The variable
Figure 7: Linear stability of incoherent state associated with a high-dimensional focus in a system of two interacting populations of excitatory and inhibitory QIF neurons in the presence of control. The entire network is controlled using the output and input of the excitatory population only. The maximum real part of the eigenvalues as a function of the control gain \(k\) is shown for different values of the cut-off frequency \(\omega_{c}\) of LPF. Network parameters: \(\Delta_{E}=0.05\), \(\bar{\eta}_{E}=0.5\), \(\Delta_{I}=0.5\), \(\bar{\eta}_{I}=-4\), \(J_{EI}=20\), \(J_{IE}=5\), \(J_{II}=0.5\), and \(\tau_{m}=14\) ms.
\[v=\frac{1}{N}\sum_{i=1}^{N}v_{i} \tag{23}\]
is the mean membrane potential. The heterogeneity of neurons is provided by currents \(I_{j}\), which we randomly select from a Gaussian distribution with a mean value of 3 and a variance of 0.1. Parameters \(r=0.06\), \(\nu=4\) and \(\kappa=-1.56\) are chosen such that free (\(K=0\) and \(I_{c}=0\)) neurons generate chaotic bursts. The term \(K(v-v_{j})\) in the Eq. (22a) determines the electrical coupling between neurons, where \(K\) is the coupling strength. To get synchronized oscillations of the uncontrolled population, we take this parameter large enough, \(K=0.1\). The last term \(I_{c}(t)\) in this equation is the control current given by Eqs. (1).
Figure 9 shows how the control with fixed parameters \(\omega_{c}=0.05\) and \(k=2\) changes the dynamics of a population of \(N=10^{4}\) coupled neurons. Without control (\(t<700\)), synchronous oscillations of large amplitude are observed in the dynamics of the mean membrane potential \(v(t)\), and coherent bursts are visible on the raster plot. Activation of control at \(t>700\) effectively suppresses synchronous oscillations of the mean membrane potential, and neurons demonstrate incoherent bursts. As in previous examples, only small amplitude oscillations around zero are observed in the asymptotic dynamics of control perturbation \(I_{c}(t)\). Figure 9(d) demonstrates that control almost does not affect the amplitude dynamics of individual neurons. As an example, we show the time trace of the membrane potential of the first neuron \(v_{1}(t)\) before and after activation of control.
Note that, unlike the previous examples, there is no known way to reduce this model to a low-dimensional system. Thus, here we cannot theoretically estimate the mean value of the membrane potential of an unstable incoherent state in the thermodynamic limit and determine whether the equilibrium is associated with an unstable focus or saddle and how its stability changes in the presence of control. However, our algorithm does not require such detailed knowledge and, with an appropriate choice of control parameters, works just as well as in previous relatively simple models that allow a low-dimensional reduction in the thermodynamic limit. Numerical simulations of this model show that our algorithm works only when \(\omega_{c}>0\) and fails when \(\omega_{c}<0\). This allows us to conclude that the unstable equilibrium in this model is an unstable focus.
## 6 Conclusions
We considered the problem of suppressing collective synchronous oscillations in large-scale neural networks. This problem is relevant in neurology, as excessive synchronized oscillations in certain areas of the brain are often associated with various neurological disorders [7; 8; 9; 10]. Synchronized oscillations usually appear when an equilibrium incoherent state of the network becomes unstable. Information about unstable network states is difficult to extract from experimental data. We have shown that a priory unknown unstable incoherent states of large-scale neural networks can be effectively stabilized using a simple first order feedback controller based on a low-pass filter. Initially, this controller was developed for stabilization of unknown unstable equilibrium points of low-dimensional dynamical systems [52; 53] and has not yet been tested for high-dimensional systems such as neural networks, consisting of a huge number of interacting neurons.
We have demonstrated the effectiveness of our control algorithm on three examples of neural networks. The first two examples refer to QIF neurons. In the thermodynamic limit, microscopic models of networks built from QIF neurons can be reduced to exact low-dimensional systems of mean-field equations. This greatly simplifies the analysis of the effect of control on network dynamics. In the first example, we demonstrated the suppression of synchronous oscillations in a population of excitatory QIF neurons interacting through synaptic pulses of finite width. Here we have stabilized two types of incoherent states associated with an unstable focus and a saddle equilibrium point. Until now, the control of the saddle equilibrium state in neural networks has not been considered in the literature. Here we have achieved stabilization of the saddle state with the
Figure 8: Suppression of coherent oscillations in a system of two interacting populations of excitatory and inhibitory QIF neurons by stabilization of unstable incoherent state associated with a high-dimensional focus. Dynamics of mean spiking rate of (a) excitatory and (c) inhibitory populations, and (e) control perturbation applied to the excitatory population. The dynamics derived from the mean–field equations are shown as thick gray curves, and the corresponding dynamics derived from the microscopic model with \(10^{4}\) neurons in each excitatory and inhibitory population are shown as thin red curves. (b), (d) Raster plots of 500 randomly selected neurons in E and I populations, respectively. The control turns on at \(t=300\) ms. Network parameters as in Fig. 7. Controller parameters: \(\omega_{c}=0.5/\tau_{\rm m}\) and \(k=0.5\).
help of an unstable controller. In the second example, we considered the control of a network built from two connected populations of excitatory and inhibitory QIF neurons, whose architecture mimics that, used in Parkinson's disease models [58]. Previously, it was shown that high-frequency stimulation of the inhibitory population can effectively suppress synchronization in such a network, but stimulation of the excitatory population is ineffective [16]. Here, our algorithm provided effective stabilization of the incoherent state of the network by using the output and input of the excitatory population. For both first examples, the results derived from the mean-field equations were confirmed by numerical simulations of the respective microscopic models. We have shown that networks of \(10^{4}\) neurons are quantitatively well described by mean-field equations. In the third example, we demonstrated the suppression of coherent oscillations in a population of electrically coupled Hindmarsh-Rose neurons. Low-dimensional reduction of the equations of the microscopic model is impossible in this case. However, the direct application of the control algorithm to the microscopic model showed that it works just as well as in the previous two examples. Note that successful stabilization of unstable incoherent states makes them experimentally observable, and this can serve as a quantitative benchmark for assessing the quality of neural network models.
Finally, we summarize the main advantages of the proposed algorithm for suppressing coherent oscillations in large-scale neural networks: (i) the algorithm does not require any detailed knowledge of the network model and its unstable incoherent equilibria; (ii) the algorithm is robust to changes in control parameters; (iii) the algorithm can stabilize not only incoherent states associated with an unstable focus but also with a saddle equilibrium point; (iv) for large networks the algorithm is weakly invasive: the control perturbation decreases according to a power law with increasing network size and vanishes as the network size tends to infinity; (v) the algorithm is adaptive, which means that it provides tracking of the equilibrium states in the case of slowly varying system parameters (see [52, 53] for details).
In this paper, we limited ourselves to the consideration of the simplest first-order controller to stabilize unknown incoherent states. More complex networks may require higher-order generalized adaptive controllers [52]. In addition, we emphasize that mean-field equations derived from microscopic dynamics accurately describe synchronization processes in large networks, and these models are well suited for testing and developing various algorithms for suppressing unwanted coherent oscillations.
## Acknowledgments
This work is supported by grant No. S-MIP-21-2 of the Research Council of Lithuania.
Figure 9: Suppression of coherent oscillations in a population of electrically coupled Hidmarsh-Rose neurons. The control is activated at the time \(t=700\). (a) Dynamics of the mean membrane potential. (b) Raster plot of 100 randomly selected neurons. (c) and (d) Time traces of the control perturbation and the membrane potential of the first neuron, respectively. Network parameters: \(r=0.06\), \(\nu=4\), \(\kappa=-1.56\), \(K=0.1\) and \(N=10^{4}\). Heterogeneous currents \(I_{j}\) in Eq. (22a) are randomly selected from a Gaussian distribution with a mean value of 3 and a variance of 0.1. Controller parameters: \(\omega_{\mathrm{c}}=0.05\) and \(k=2\). |
2309.09372 | A Survey on Congestion Control and Scheduling for Multipath TCP: Machine
Learning vs Classical Approaches | Multipath TCP (MPTCP) has been widely used as an efficient way for
communication in many applications. Data centers, smartphones, and network
operators use MPTCP to balance the traffic in a network efficiently. MPTCP is
an extension of TCP (Transmission Control Protocol), which provides multiple
paths, leading to higher throughput and low latency. Although MPTCP has shown
better performance than TCP in many applications, it has its own challenges.
The network can become congested due to heavy traffic in the multiple paths
(subflows) if the subflow rates are not determined correctly. Moreover,
communication latency can occur if the packets are not scheduled correctly
between the subflows. This paper reviews techniques to solve the
above-mentioned problems based on two main approaches; non data-driven
(classical) and data-driven (Machine Learning) approaches. This paper compares
these two approaches and highlights their strengths and weaknesses with a view
to motivating future researchers in this exciting area of machine learning for
communications. This paper also provides details on the simulation of MPTCP and
its implementations in real environments. | Maisha Maliha, Golnaz Habibi, Mohammed Atiquzzaman | 2023-09-17T20:33:06Z | http://arxiv.org/abs/2309.09372v1 | A Survey on Congestion Control and Scheduling for Multipath TCP: Machine Learning vs Classical Approaches
###### Abstract
Multipath TCP (MPTCP) has been widely used as an efficient way for communication in many applications. Data centers, smartphones, and network operators use MPTCP to balance the traffic in a network efficiently. MPTCP is an extension of TCP (Transmission Control Protocol), which provides multiple paths, leading to higher throughput and low latency. Although MPTCP has shown better performance than TCP in many applications, it has its own challenges. The network can become congested due to heavy traffic in the multiple paths (subflows) if the subflow rates are not determined correctly. Moreover, communication latency can occur if the packets are not scheduled correctly between the subflows. This paper reviews techniques to solve the above-mentioned problems based on two main approaches; non data-driven (classical) and data-driven (Machine Learning) approaches. This paper compares these two approaches and highlights their strengths and weaknesses with a view to motivating future researchers in this exciting area of machine learning for communications. This paper also provides details on the simulation of MPTCP and its implementations in real environments.
Multipath TCP, congestion control, scheduling, deep reinforcement learning, machine learning
## I Introduction
Communication is key in many domains, such as defense, hospitality, technology, and space. In telecommunications, packet switching is a method of grouping data in smaller packets for faster communication [1]. One of the earliest packet-switched networks started with the Advanced Research Projects Agency Network (ARPANET) [2] in the United States, which is also called the forerunner of the Internet. Today, the Internet has expanded and now consists of a set of protocols for global communications. Basically, a protocol is a set of rules that the sender and receiver must agree on to communicate with each other. The two well-known Transport layer protocols are User Datagram Protocol (UDP) [3] and Transmission Control Protocol (TCP). UDP does not need any handshaking, which means the receiver does not send any acknowledgment to the sender when it receives a message. UDP thus leads to faster communication.
TCP [4] provides more consistent communication by considering handshaking between the sender and receiver. With millions of devices connected to the Internet, there is a demand for faster communication, but TCP fails to meet that need. It's because of TCP's congestion control algorithm, which decreases the throughput (message delivery rate) in response to the loss of packets in the network. Also, the handshaking of TCP increases the time necessary for a packet to travel, resulting in higher latency. Keeping in mind these problems, Multipath TCP (MPTCP) has been introduced by the Internet Engineering Task Force (IETF) [5] to use multiple paths effectively and efficiently between the sender and receiver.
TCP connections can experience packet losses or connection drops, resulting in a poor user experience [6]. MPTCP can use multiple TCP connections, known as subflows, in parallel to overcome TCP's limitations. One of the main goals of MPTCP is to control congestion and maintain traffic flows. Another focus of MPTCP is scheduling the packets over different subflows to send packets with the smallest round-trip time (RTT) [7], which is the time taken to send a data packet to the destination and receive an acknowledgment from the receiver. There are a set of traditional techniques such as Dynamic-Window Coupling (DWC) [8], Opportunistic Linked Increases Algorithm (OLIA) [9], Balanced linked adaptation (BALIA) [10], and Adaptive and Efficient Packet Scheduler (AEPS) [11] that control congestion or simultaneously schedule packets over multiple paths in MPTCP.
Since the standardization of MPTCP, a lot of classical approaches have been proposed to improve the performance of the network in terms of throughput and latency, but most of them perform poorly in highly dynamic networks. Recently, _data-driven_ approaches, which are mostly based on Deep Reinforcement Learning, perform much better in dynamic networks because of their ability to learn the network conditions. To the best of our knowledge, there have been only a few proposals on controlling congestion while scheduling packets using deep reinforcement learning-based approaches. Although some methods have been proposed to control congestion, those works have not focused on reducing RTT. Some researchers have proposed schedulers to reduce latency, but they did not consider achieving high throughput.
One of the biggest benefits of using MPTCP is its capacity to use all the available subflows and boost the network's throughput. However, to achieve high goodput, a scheduling strategy is important. Scheduling in MPTCP distributes packets over different subflows based on the smallest RTT. Schedelvers using classical approaches, like [12, 13, 14], increased the throughput but failed to adapt to the dynamic nature of a real-world network. They were tested using simulation, which, of course, does not emulate the real-world scenario. Machine Learning and Reinforcement Learning models have improved the above drawbacks as they can learn from past experience and they are more robust to the dynamic nature of real-world networks; However, machine learning techniques are usually slower than classical approaches and need a huge dataset to train the models [15, 16, 17].
### _Contributions_
The _objective_ of this paper is to provide a brief overview of the existing work on MPTCP, including both classical and machine learning-based approaches. We discuss how previous researchers have addressed MPTCP challenges and summarized their solutions. Previous works have reviewed the existing congestion control and scheduling techniques for MPTCP [18, 19, 20, 7, 21]. Among those works, some review papers have focused on either congestion control of MPTCP or only the scheduling of MPTCP, while others have reviewed only the existing work on MPTCP establishment. There are also some works that have mentioned both congestion control and scheduling but have not focused much on the scheduling-based works of MPTCP. The _contributions_ of this paper are as follows:
* Discuss the difference between the traditional TCP and MPTCP communication protocols.
* Discuss in detail both the congestion control and the packet scheduling problems separately.
* Comparison between the performance of ML-based and classical algorithms in terms of controlling congestion and packet scheduling.
* Summarize basic concepts in MPTCP, including the establishment of MPTCP in real-world platforms and simulators.
* Highlights advantages and limitations of previous works that can help the readers investigate future improvements on MPTCP.
The rest of our paper is organized as follows; Section II describes the terminologies in communication and deep reinforcement learning. Section III compares TCP and MPTCP. Sections IV and V describe previous works on congestion control and scheduling of MPTCP, respectively. Section VI focuses on the performance of both congestion control and packet scheduling. MPTCP implementations in the kernel and NS-3 are described in Section VII. Lastly, in Section VIII, we conclude our survey by discussing future works in MPTCP congestion control and scheduling.
## II Background & Terminologies
### _Overview of TCP_
TCP is a connection-oriented communication standard that computer applications use to communicate over a network. It is a packet transfer protocol in the Transport Layer [22] of the TCP model. TCP uses only one dedicated path for packet transfer. Though TCP guarantees data integrity of the packets, it has to face packet loss, delay and other problems which are discussed in Section III. Also, network congestion is another major problem in TCP which is discussed briefly later. Section II-B summarizes some concepts in TCP which would be in common with MPTCP and they are also used in congestion control.
### _Some Concepts in TCP_
* **Round Trip Time (RTT):** The time required to send a packet from the client to the server, and the time it takes for the server to receive an acknowledgment about receiving the packet is known as the round trip time (RTT). Reducing the round trip time is a primary focus of MPTCP. Figure 1 illustrates the meaning of RTT.
* **Throughput vs Goodput:** Throughput refers to the total number of packets transferred to the destination within a fixed time frame. On the other hand, Goodput is the number of meaningful packets that are delivered to the destination within a given time frame.
* **Low Latency vs High Latency:** Latency refers to the amount of time required to send a packet from source to destination and back again. Low latency is always preferable.
Fig. 1: Illustration of the RTT of a packet from the sender to the receiver.
Fig. 2: Illustration of shared bottleneck scenario in a network [23].
* **Congestion Window (CWND):** The congestion window decides the number of bytes or how many packets will be sent at a given time. Depending on the larger congestion window size, the throughput will also be maximized. Congestion window size has been determined by the slow start and congestion avoidance phases of TCP which will be discussed in the next part.
* **Bottleneck vs Shared Bottleneck:** Bottleneck occurs when there is not enough network capacity in a connection to handle the current volume of traffic. On the other hand, when a bottleneck link is shared between multiple subflows, it is referred as a shared bottleneck which is useful for maximizing the throughput. Figure 2 illustrates the concept of a shared bottleneck scenario.
### _Congestion Control in TCP_
TCP's congestion control mechanism has three phases; (1) slow start phase; (2) congestion avoidance phase; and (3) congestion detection phase. The basic difference between these three phases is the rate of increase in the congestion window size.
* **Slow Start Phase:** Slow start phase works as a part of the congestion control algorithm in TCP by controlling the amount of data flow in a network. When a network becomes congested from excessive data in the network, the slow start phase chokes the traffic by limiting the congestion window size. In the slow start phase, the sender sends a packet that contains its initial congestion window, and the client responds with its maximum buffer size after receiving the packet. If the sender gets the acknowledgment from the receiver, the number of packets to be sent to the receiver is doubled. This procedure continues until no acknowledgment is received. The acknowledgment may not be received for two reasons: if congestion occurs or the window limit of the client is reached.
* **Congestion Avoidance Phase- Additive Increase:** The congestion avoidance phase starts when the congestion window size of the TCP reaches a threshold in the slow start phase. In this phase, the size of the congestion window increases linearly. To elaborate, assume the congestion window size at time \(t\) is 20 and all the packets have been transmitted successfully, then the congestion window size at time \(t+1\) will be 21.
* **Congestion Detection Phase- Multiplicative Decrease:** If congestion occurs in the slow start phase or congestion avoidance phase, the congestion window size is decreased. This is called the multiplicative decrease phase, where TCP follows an exponential reduction of the congestion window. The additive increase and multiplicative decrease phases of the congestion avoidance and detection phases are referred to as Additive Increase Multiplicative Decrease (AIMD). An example of AIMD is shown in Figure 3.
### _Overview of MPTCP_
As opposed to TCP, which solely considers one path to transfer data, MPTCP is a transport layer protocol that allows the transfer of packets along multiple paths between the sender and the receiver. This helps the network increase its load capacity, thereby transferring a larger number of packets compared to TCP. The paths in MPTCP are called subflows. When one or more of the subflows fails to send a packet, it can flow through other subflows, leading to a fault-tolerant network. MPTCP is used in several areas of communication where there is a need for high throughput and very low latency during packet transfer. MPTCP is used in different applications such as online streaming, networking, gaming industries, VPNs. Figure 4 depicts the use of MPTCP where a cellphone may use either one of the subflows from two subflows to get connected to the server: one is the Wifi, and another subflow is the 5G network. The following section explains the procedure for establishing the MPTCP communication.
#### Iii-D1 Establishment of MPTCP Connection
The establishment of MPTCP between a sender and the receiver has two stages; In the first step a single flow is established. This phase is similar to TCP. Then, subsequent subflows are created. In the first stage, the sender and the receiver use one subflow to set up the MPTCP connection between them by sharing randomly generated keys. This lays the foundation for creating further paths between the sender and the receiver. Figure 5 shows the establishment of an MPTCP connection using all subflows.
Fig. 4: Overview of MPTCP
Fig. 3: Change in the congestion window in AIMD algorithm when a packet loss is encountered.
In MPTCP, after the establishment of the initial handshake as in TCP, the subsequent subflows are also handshaked [24]. MPTCP follows three-way handshaking consisting of SYN (synchronize), ACK+SYN and ACK (acknowledge). In the SYN packet, the sender shares its own token and a random nonce (number). Here the token is the hash value of the key using some cryptographic function which can be calculated in the initial phase with the keys exchanged in that phase. Subsequently, in the SYN+ACK packet, the receiver creates an HMAC (Hash-based message authentication code), the receiver's token and its nonce.
The option MP_CAPABLE is used to check whether the remote host is MPTCP enabled in the initial subflow. The option MP_JOIN is used in the additional subflow establishment to associate with MPTCP connections. Lastly, the sender responds with its HMAC in the ACK packet. [24]
### _Overview of Machine Learning Concepts used in Congestion Control in MPTCP_
#### Iii-E1 RNN and LSTM
RNN (Recurrent Neural Network) is one kind of neural network that passes the output from the previous steps to the next steps. RNN consists of the input layer, hidden layers, and the output layer. Its hidden state remembers the previous information to predict the next output. RNN works well in terms of correlation, while LSTM (Long Short-Term Memory) not only connects the correlation but also focuses on the context of the information. LSTM is an artificial neural network that follows feedback connections and stores previous information to predict the next one. It is a modified version of the RNN that solves the vanishing gradient problem [25] and can easily process longer sequences. LSTM has an input gate, an output gate and a forget gate. The Input Gate takes an input and vectorizes the input value. The forget Gate is responsible for forgetting unnecessary information, while the output gate generates the output. This helps the framework keep the necessary information and forget the unnecessary ones. Figure 6 shows the different components of an LSTM and compares with RNN. The LSTM-based framework is very popular to create Deep Reinforcement Learning-based congestion control system for MPTCP [26, 27].
#### Iii-E2 Reinforcement Learning and Deep Q-Learning (DQN)
A Markov model [28] consists of a tuple \((s,a,r(s,a))\), where \(s\) is the current state vector, \(a\) is the vector of actions an agent takes, and \(R\) is the reward or the feedback the environment provides for the agent, given the current state and action. In a Markov Decision Process, the agent makes a set of actions, called policy, to maximize its expected reward. The function \(Q(s,a)\) defines the optimal (_i.e._, maximum) expected reward the agent can get, given the current state \(s\) and action \(a\). Evaluating \(Q\) is important to plan for the optimal policy. Due to the stochastic nature of many environments and agents, calculating the exact value of \(Q\) is usually not possible. Instead, the agent learns \(Q\) values from its experience; this is called Q-learning, a type of reinforcement learning.
DQN Reinforcement Learning (DRL) uses a deep neural network (DNN) to learn/estimate the \(Q\) function. The deep neural network could be RNN, LSTM, CNN etc. In contrast, a traditional Q estimation (e.g., Temporal Difference (TD) learning [29]), is usually a memory table that stores all the previous records of the different steps that have been taken, along with their rewards which is not scalable for an environment with large size of space and actions, or when the action/state space is continuous. Deep-RL has been used in communication for learning the communication strategy between multiple agents. DRL is also used in MPTCP implementation to control congestion and schedule packets to maximize throughput and get low latency. In MPTCP communication network, the state can be throughput, sending rate, RTT, and loss of packets; actions should be increasing/decreasing congestion window size and reward is the measurement of either good or bad performance of the network, based on the state and action.
#### Iii-E3 Actor-critic model
An actor-critic has two major components: an actor and a critic. An actor takes the state of the current environment and determines the best action that needs to be taken, depending on that state. Whereas the critic works
Fig. 5: Establishment of an MPTCP connection.
Fig. 6: Architecture of RNN (left) vs LSTM (right).
for the evaluation role by taking the environment state with actions and returning a score/reward (_e.g.,_ Q) to decide how good is that action for that state. Determining the best action depends on the Q score, which is calculated separately in the critic network. An actor-critic learning technique learns both a policy function and a value function at the same time. The value function aids in enhancing the value function's training procedures, while the policy function instructs on how to make decisions.
#### Ii-A4 Transformers and Self-Attention
The transformer is an encoder-decoder model and has been used in many applications [30]. Self attention follows the encoder part of transformers and can be used in many communication applications such as congestion control [27], or scheduling packets for MPTCP. In natural language processing or machine translation, self-attention for a particular word measures the dependency of that word on the other words; more relevant words have a higher value in self-attention, and loosely connected words have lower values [31, 32]. This concept is brought into the area of communication and subflows. Here the words have been replaced by the subflows. In MPTCP, self-attention checks the dependency of a subflow with the other subflows by assigning different weights to the states of the other subflows. Here the states of the subflows may refer as RTT, packet loss, packet delay, and throughput [27].
## III Traditional TCP vs Multipath TCP
In this section, we discuss some major challenges in network communication, how MPTCP and TCP address them in different situations, including the comparison of MPTCP and TCP performance.
### _Packet Loss_
Packet loss happens in a network when a packet fails to reach the receiver. Packet loss in a network indicates network congestion, disruption, or even a complete loss of connection. When a TCP connection encounters a packet loss, it considers a sign of network congestion and, therefore, halves the size of the congestion window and the threshold value; hence the throughput decreases. However, MPTCP is more robust in such cases; whenever it sees one of the subflows having packet loss, it reduces the congestion window of that subflow, while the other subflows remain intact. This was experimentally tested [33] where TCP and MPTCP transmission performances were compared using WiFi and LTE. In this real-world experiment, \(750\) individuals from \(16\) nations utilized a crowd-sourced smartphone application for \(180\) days. Deng et al. [33] compared MPTCP and TCP performance via WiFi or LTE from \(20\) different places across seven cities in the United States. The fastest link of normal TCP exceeds MPTCP performance for short flows. In this scenario, MPTCP fails to select an appropriate communication path to reduce transmission time for a small amount of data; hence, the packet loss increases for MPTCP. However, TCP experiences greater packet loss than MPTCP in log-distance communication experiments.
### _Packet Delay_
Packet delay in a network is the time taken to transfer a packet of data from sender to receiver. The packet delay is highly affected by the number of routers or switches in the route, the nature of the path the packet follows, and the congestion in that path. The vulnerability of packet delay in both TCP and MPTCP scenarios can be described by one communication example. If a computer is connected via both wireless and wired connections, it may communicate with servers using TCP or MPTCP. In terms of TCP, if the connection is established over the wireless connection, it will have a greater overall delay than MPTCP. Wireless connection experiences high packet loss as opposed to the wired connection which leads to higher latency [34].
MPTCP has the choice to choose either a wired or wireless connection, it may transfer the packets via the wired connection if it experiences major packet delay through a wireless connection. Raiciu et al. [35] have compared the performance of MPTCP and TCP-based on the packet delays in different network topologies. Their findings reveal that MPTCP can achieve \(90\) percent bandwidth utilization and low overall packet delay when the number of subflows is two and eight in the VL2 [36] and FatTree network [37] topologies, respectively. The result also showed that MPTCP not only increases the bandwidth but also increases the robustness to network changes by lowering the packet delay.
### _Out-of-Order Packets_
In TCP, a message is divided into multiple parts, known as packets. Each of these packets is given a unique number known as the sequence number. When these packets reach the receiver, it uses the sequence number to put them in order to retrieve the message. If these orders are not maintained during transmission, the delivery is called out-of-order delivery of packets. UDP is notorious for these deliveries as it does not consider any handshaking when each packet is received. But in TCP, the next packet is not sent until it gets the acknowledgment from the previous packet. If it gets a time out or negative acknowledgement (NACK) [38], the packets are sent again; therefore, out-of-order delivery is rare in TCP. However, in MPTCP, there is a high chance of out-of-order delivery of packets as they use different subflows with different delays. Therefore, scheduling is one of the most challenging tasks in MPTCP, and a lot of work has been done to address this issue. Yang et al. [39] have mentioned a situation where the jitter can happen for transferring data in MPTCP. They tackled this issue using an innovative traditional scheduling process named Delay Aware Packet Scheduling technique to remove the jitter in the packets. Han et al. [17] used a queue to keep redundant packets that may get lost.
### _Round Trip Time_
Round Trip Time (RTT) has been discussed in section II. Chen et al. [40] have compared the performances of TCP and MPTCP over WiFi and cellular networks where the authors
compared the RTTs of the transmission protocols. They conducted two sets of experiments; in the first experiment, they used small-sized files, and in the second, large-sized files were used. In the first experiment, the file size varied from 8KB to 32MB. When WiFi is the default route, there is no discernible gain in MPTCP download performance over TCP. For small file downloads (such as 64 KB), the single route via WiFi delivers the optimum speed. However, a single LTE (Long-Term Evolution) 1 channel becomes the optimum option for relatively longer traffic flows. MPTCP outperforms TCP for larger files.
Footnote 1: wireless broadband communication standard came before 4G network
## IV Congestion Control of MPTCP
Congestion control is a concept of controlling congestion in a network and could happen in both TCP and MPTCP. Congestion occurs when there is too much data that needs to be sent through a network. Congestion control regulates the flow of data packets into the network, allowing for efficient use of a shared network infrastructure and preventing congestion collapse. In TCP, where there is only one subflow, the network is easily congested.
Different types of algorithms have been proposed to improve the congestion in TCP networks, such as TCP Cubic [41], TCP Vegas [42], and TCP Reno [42]. MPTCP provides several subflows, which results in a reduction of congestion. MPTCP has been designed to address the congestion issue, while still having the traffic flowing like a single-path TCP. A naive implementation of CC in a multipath setting would be using regular TCP congestion control for each subflow; however, it is not efficient as MPTCP which uses multiple concurrent TCP connections. Having a congestion control that manages the packet flows on subflows concurrently seems more efficient. For this purpose, many methods have been proposed to improve congestion in MPTCP. Congestion control algorithms for MPTCP are classified as classical and machine learning approaches which will be discussed in the following subsections.
### _Classical Congestion Control Approaches_
Most of the existing congestion control algorithms in MPTCP setting focus on the Congestion-Avoidance (CA) phase that solely considers long flow transmissions and does not focus much on the slow start phase. The congestion avoidance phase prevents a network from being overflooded by data such that it discards packets with low priority to be delivered, and the rate of transmission rises linearly over time. Another approach is to focus on Slow Start. The Slow Start phase limits the quantity of data to be sent over a network to avoid congestion. However, it causes exponential growth of the congestion window in the uncoupled Slow-Start (SS) phase, leading to buffer overflow from burst data. In terms of solving the mentioned problems, Yang et al. [43] have proposed a Throughput Consistency Congestion Control (TCCC) algorithm which consists of both Coupled Slow-Start (CSS) and Aggressive Congestion Avoidance (ACA). The usage of CSS prevents packet loss brought on by large data bursts and ACA works on getting fair bandwidth which is shared in congestion avoidance. Their proposed framework enhances transmission efficiency. However, the CSS algorithm only plays a part in the initial slow start phase of MPTCP. As the subflows of MPTCP belong to different phases in congestion control (see Section II-C), the CSS algorithm needs much extra consideration, which makes congestion control more challenging in MPTCP [43].
Traditional AIMD used in TCP shows poor performance adaptation in terms of network state-changing situations in MPTCP. Gilad et al. [44], have presented a method named MPCC that uses online learning. Their implementation has been performed in Linux kernel and the method has been tested on different network conditions and many different network topologies. In terms of improving the implementation, their analysis needs to be reached beyond parallel link networks. As further research, they have mentioned about boosting the performance for short flows and solving the bandwidth mismatch problems on network paths.
An energy-aware based congestion control algorithm (ecMTCP) has been developed by Le et al. [12] where the method distributes traffic between the most crowded and least crowded paths, as well as across paths with different energy costs, to achieve load balancing and energy savings. For simulation purposes, they used NS-2 simulator [45], and their design mechanisms can work on getting higher throughput in terms of both TCP and MPTCP flows. The main goal was to shift the traffic to less energy-intensive and less crowded paths. Cao et al. [13] proposed weighted Vegas (wVegas), a delay-based congestion control scheme for MPTCP. This algorithm has detected the packet queuing delay of each path and ultimately has decreased the packet load of congested subflows by increasing the load of the less congested one. This framework performed traffic shifting which can cause less packet losses and provide better traffic balance in subflows. Cao et al. used NS-3 simulator [46] to conduct the simulation and build a Network Utility Maximization Model by proposing an approximate iterative algorithm to reach their aim of controlling congestion.
Ji et. al [14] mentioned that existing multipath congestion control algorithms are unable to quickly adjust to dynamic traffic due to the heterogeneous Quality of Service (QoS). QoS refers the technologies that work on a network to manage traffic and enhance performance by reducing packet loss, delay, and latency in a network. It may lead to poor performance in certain network environments. To mitigate these issues, firstly, the authors have noticed the performance constraints of the most recent multipath congestion control algorithms through vast experimentation. Then, they used a unique control policy optimization phase referred to an adaptive QoS-aware multipath congestion management system that can quickly adapt to network changes. Their method uses the Random Forest Regressing (RFR) [47] method to carry out QoS-specific utility function optimization to adapt and encourage the improvement of the selected performance metric. They
conducted the implementation in Linux kernel and showed their work outperformed most of the multipath congestion control methods such as wVegas [13], Opportunistic Linked Increases Algorithm (OLIA) [9], Balanced Linked Adaptation (BALIA) [10].
Singh et al. [48] improved the Opportunistic Linked Increases Algorithm [9] and Dynamic-Window Coupling (DWC) [8]. The authors provided a mechanism to reduce the overall packet reordering delay and focused on the buffer size in the receiver side. Their proposed work showed good performance in terms of various bottleneck scenarios.
Hassayoun et al. [8] proposed a multipath congestion control scheme called Dynamic-Window Coupling (DWC) to obtain higher throughput to each end-to-end multiple paths. The authors also detected shared bottlenecks by monitoring loss and delay signals, and then grouped their congestion control mechanism over all subflows that shared a common bottleneck. Detecting the bottleneck for network conditions and regrouping the subflows in terms of the same bottlenecks leads to higher throughput. They also introduced subflow sets, a concept for enabling subflows to smoothly switch between independent and shared bottleneck-based congestion control. The algorithm has been implemented in NS-2 simulator. As future research, the authors introduced the possibility of including "memory" into the detection method for detecting the previous subflow groupings.
Ferlin et al. [49] have mentioned that increasing the bandwidth of multiple links and getting higher throughput will be impossible if two or more paths do not share a bottleneck. They found out the nonshared bottleneck paths of the coupled congestion control for links, they referred to it as a penalty, and to overcome it, they implemented shared bottleneck detection (SBD) algorithm for MPTCP. This work can balance congestion and throughput. Their observation has shown that in the case of non-shared bottleneck scenario, the maximum throughput can be achieved up to 40% with two subflows. Also, the throughput gain increased by above 100% when the number of subflows increased to five. Their implementation has been performed in Linux kernel and for emulation purposes, they have also used CORE network emulator [50].
### _Machine Learning Approaches for Congestion Control in MPTCP_
Though lots of work have been done in classical-based approaches for congestion control of MPTCP. But for controlling congestion, classical-based approaches focus solely on different types of congestion indicators (_i.e._, packet loss or RTT). In the case of classical-based approaches, the decision-making process totally depends on these unpredictable factors, which leads to poor performance. Whereas ML-based approaches aim to provide decisions based on experience, and can adapt well to any network situation. Thus, ML-based approaches outperform classical-based methods [51].
In reality, networks are dynamic, and the state of the network changes frequently. Due to that, MPTCP performs poorly in many practical situations as MPTCP has to adapt in new network states. Zhuang et al. [52], introduced a Reinforcement Learning technology that can learn the best route to send TCP packets such that the throughput has been maximized. They proposed a simple algorithm for controlling multipath congestion, where congestion control has been approached as a multi-armed bandit [53] issue based on online learning (MP-OL), which allowed flexible and adaptive transmission rate adjustments for each subflow with good performance.
In [26], the authors proposed a Deep Reinforcement Learning (DRL)-based framework to control congestion where a single DRL agent has been utilized to perform congestion control for all MPTCP flows to maximize the total utility. Figure 7 illustrates the concept of DRL for MPTCP congestion control. They implemented the MPTCP in the Linux kernel and used an LSTM-based neural network under a DRL framework to develop a representation for all active flows. Their work was the first work where the authors incorporated the LSTM-based representation network into an actor-critic architecture for controlling congestion which used the deterministic policy gradient [54] to train the critic, actor, and LSTM networks.
He et al. [27] worked on increasing/decreasing the sending rates of packets in response to congestion, where each DRL agent can control the congestion window size of each subflow. Their proposed DRL-based MPTCP framework also included self-attention, which has been used to check the dependencies of one subflow with the weighted sum of other subflows. They compared their work with DRL-CC [26] and showed their method outperforms DRL-CC.
Li et al. [55] proposed a method called SmartCC which can learn a set of congestion rules for observing the environments and taking actions to adjust the congestion window size of each subflow. For the MPTCP implementation task, the authors used the NS-3 simulator.
The Internet of Deep Space Things, or IoDST, offered
Fig. 7: Subflows are controlled by a DRL agent.
communication services for mission spacecraft that send video data. To improve TCP throughput and stream playback, Ha et al. [15] designed a congestion control framework for MPTCP, which can be used for data streaming transmission. Their proposed Q-learning and Deep Q-Network (DQN)-based congestion control scheme calculated the ideal congestion window for data transfer in IoDT conversations.
Xu et al. [56] proposed the SGIN-based High-Speed Railway (HSR) scenario with MPTCP. Space-ground integrated networks (SGINs) has been referred to as promising network architecture that provides seamless, high-rate, and reliable data transmission with incredibly wide coverage. By utilizing MPTCP in the SGIN, simultaneous data transfer over terrestrial and satellite networks has been made possible. However, due to MPTCP's current congestion control (CC) mechanisms, it's difficult to know the difference between negative effects (like packet loss and/or increased round-trip time) brought on by congestion and those brought on by handovers. This may lead to severe performance degradation in the SGIN-based HSR scenario, where handover may occur frequently. To solve it, a DRL-based novel approach has been proposed to improve the goodput which outperformed other state-of-the-art algorithms.
Xu et al. [57] presented a DRL-based novel framework for traffic engineering that can make decisions under the guidance of actor-critic networks. In their work, the state consisted of two components, such as the throughput and the delay of each communication session. On the other hand, the action has been defined as the solution to Traffic Engineering (TE) problems. The authors used the NS-3 simulator, and the reward of the model was the sum of the output from the utility function for an entire communication session. The utility function was a function of the throughput and delay of the network, which depicted how the network can perform. In the paper, each session had 20 iterations. In each iteration, the agent sent its actions to the environment and recorded the value from the utility function before updating the reward value. While they considered only one DRL agent (_i.e._, decision maker) in their framework, adding multiple agents can be considered to further improve the performance.
Pokhrel et al. [58] introduced a transfer learning-based MPTCP framework for Industrial IoT, where the neighboring machines can collaborate to learn from each other. In their approach, when a new DRL system controlling the IoT network joins the environment, it can use the idea of transfer learning. 2 NS-3 was used to simulate the algorithm. Their model has been proven theoretically and needs further research to determine its performance in a real-world situation.
Footnote 2: Transfer learning uses a previously trained model as the foundation for a new model on a different task.
## V Scheduling of MPTCP
Scheduling of MPTCP decides the amount of data that needs to be scheduled to different subflows based on getting the higher performance (high throughput, low latency, less packet loss) in MPTCP. In this section, different classical and ML-based approaches have been discussed which can be used to schedule packets in MPTCP.
### _Classical Approaches for Scheduling in MPTCP_
Hwang et al. [59] dealt with the problem of scheduling small-length packets. However, the authors mentioned MPTCP is usually advantageous for long-lived flows, and it performs worse than single-path TCP when the flow size is tiny (_e.g._, hundreds of KiloBytes). In this scenario, the quickest method is preferable since latency is far more critical than network bandwidth with such tiny data deliveries. The regular MPTCP packet scheduler may pick a slow path if the fast path's congestion window is unavailable, resulting in a delayed flow completion time.
To address this issue, Hwang et al. [59] suggested a novel MPTCP packet scheduler that momentarily blocks the slow path when the latency difference between the slow and fast paths is considerable, allowing the tiny quantity of data to be delivered swiftly via the fast path. The authors used the method to find the subflow with the lowest RTT regardless of the availability of the congestion window, and then they used the existing Lowest-RTT-First policy [60] to choose the optimal subflow. They then returned the best one if the difference between the best subflow RTT and the minimum RTT is less than a certain threshold. They picked 100ms for threshold delay when testing 3G and WiFi networks in this paper.
Chaturvedi et al. [11] analyzed different existing schedulers and identified some current outstanding concerns, such as head-of-line (HoL) blocking and out-of-order packet delivery. HoL blocking may occur when a single data packets queue may wait to be transmitted and the packet at the head of the line may not be able to move ahead due to congestion [61]. These problems reduce MPTCP performance and to mitigate the issues, the authors have presented an adaptive and efficient packet scheduler (AEPS). This novel MPTCP packet scheduler not only addresses these concerns but also offers high throughput with a short completion time by using the capacity of all available pathways. AEPS can send data packets to the receiver in the order they were received, and its performance is unaffected by the size of the receiver buffer, or the size of the data being transmitted. The AEPS has been developed with three objectives: (1) packets should arrive to the receiver buffer; (2) all pathways' bandwidth should be used; and (3) completion time should be as short as possible. According to the authors, the first condition assisted AEPS in resolving the HoL blocking and received window-limiting issues by sending packets to the receiver buffer in sequence. The second condition summed the bandwidth of each interface (path) by using all accessible pathways to the MPTCP source, which also helped to enhance throughput. The third criterion aided in choosing the routing for each packet so that the total network completion time can be minimized.
Dong et al. [62] thoroughly compared existing scheduling algorithms and guided the development of new scheduling algorithms in 5G. The authors examined the influence of several
network parameters, such as RTI, buffer size, and file size, on the performance of current extensively used scheduling algorithms over a wide range of network circumstances. The paper compares the Lowest-RTT-First [60], Delay-aware packet scheduler (DAPS) [63], Out-of-order transmission for in-order arrival scheduler (OTIAS) [64] and Blocking estimation-based MPTCP scheduler (BLEST) scheduling [65] algorithms. The number of timeouts and flow completion time are compared in the path heterogeneity test. The results showed Lowest-RTT-First has the most timeouts, while the BLEST has the fewest. BLEST surpasses other algorithms in varying buffer size outcomes, followed by OTIAS and DAPS. Since BLEST can dynamically predict whether head-of-line blocking will occur and hence minimizes the quantity of out-of-order packets. In the different file size tests, BLEST and LowRTT perform better than DAPS, and OTIAS outperforms BLEST.
Le et al. [66] tackled the problem of out-of-order delivery in MPTCP. Because of the diverse nature of latency and bandwidth on each channel, the out-of-order packet issue becomes severe for MPTCP. To solve this issue, the authors presented the forward-delay-based packet scheduling (FDPS) method for MPTCP. The technique is divided into two parts: predicting the forward delay differences across pathways and picking data to send through a path when the congestion window is available.
### _Machine Learning Approaches for scheduling in MPTCP_
In recent times, many ML-based approaches have been proposed to improve the scheduling mechanism of MPTCP. Though classical approaches achieve good performance in terms of scheduling in MPTCP, ML-based approaches also show promising result and becomes popular in terms of getting higher throughput with lower latency than non-ML methods.
Wu et al. [16] applied a learning-based technique to schedule packets in the different paths of an MPTCP. The authors have presented FALCON, a learning-based multipath scheduler that can adapt to changing network circumstances quickly and correctly using meta-learning. The meta-learning algorithm comprises two parts: offline training and online training parts. The online learning module captures the network-changing conditions whereas the offline learning module takes the experience(data) from the online module and divides the experience into different groups depending on the network conditions.
Han et al. [17] used the technique of redundancy of packets to reduce packet loss by suggesting EdAR (Experience-driven Adaptive Redundant packet scheduler). In the face of dramatic network environment changes, EdAR enables dynamically scheduling redundant packets using an experience-driven learning-based strategy for multipath performance enhancement. To allow accurate learning and prediction, a Deep Reinforcement Learning (DRL) agent-based framework has been created that learns both the network environment and the optimal course of action. EdAR has two transmission modes: standard transmission and redundant transmission. Standard transmission follows the regular data transition. Regarding the redundancy transmission, there is a buffer called redundant buffer. The redundant buffer holds packets that have already been transmitted but have yet to be acknowledged. If a new packet is transmitted from the send buffer on a subflow, it is copied to the redundant buffer. If a packet in the redundant buffer is not sent out or acknowledged, it is deleted from the redundant buffer. Silva et al. [67], used linear regression [68] to predict throughput and latency in MPTCP subflows, and proposed Artificial Neural Network [69]-based linear classifier to choose the best subflow which can provide better performance in MPTCP scheduler. They implemented their work in NS-3 simulator.
## VI Congestion Control and Scheduling of MPTCP
Few works have been done focusing on congestion control and scheduling the packets of MPTCP at the same time. Those works get higher throughput and lower latency in terms of performance evaluation. Though further research regarding congestion control with packet scheduling is needed to be done. This section reviews classical and machine learning approaches that have been done in this domain.
### _Classical Approaches on both Congestion Control and Scheduling of MPTCP_
Wei et al. [23] proposed a model that gets higher throughput when the networks do not go through a shared bottleneck. Their work had two outcomes: (1) When no congestion would occur, their method has been able to get higher throughput than a single TCP. (2) When there is congestion in the network, their method has at least the same throughput as TCP. Their method also measured how severe or minor the congestion is in the network. They have introduced both SB-CC (Shared Bottleneck-based Congestion Control Scheme) and SB-FPS (Shared Bottleneck-based Forward Prediction packet Scheduling scheme), where SB-CC can detect shared bottlenecks and estimate the congestion degree of all subflows. SB-FPS can perfectly schedule data in shared bottleneck and can also distribute data according to the congestion window size of each subflows. For implementing MPTCP, they used the Linux kernel and achieved higher throughput.
### _Machine Learning Approaches on both Congestion Control and Scheduling of MPTCP_
Pokhrel et al. [71] have introduced the Deep Q learning (DQL)-based method to control congestion and schedule packets for MPTCP. Their proposed DQL framework has utilized the LSTM-based recurrent neural network where in their framework the Q function provided the logarithm value of goodput for the previous iteration. Here, the policy function was the actor-critic of two LSTMs and the value function was the reward. They considered RTT, throughput, and sending rate as the state. Depending on the state, their model provided action on whether window size needed to be increased or decreased and what changes can be taken in the schedule of packets for the subflows. In their work, the reward was the
summation of all the Q functions for all subflows. Similar to other RL algorithms, the optimal decision was learned to maximize the reward. They have made their MPTCP implementation in the Linux kernel and achieved low delays with maximum goodput.
## VII Implementation of MPTCP
In this section, we describe different ways of implementing MPTCP either in real hardware (kernel) or in simulator and list some of the works for each type of implementation as the reader's reference. Previously, NS-2 simulator has been used for implementing MPTCP [72, 8]. Now, most of the recent works have focused on implementing MPTCP in the Linux kernel after enabling MPTCP in the operating system or using the NS-3 simulator. Very few works implemented MPTCP on the CORE emulator.
### _Simulation_
Chihani et al. [73] implemented MPTCP in the NS-3 simulator and introduced a new protocol that worked better in various network conditions. They compared different packet reordering systems and analyzed that their implementations will be necessary for further MPTCP performance analysis in terms of controlling congestion. Nadeem et al. [74] worked on introducing three path managers; default, ndiffports, and fullmesh to create an MPTCP patch for implementing MPTCP in the NS-3 development version. While the default patch has not made any new subflows, fullmesh made a mesh of
whole new subflows towards the feasible pairs of IP addresses; midffports introduced subflows in between the same IP pair with the help of distinct source and destination. It showed better results in terms of getting higher throughput and less flow completion time than prior works. Coudron et al. [75] proposed MPTCP implementations in NS-3 to handle network traffic. They also compared their algorithm with previous work implemented in NS-3 and Kernel. Table I lists some other MPTCP implementations in NS-2, NS-3 and CORE simulator with the aim of congestion control or schedule packets or perform both congestion control and packet scheduling for MPTCP.
### _Real Hardware (kernel)_
Network simulators sometimes fail to depict the original network conditions, as the real-world network is highly dynamic; the breaking of links and the creation of new links are spontaneous. Therefore, the evaluation of MPTCP on real-world networks using Linux kernels shows a much bigger picture of its strengths and weaknesses. In the work [76], the authors have implemented MPTCP in a Linux kernel to study if each subflow has a different scheduler and then how the different subflows of an MPTCP may dispute bottleneck links with conventional single-path TCP. They tested LIA, OLIA, BALIA and wVegas on Linux kernel implementation of MPTCP and evaluated the throughput, latency, etc., on real-world networks. Zannettou et al. [24] used the kernel implementation of MPTCP to show their MPTCP-aware scheduling performs better than random hashing of packets to subflows which is generally used. They used the FatTree [77] and Jellyfish [78] topologies to conduct their experiments. FatTree is a highly structured topology used in data centers to obtain the highest throughput cost-effectively, while Jellyfish is the most commonly used randomly structured topology which can support more hosts than the FatTree, while keeping almost the same throughput. The commercial application for MPTCP support is available online [79].
## VIII Conclusion
This paper focuses on two crucial concepts in MPTCP - congestion control and scheduling. The study shows how the most recent works fulfill the previous work gaps and mitigate the above two MPTCP issues using different classical and ML-based approaches. Our study also presents the advantages and limitations of current works and encourages the researchers to continue further improvements in this domain. As in every communication sector MPTCP establishes a tremendous role, it is necessary to improve the performance of MPTCP, and our paper can be beneficial for the readers to have an extensive knowledge of MPTCP performance issues and can use it for proposing new algorithms.
|
2309.08078 | A sub-pc BBH system in SDSS J1609+1756 through optical QPOs in ZTF light
curves | Optical quasi-periodic oscillations (QPOs) are the most preferred signs of
sub-pc binary black hole (BBH) systems in AGN. In this manuscript, robust
optical QPOs are reported in quasar SDSS J1609+1756 at $z=0.347$. In order to
detect reliable optical QPOs, four different methods are applied to analyze the
4.45 years-long ZTF g/r/i-band light curves of SDSS J1609+1756, direct fitting
results by sine function, Generalized Lomb-Scargle periodogram, Auto-Cross
Correlation Function and Weighted Wavelet Z-transform method. The Four
different methods can lead to well determined reliable optical QPOs with
periodicities $\sim340$ days with confidence levels higher than 5$\sigma$, to
guarantee the robustness of the optical QPOs in SDSS J1609+1756. Meanwhile,
based on simulated light curves through CAR process to trace intrinsic AGN
activities, confidence level higher than $3\sigma$ can be confirmed that the
optical QPOs are not mis-detected in intrinsic AGN activities, re-confirming
the robust optical QPOs and strongly indicating a central sub-pc BBH system in
SDSS J1609+1756. Furthermore, based on apparent red-shifted shoulders in broad
Balmer emission lines in SDSS J1609+1756, space separation of the expected
central BBH system can be estimated to be smaller than $107\pm60$ light-days,
accepted upper limit of total BH mass $\sim(1.03\pm0.22)\times10^8{\rm
M_\odot}$. Therefore, to detect and report BBH system expected optical QPOs
with periodicities around 1 year is efficiently practicable through ZTF light
curves, and combining with peculiar broad line emission features, further clues
should be given on space separations of BBH systems in broad line AGN in the
near future. | XueGuang Zhang | 2023-09-15T00:35:38Z | http://arxiv.org/abs/2309.08078v1 | # A sub-pc BBH system in SDSS J1609+1756 through optical QPOs in ZTF light curves
###### Abstract
Optical quasi-periodic oscillations (QPOs) are the most preferred signs of sub-pc binary black hole (BBH) systems in AGN. In this manuscript, robust optical QPOs are reported in quasar SDSS J1609+1756 at \(z=0.347\). In order to detect reliable optical QPOs, four different methods are applied to analyze the 4.45 years-long ZTF g/rf-band light curves of SDSS J1609+1756, direct fitting results by sine function, Generalized Lomb-Scargle periodogram, Auto-Cross Correlation Function and Weighted Wavelet Z-transform method. The Four different methods can lead to well determined reliable optical QPOs with periodicities \(\sim 340\) days with confidence levels higher than 5\(\sigma\), to guarantee the robustness of the optical QPOs in SDSS J1609+1756. Meanwhile, based on simulated light curves through CAR process to trace intrinsic AGN activities, confidence level higher than 3\(\sigma\) can be confirmed that the optical QPOs are not mis-detected in intrinsic AGN activities, re-confirming the robust optical QPOs and strongly indicating a central sub-pc BBH system in SDSS J1609+1756. Furthermore, based on apparent red-shifted shoulders in broad Balmer emission lines in SDSS J1609+1756, space separation of the expected central BBH system can be estimated to be smaller than 107 \(\pm\) 60 light-days, accepted upper limit of total BH mass \(\sim(1.03\pm 0.22)\times 10^{8}\)M\({}_{\odot}\). Therefore, to detect and report BBH system expected optical QPOs with periodicities around 1 year is efficiently practicable through ZTF light curves, and combining with peculiar broad line emission features, further clues should be given on space separations of BBH systems in broad line AGN in the near future.
keywords: galaxies:active - galaxies:nuclei - quasars:emission lines - quasars:individual (SDSS J1609+1756)
## 1 Introduction
Binary black hole (BBH) systems on scale of sub-parsecs in central regions of active galactic nuclei (AGN), as well as dual core systems on scale of kpcs (or AGN pairs), are common as discussed in Begelman et al. (1980); Mayer et al. (2010); Fragione et al. (2019); Mannerkoski et al. (2022); Wang et al. (2023), considering galaxy merging as an essential process of galaxy formation and evolution (Kauffmann et al., 1993; Silk and Rees, 1998; Lin et al., 2004; Merritt, 2006; Bundy et al., 2009; Satyapal et al., 2014; Rodriguez-Gomez et al., 2017; Bottrell et al., 2019; Martin et al., 2021; Yoon et al., 2022). Meanwhile, in the manuscript, through discussions in more recent reviews in De Rosa et al. (2019); Chen et al. (2022), a kpc dual core system means central two BHs are getting closer due to dynamical frictions, but a sub-pc BBH system means central two BHs are getting closer mainly due to emission of gravitational waves. Besides indicators for BBH systems and/or dual core systems by spectroscopic features as discussed in Zhou et al. (2004); Komossa et al. (2008); Boroson and Laurer (2009); Smith et al. (2009); Shen and Loeb (2010); Eracleous et al. (2012); Comerford et al. (2013); Liu et al. (2016); Wang et al. (2017); De Rosa et al. (2019); Zhang (2021d) and by spatial resolved image properties as discussed in Komossa et al. (2003); Rodriguez et al. (2009); Piconcelli et al. (2010); Nardini (2017); Kollatschny et al. (2020), long-standing optical Quasi-Periodic Oscillations (QPOs) with periodicities around hundreds to thousands of days have been commonly accepted as the most preferred indicators for central BBH systems in AGN.
Long-standing optical QPOs have been reported in AGN related to central BBH systems in the literature. In the known quasar PG 1302-102, Graham et al. (2015a); Liu et al. (2018); Kovacevic et al. (2019) have shown detailed discussions on reliable 1800 days optical QPOs. Meanwhile, strong evidence have been reported to support optical QPOs in other individual AGN, such as 540 days QPOs in PSO J334.2028+01.4075 in Liu et al. (2015), 1500 days QPOs in SDSS J0159 in Zheng et al. (2016), 1150 days QPOs in Mrk915 in Serafinelli et al. (2020), 1.2 years QPOs in Mrk 231 in Kovacevic et al. (2020), 1607 days QPOs in SDSS J0252 in Liao et al. (2021), 6.4 years optical QPOs in SDSS J0752 in Zhang (2022a), 3.8 years optical QPOs in SDSS J1321 in Zhang (2022c), etc. Moreover, besides the optical QPOs reported in individual AGN, a sample of 111 candidates with optical QPOs have been reported in Graham et al. (2015) based on strong Keplerian periodic signals over a baseline of nine years, and a sample of 50 candidates with optical QPOs have been reported in Charisi et al. (2016).
While detecting BBH systems through optical QPOs, two important points have serious effects on reliability of BBH system expected QPOs. First, comparing with periodicities of detected optical QPOs, time durations of light curves are not longer enough to support reliabilities of the QPOs. Second, central AGN activities should lead
to false optical QPOs, as well discussed in Vaughan et al. (2016); Sesana et al. (2018); Zhang (2022a,c). Thus, the reported confidence levels for reported optical QPOs through mathematical methods should be carefully re-checked. Currently, there are many public Sky Survey projects conveniently applied to search for long-standing optical QPOs. However, as the shown results in the largest sample of optical QPOs in Graham et al. (2015) and the other reported optical QPOs in individual AGN, the reported optical QPOs have periodicities commonly around 3.5 years (1500 days). Therefore, the Catalina Sky Survey (CSs, Drake et al., 2009) with longer time durations and moderate data quality of light curves is the preferred Sky Survey project for conveniently and systematically searching for optical QPOs with a few years long periodicities as the brilliant works in Graham et al. (2015). Meanwhile, comparing with the CSS project, the other Sky Survey projects have some disadvantages for searching for optical QPOs with ears-long periodicities, the Zwicky Transient Facility (ZTF, Bellm et al., 2019; Dekany et al., 2020) sky survey has light curves with short time durations (only around 4.5 years), the Panoramic Survey Telescope And Rapid Response System (PanSTARRS, Flewelling et al., 2020; Magnier et al., 2020) and the Sloan Digital Sky Survey Stripe82 (SDSS Stripe82, Bramich et al., 2008; Thangiavur et al., 2021) sky survey has light curves with large time steps, the All-Sky Automated Survey for Supernovae (ASAS-SN, Shappee et al., 2014; Kochanek et al., 2017) has light curves with limits for bright galaxies, etc. However, considering the high quality light curves in ZTF sky survey, optical QPOs with shorter periodicities (smaller than or around 1 year) should be preferred to be detected only through ZTF light curves, which is the main objective of the manuscript.
In this manuscript, a new BBH candidate is reported in SDSS J160911.25+175616.22 (=SDSS J1609+1756), a blue quasar at redshift 0.347, due to detected optical QPOs with periodicities about 340 days through more recent ZTF g/r/i-band light curves. Moreover, due to apparent red-shifted shoulders in broad Balmer emission lines, properties of peak separations of broad emission lines can be applied to determine limits of space separation of central BBH system in SDSS J1609+1756. The manuscript is organized as follows. Section 2 presents main results on the long-term optical variabilities of SDSS J1609+1756, and four different methods to detect robust optical QPOs in SDSS J1609+1756. Section 3 gives the main discussions including statistical results to support the optical QPOs not from central intrinsic AGN activities in SDSS J1609+1756, also including discussions on spectroscopic properties of SDSS J1609+1756, and discussions on basic structure information of space separation of central BBH system in SDSS J1609+1756. Section 4 gives final summary and main conclusions. In the manuscript, the cosmological parameters well discussed in Hinshaw et al. (2013) have been adopted as \(H_{0}=70\rm km\ s^{-1}Mpc^{-1}\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{\rm m}=0.3\).
## 2 Optical QPOs in SDSS J1609+1756
### Long-term optical light curves of SDSS J1609+1756
SDSS J1609+1756 is collected as target of the manuscript, due to two main reasons. First, as a candidate of off-nucleus AGN in Ward et al. (2021), SDSS J1609+1756 has peculiar broad Balmer lines with shifted red shoulders which can be well explained by broad line emissions from central two independent BLRs related to a central BBH system, quite similar as what we have recently discussed in Zhang (2021d). Second, after checking long-term variabilities of SDSS J1609+1756, apparent optical QPOs can be detected to support an expected BBH system.
The 4.45 years-long ZTF g/r/i-band light curves of SDSS J1609+1756 are collected and shown in top left panel of Fig 1, with MD-58000 from 203 (Mar., 2018) to 1828 (Jul., 2022). Meanwhile, long-term light curves of SDSS J1609+1756 are also checked in CSS, PanSTARRS and ASAS-SN, and shown in Fig. 2 without apparent variabilities. There are 390 reliable data points included in the CSS light curve, however, as shown in left panel of Fig. 2, more than 92% of the 390 data points are lying between the range of mean magnitude plus/minus corresponding 1RMS scatters, indicating not apparent variabilities in the CSS light curve, probably due to lower quality of light curves. There are 1 reliable data point and 20 reliable data points included in the ASAS-SN V/g-band light curves shown in middle panel of Fig. 2. However, due to the quite large time steps with mean value about 200 days, it is not appropriate to search QPOs in ASAS-SN light curves. Similar as ASAS-SN g-band light curve, there are only 10 data points, 14 data points, 21 data points, 11 data points and 13 data points included in the PanSTARRS g/r/i/z/y-band light curves shown in right panel of Fig. 2, respectively, with large mean time steps about 266 days, 217 days, 184 days, 20 days and 182 days. Therefore, the PanSTARRS light curves are not considered in the manuscript.
Finally, the long-term ZTF light curves are mainly considered and there are no further discussions on variabilities from the other Sky Survey projects.
### Methods to determine optical QPOs in SDSS J1609+1756
Based on the high-quality long-term optical light curves from ZTF, the following four methods are applied to detect optical QPOs in SDSS J1609+1756, similar as what have been done in Penill et al. (2020).
#### 2.2.1 Direct fitting results by sine function
Based on a sine function plus a five-degree polynomial function applied to each ZTF light curve (\(t\) in units of days as time information, \(t_{3}\) as \(t/1000\))
\[\begin{split} LC_{t}\ =\ a+b\ \times t_{3}+c\ \times\ t_{3}^{2}+d\ \times\ t_{3}^{3}+e\ \times\ t_{3}^{4}+f\ \times\ t_{3}^{5}\\ +\ g\ \times\ \sin(\frac{2\pi\ t}{T_{q}}\ +\phi_{0})\end{split} \tag{1}\]
, the ZTF g/r/i-band light curves can be well simultaneously described through the maximum likelihood method combining with MCMC (Markov Chain Monte Carlo) (Foreman-Mackey et al., 2013) technique with accepted well prior uniform distributions of the model parameters. Here, the only objective of applications of the model functions above is to check whether are there sine-like variability patterns (related to probable QPOs) included in the ZTF light curves, not to discuss physical origin of the sine-like variability patterns.
Then, based on the MCMC technique determined posterior distributions of the model parameters, the accepted model parameters and corresponding 1\(\sigma\) uncertainties are listed in Table 1. Here, we do not show posterior distributions of all the model parameters, but Fig 3 shows the MCMC technique determined two-dimensional posterior distributions of \(g\) and periodicity \(\log(T_{q})\) of the ZTF g/r/i-band light curves. Meanwhile, based on the same functions, the Levenberg-Marquardt least-squares minimization technique (the known MPFIT package, Markward, 2009) can lead to similar 1\(\sigma\) uncertainties of the model parameters, estimated through covariance matrix related to the determined model parameters, especially for \(\log(T_{q})\). Therefore, the determined 1\(\sigma\) uncertainties of \(\log(T_{q})\) are reliable enough
Figure 1: Top left panel shows the ZTF g/rfi-band light curves and the best fitting results to the light curves by a sine function plus a five-degree polynomial function. Top right panel shows corresponding phase folded light curves and the best-fitting results by a sine function. Bottom panels show corresponding residuals calculated by light curves minus the best fitting results. In top right (left) panel, solid circles plus error bars in red, in blue and in dark green show the (folded) g-band light curve (plus 0.5 magnitudes), the (folded) \(b\)-band light curve, and the (folded) i-band light curve (minus 0.5 magnitudes), respectively, solid and dashed lines in red, in blue and in dark green show the best fitting results and corresponding F-test technique determined 5\(\sigma\) confidence bands to the (folded) g-band light curve, to the (folded) r-band light curve and to the (folded) i-band light curve, respectively. In top left panel, dashed line in purple, in cyan and in magenta show the determined sine component included in the g-band light curve, in the r-band light curve and in the i-band light curve, respectively.
Figure 2: Left panel shows the CSS light curve. Horizontal red lines show the mean value and corresponding 1RMS scatter bands of the light curve. Middle panel shows the ASAS-SN V-band (only one solid circle in dark green) and g-band (solid circles in blue) light curves. Horizontal red line shows the mean value of the g-band light curve. Right panel shows the PanSTARRS g-band (circles in in dark green), r-band (circles in blue), i-band (circles in purple), z-band (circles in green) and y-band (circles in cyan) light curves. Horizontal dark green line, blue line, purple line, green line and cyan line show the mean value of corresponding g-band, r-band, i-band, z-band, and y-band light curves. In middle and right panel, due to less number of data points, 1RMS scatter bands are not plotted to each light curve.
in SDSS J1609+1756. Moreover, in order to show more clearer sine variability patterns, the determined sine-like component is shown in dashed line in each ZTF light curve in top left panel of Fig. 1.
Based on the determined model parameters, left panels of Fig. 1 show the best fitting results and corresponding residuals (light curve minus the best fitting results) to each ZTF band light curve, leading \(\chi^{2}/dof\) (dof=885 as degree of freedom) to be 5.76. Meanwhile, after removing the five-degree polynomial component in each band light curve, accepted the determined periodicities \(T_{q}\), corresponding phase-folded light curves can also be well described by sine function \(\sin(2\pi~{}ph+\phi_{0})\) with \(ph\) as phase information. The best fitting results and corresponding residuals to the folded ZTF g/ri-band light curves are shown in right panels of Fig. 1, leading to \(\chi^{2}/dof\sim 5.89\).
Based on the results in Fig. 1, there are apparent optical QPOs with periodicities about \(348\pm 2\) days, \(340\pm 2\) days, \(357\pm 4\) days in the ZTF g/ri-band light curves, respectively. The totally similar periodicities in the ZTF g/ri-band light curves provide strong evidence to support the optical QPOs in SDSS J1609+1756. Moreover, considering shorter time duration of the ZTF i-band light curve, a bit difference between the periodicity in i-band light curve and the periodicities in g/r-band light curves can be accepted.
In order to confirm the apparent optical QPOs in ZTF light curves, rather than the model function shown in Equation (1), a N-degree (N=5,..., 30) polynomial function without any restrictions to model parameters is applied to re-describe each ZTF light curve. Fig. 4 shows dependence of re-determined \(\chi^{2}/dof\) by polynomial function on the degree of the polynomial function. It is clear that N\(\geq\)26 can also lead to well accepted descriptions with \(\chi^{2}/dof\sim 4759.44/831=5.73\leq 5.76\) to ZTF light curves of SDSS J1609+1756. Then, similar as what we have recently done in Zhang & Zhao (2022b), through determined different \(\chi^{2}/dof\) for model function (M1) in Equation (1) and for the 26-degree polynomial function (M2) applied, the calculated \(F_{p}\) value is about
\[F_{p}=\frac{\frac{\chi^{2}_{M1}-\chi^{2}_{M2}}{dof_{M1}-dof_{M2}}}{\frac{dof_{ M2}}{\chi^{2}_{M2}/dof_{M2}}}\sim 9.7\times 10^{-5} \tag{2}\]
Based on \(dof_{M1}-dof_{M2}\) and \(dof_{M2}\) as number of dofs of the F distribution numerator and denominator, the expected confidence level is quite smaller than \(10^{-7}\) to support the 26-degree polynomial function. In other words, confidence level quite higher than \(6\sigma\) to support that the model function in Equation (1) is more preferred.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(\log(T_{q})\) & \(\phi_{0}\) \\ \hline \hline g-band & 21.17\(\pm\)0.12 & -12.56\(\pm\)0.87 & 22.77\(\pm\)2.25 & -20.58\(\pm\)2.62 & 10.25\(\pm\)1.41 & -2.15\(\pm\)0.28 & 0.233\(\pm\)0.006 & 2.541\(\pm\)0.002 & 1.16\(\pm\)0.87 \\ \hline r-band & 20.03\(\pm\)0.07 & -8.60\(\pm\)0.51 & 14.96\(\pm\)1.31 & -12.25\(\pm\)1.53 & 5.26\(\pm\)0.83 & -0.96\(\pm\)0.16 & 0.150\(\pm\)0.005 & 2.532\(\pm\)0.002 & 0.85\(\pm\)0.05 \\ \hline i-band & 21.44\(\pm\)0.58 & -25.73\(\pm\)4.99 & 75.75\(\pm\)15.81 & -107.29\(\pm\)23.49 & 72.83\(\pm\)16.42 & -18.79\(\pm\)4.33 & 0.152\(\pm\)0.012 & 2.553\(\pm\)0.004 & 1.90\(\pm\)0.21 \\ \hline \hline \end{tabular}
* Notice: The first column shows which band ZTF light curve is considered. The ninth column shows the determined model parameter periodicity \(\log(T_{q})\) with \(T_{q}\) in units of days.
\end{table}
Table 1: Model parameters of Equation (1) leading to the best fitting results to ZTF light curves
Figure 4: Dependence of \(\chi^{2}/dof\) on degree of applied N-degree of polynomial function to describe ZTF light curves. Horizon red line marks \(\chi^{2}/dof=5.76\) (the value by the best descriptions to ZTF light curves by model function in Equation (1)). Blue character related to each data point marks corresponding value of dof.
Figure 3: The MCMC technique determined two-dimensional posterior distributions in contour of parameter \(g\) and periodicity \(\log(T_{q})\) (\(T_{q}\) in units of days) through the ZTF g-band (left panel) light curve, r-band (middle panel) light curve and i-band (right panel) light curve, respectively. In each panel, number densities related to different colors are shown in color bar, solid circle plus error bars in red show the accepted value and \(1\sigma\) uncertainties of the parameters.
Therefore, based on the direct fitting results by sine function, there are apparent and reliable optical QPOs with periodicities about \(348\pm 2\) days, \(340\pm 2\) days, \(357\pm 4\) days in the ZTF g/rf-band light curves, respectively. And, through the F-test technique, the sine component included in the ZTF light curves of SDSS J1609+1756 is more preferred with confidence level higher than \(6\sigma\).
#### 2.2.2 Results from Generalized Lomb-Scargle (GLS) periodogram
In order to provide further evidence to support the optical QPOs in SDSS J1609+1756, besides the direct fitting results by sine function in Fig. 1, the widely accepted Generalized Lomb-Scargle (GLS) periodogram (Lomb, 1976; Scargle, 1982; Zechmeister & Kurster, 2009; VanderPlas, 2018) (included in the python package of astroMLime_series) is applied to check the periodicities in the observed ZTF g/r-band light curves in SDSS J1609+1756. Here, due to small number of data points and short time duration of the ZTF i-band light curve, the GLS periodogram is not applied to the i-band light curve. Top panel of Fig. 5 shows the GLS power properties. It is clear that there is one periodicity around 330 days with confidence level higher than \(5\sigma\) (the false-alarm probability of 5e-7) determined by the bootstrap method as discussed in Ivezic et al. (2019).
Moreover, in order to determine uncertainties of GLS periodogram determined periodicity, the well-known bootstrap method is applied as follows. Through the observed ZTF g/r-band light curves, more than half of data points are randomly collected to re-build a new light curve. Then, within 20000 rebuild light curves after 20000 loops, the same GLS power properties are applied to determine new periodicities related to the rebuild light curves. Bottom panel of Fig. 5 shows distributions of corresponding 20000 GLS periodogram determined periodicities related to the 20000 rebuild light curves. And through the Gaussian-like distributions, the determined periodicities and corresponding \(1\sigma\) uncertainties are 319\(\pm\)2 days and 344\(\pm\)5 days in the rebuild light curves through the g-band and r-band light curves, respectively, strongly reconfirming the smaller uncertainties of the periodicities determined by the MCMC technique. Moreover, through properties of g-band light curve, the tiny difference in the periodicity determined through different methods could be due to probably large uncertainties in g-band light curves leading the applied polynomial component to be not so appropriate. The GLS periodogram determined periodicities are consistent with results shown in Fig. 1, to confirm the optical QPOs in SDSS J1609+1756.
#### 2.2.3 Results through the Auto-cross Correlation Function
Moreover, similar as what we have recently done on optical QPOs in Zhang (2022a,c), the Auto-cross Correlation Function (ACF) is applied to check optical QPOs in observed ZTF g/r-band light curves of SDSS J1609+1756. Corresponding results are shown in top panel of Fig. 6, totally similar periodicities around 340 days can be confirmed, to support the optical QPOs in SDSS J1609+1756. Here, direct linear interpolation is applied to the ZTF light curves, leading to evenly sampled light curves, and then the IDL procedure djs_correlate.pro (written by David Schlegel, Princeton) is applied to determine the correlation coefficients at different time lags.
Furthermore, the common Monte Carlo method is applied as follows to determine confidence level for ACF results. Accepted the time information of the observed ZTF g/r-band light curves of SDSS J1609+1756, 3.2 million light curves are randomly created by white noise process. For the \(i\)th randomly created light curve of white noise, similar procedure is applied to determine the correlation coefficients, to determine the maximum coefficient \(Coe_{i}\) (i=1,..., \(3.2\times 10^{6}\)) with time lags between 200 days and 500 days. Then, among all the 3.2 million values of \(Coe\), the maximum value of 0.4866 (0.46686) is determined as corresponding value for the \(5\sigma\) confidence level (probability of \(\frac{1}{3.2\times 10^{6}}\)) for the ACF results through the ZTF r-band (g-band) light curve. Therefore, the determined confidence level is higher than \(5\sigma\) for the ACF determined optical QPOs in SDSS J1609+1756.
Here, as well known that \(5\sigma\) corresponds to a probability of \(3\times 10^{-7}\) corresponding to about 1 in 3.2 million, therefore, 3.2 million light curves are created by white noise process, in order to determine \(5\sigma\) confidence level for ACF results (and also for the following WWZ results in the subsection 2.2.4). Furthermore, due to the following two main reasons, applications of red noise time series are not appropriate to determine \(5\sigma\) confidence levels of ACF results (and for the following WWZ results). On the one hand, ACF results for red noise time series can be described by an exponential function depending on intrinsic time intervals and correlation coefficients between adjacent data points. On the other hand, as more recent discussions in Krishnan et al. (2021), there are detections of fake periodic signals in red noise time series. Therefore, rather than red noise time series, white noise time series are preferred to be applied to determine confidence levels of ACF results (and the following WWZ results).
Meanwhile, similar bootstrap method is applied to determine uncertainties of ACF method determined periodicities in SDSS J1609+1756. Through the observed ZTF g/r-band light curves, more
Figure 5: Top panel shows power properties through the Generalized Lomb-Scargle periodogram applied to g/r-band light curves. Solid line in blue and in purple show the results through the g-band and r-band light curve, respectively. From top to bottom, horizontal dashed red lines show the \(5\sigma\), \(4\sigma\) and \(3\sigma\) confidence levels through the bootstrap method (false-alarm probabilities of 5.3e-7, 6.3e-5 and 2.7e-3). Bottom panel shows distributions of GLS periodogram determined peak positions considering 20000 re-build light curves through the ZTF g/r-band light curves. In bottom panel, histogram filled by blue lines and filled by purple lines show corresponding results through the g-band and r-band light curves, respectively, and thick dashed line in the same color shows corresponding Gaussian described results to the distribution.
than half of data points are randomly collected to re-build a new light curve. Then, within 2000 rebuild light curves after 2000 loops, the ACF method is applied to determine new periodicities related to the rebuild light curves. Bottom panel of Fig. 6 shows distributions of corresponding 2000 ACF method determined periodicities related to the 2000 rebuild light curves. However, due to unevenly sampled ZTF light curves and applications linear interpolation, distributions in bottom panel of Fig. 6 are not well Gaussian-like, but standard deviations about 12 days and 10 days of the distributions can be safely accepted as uncertainties of ACF method determined periodicities through ZTF g-band and r-band light curves.
#### 2.2.4 Results through the WWZ method
Moreover, similar as what we have recently done on optical QPOs in Zhang (2022a,c), the WWZ method (Foster, 1996; An et al., 2016; Gupta et al., 2018; Kushwaa et al., 2020; Li et al., 2021) is also applied to check the optical periodicities in the observed ZTF g/r-band light curves of SDSS J1609+1756. Corresponding results on both two dimensional power map properties and time-averaged power properties are shown in top and middle panels of Fig. 7, totally similar periodicities around 340 days can be confirmed, to support the optical QPOs in SDSS J1609+1756. Here, the python code wwz.py written by M. Emre Aydin is applied in the manuscript.
Meanwhile, the similar common Monte Carlo method is applied to determine confidence level for the WWZ determined results. Among the 3.2 million randomly created light curves for white noises, for the \(i\)th randomly created light curve of white noise, maximum value \(Mp_{i}\) of the WWZ method determined time-averaged power spectra can be well determined. Then, among all the 3.2 million values of \(Mp_{i}\) the maximum value of 26.64 (21.68) is determined as corresponding value for the 5\(\sigma\) confidence level for the WWZ method determined time-averaged power properties through the ZTF g-band (r-band) light curve, which are shown in middle panel of Fig. 7. Therefore, the determined confidence level is higher than 5\(\sigma\) for the WWZ method determined QPOs in SDSS J1609+1756.
Furthermore, the similar bootstrap method is applied to determine uncertainties of the WWZ method determined periodicities in SDSS J1609+1756. Through the observed ZTF g/r-band light curves, more than half of data points are randomly collected to re-build a new light curve. Then, within 800 rebuild light curves after 800 loops, the same WWZ method determined time-averaged power properties are applied to measure the new periodicities related to the rebuild light curves. Bottom panel of Fig. 7 shows distributions of corresponding WWZ method determined periodicities related to the 800 rebuild light curves. Through the Gaussian-like distributions, the determined periodicities and corresponding 1\(\sigma\) uncertainties are 308\(\pm\)5 days and 352\(\pm\)10 days in the rebuild light curves through the g-band and r-band light curves, respectively, to re-confirm the reliable optical QPOs in SDSS J1609+1756.
#### 2.2.5 Conclusion on the robustness of the optical QPOs in SDSS J1609+1756
Based on the different methods discussed above to detect reliable optical QPOs, Table 2 lists the necessary information of the periodicities and confidence levels. Therefore, the robust optical QPOs with periodicities around 340 days (0.93 years) in SDSS J1609+1756 can be detected and well confirmed from the 4.45 years-long ZTF light curves (time duration about 4.7 times longer than the detected periodicities) with confidence level higher than 5\(\sigma\), based on the best-fitting results directly by the sine function shown in the left panels of Fig. 1, on the sine-like phase-folded light curve shown in the right panels of Fig. 1, on the results of GLS periodogram shown in Fig. 5, and on properties of ACF coefficients shown in Fig. 6 and on the power properties determined by the WWZ method shown in Fig. 7. The
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & band & \(\log(T_{q})\) & CL \\ \hline \hline \multirow{3}{*}{DF} & g-band & 348\(\pm\)2 & \multirow{3}{*}{\(>6\sigma\)} \\ & r-band & 340\(\pm\)2 & & \\ & i-band & 357\(\pm\)4 & \\ \hline \multirow{3}{*}{GLS} & g-band & 319\(\pm\)2 & \multirow{3}{*}{\(>5\sigma\)} \\ & r-band & 344\(\pm\)5 & \\ \hline \multirow{3}{*}{ACF} & g-band & 327\(\pm\)12 & \multirow{3}{*}{\(>5\sigma\)} \\ & r-band & 309\(\pm\)10 & \\ \hline \multirow{3}{*}{WWZ} & g-band & 308\(\pm\)5 & \\ & r-band & 352\(\pm\)10 & \\ \hline \hline \end{tabular} Notice: The first column shows which method is applied to determine optical QPOs in SDSS J1609+1756, DF means the ‘direct fitting results to ZTF light curves as discussed in subsection 2.2.1. The second column shows which ZTF band light is considered. The third column shows the determined periodicity \(T_{q}\) in units of days, the last column shows the determined corresponding confidence level.
\end{table}
Table 2: Periodicities of optical QPOs by different methods in SDSS J1609+1756
Figure 6: Top panel shows properties of the ACF coefficients. Solid blue line and solid purple line represent the results through the ZTF g-band and r-band light curves, respectively, vertical dashed red lines mark positions with \(T_{q}\sim\pm\)340 days. Horizontal dashed blue line and horizontal purple dashed line show the Monte Carlo method determined 5\(\sigma\) confidence level for the coefficient at time lags around \(\pm\)340 days for the ZTF g-band and r-band light curves, respectively. Bottom panel shows distributions of the bootstrap method determined 2000 periodicities. In bottom panel, histogram fill by blue lines and histogram filled by purple lines show the results through the ZTF g-band and r-band light curve, respectively.
results cab be well applied to guarantee the robustness of the optical QPOs in SDSS J1609+1756.
Based on the well determined reliable and robust optical QPOs in SDSS J1609+1756, we can find that the periodicity around 340 days in SDSS J1609+1756 is so-far the smallest periodicity among the reported long-standing optical QPOs in normal broad line AGN in the literature as described in Introduction. The results are strongly indicating that BBH systems with shorter periodicities could be well detected through the ZTF sky survey project. More interestingly, to detect more optical QPOs related to BBH systems with shorter periodicities through ZTF light curves could provide more stronger BBH candidates for expected background gravitational wave signals at nano-Hz frequencies.
## 3 Main discussions
### Mis-detected optical QPOs related to central intrinsic AGN activities in SDSS J1609+1756?
Due to short time durations of ZTF light curves, it is necessary and interesting to check whether the determined optical QPOs was mis-detected QPOs tightly related to central intrinsic AGN activities of SDSS J1609+1756, although four different mathematical methods applied in the section above can provide robust evidence to support the detected optical QPOs in SDSS J1609+1756. Similar as what we have recently done in Zhang (2022a,c) to check probability of mis-detected QPOs in AGN activities in SDSS J0752 and in SDSS J1321, the following procedure is applied.
As well discussed in Kelly, Bechtold & Siemignowska (2009); Kozlowski et al. (2010); MacLeod et al. (2010); Zu et al. (2013, 2016); Zhang & Feng (2017); Takata, Mukuta & Mizumoto (2018); Moreno et al. (2019); Sheng, Ross & Nicholl (2022), the known Continuous AutoRegressive (CAR) process and/or the improved Damped Random Walk (DRW) process can be applied to describe the fundamental AGN activities (Rees, 1984; Ulrich et al., 1997; Madejski & Sikora, 2016; Baldassare et al., 2020; Burke et al., 2021). Here, based on the DRW process, the public code JAVELIN (Just Another Vehicle for Estimating Lags In Nuclei) (Kozlowski et al., 2010; Zu et al., 2013) is firstly applied to describe the ZTF r-band light curve, with two process parameters of intrinsic characteristic variability amplitude and timescale of \(\sigma\) and \(\tau\). The best descriptions are shown in left panel of Fig. 8. And corresponding MCMC determined two dimensional posterior distributions of \(\sigma\) and \(\tau\) are shown in right panel of Fig. 8, with the determined \(\ln(\tau/days)\sim 4.95\pm 0.35\) (\(\tau\sim 140^{+60}_{-40}\) days) and \(\ln(\sigma/(mag/dyas^{1/2}))\sim-1.45\pm 0.25\). Here, because of the totally same periodicities through the ZTF r-band light curve by different methods in the section above, the r-band rather than the g-band light curve is considered in the section. Certainly, Fig. 8 also shows corresponding results through the g-band light curve.
Then, probability of mis-detected QPOs from DRW process described intrinsic AGN variabilities can be estimated as follows, through applications of the CAR process discussed in Kelly, Bechtold & Siemigninowska (2009):
\[{\rm d}LC_{t}=\frac{-1}{\tau}LC_{t}\,{\rm d}t+\sigma_{*}\sqrt{{\rm d}t}e(t)\, +\,19.21 \tag{3}\]
with \(\epsilon(t)\) as a white noise process with zero mean and variance equal to 1. Here, the value 19.21, the mean value of the ZTF r-band light curve of SDSS J1609+1756, is set to be the mean value of \(LC_{t}\). Then, a series of 100000 simulating light curves [\(t\), \(LC_{1}\)] are created, with \(\tau\) randomly selected from 100 days to 200 days (range of the determined \(\tau\) of SDSS J1609+1756, after considering uncertainties). Then, variance \(\sigma\sigma_{*}^{2}/2\) of CAR process created light curve is set to be 0.168 which is the variance of ZTF r-band light curve of SDSS J1609+1756. And, time information \(t\) are the same as the observational time information of ZTF r-band light curve of SDSS J1609+1756. And similar uncertainties \(LC_{t}\times\frac{LC_{err}}{LC_{t}}\) are simply added to the simulating light curves \(LC_{t}\), with \(LC_{r}\) and \(LC_{err}\) as the ZTF r-band light curve and corresponding uncertainties of SDSS J1609+1756.
Figure 7: Top panel shows the two dimensional power maps determined by the WWZ method with frequency step of 0.0001 and searching periodicities from 100 days to 500 days applied to the ZTF \(\beta\)r-band light curves. In top panel, vertical dashed lines mark WWZ method determined periodicities. Contour filled with bluish colors represent the results through the ZTF g-band light curve, contour with levels shown in reddish colors represent the results through the ZTF r-band light curve. In top regions of top panel, color bars show corresponding number densities for contour levels in different colors. Middle panel shows the WWZ method determined time-averaged power properties. Solid blue line and solid purple line show the results through the ZTF g-band and r-band light curves, respectively, and horizontal dashed blue line and horizontal dashed purple line mark the corresponding 5\(\sigma\) confidence levels. Bottom panel shows the bootstrap method determined periodicity distributions. In bottom panel, histogram filled by blue lines and filled by purple lines show corresponding results through the g-band and r-band light curves, respectively, and thick dashed line in the same color shows corresponding Gaussian described results to the distribution.
Among the 100000 simulated light curves, there are 263 light curves collected with reliable mathematical determined periodicities according to the following four simple criteria. First, the simulated light curve can be well described by equation (1) with corresponding \(\chi^{2}/dof\) smaller than 8 (\(\chi^{2}/dof\sim 5.8\) in Fig. 1). Second, the simulated light curve has apparent GLS periodogram determined peak with corresponding periodicity smaller than 500 days with confidence level higher than \(5\sigma\). Third, the simulated light curve has apparent ACF and WWZ method determined peak with corresponding periodicity smaller than 500 days with confidence level higher than 5\(\sigma\). Fourth, the determined periodicity by equation (1) is consistent with the GLS periodogram determined periodicity (which are similar as the ACF and WWZ method determined periodicities) within 10 times of the determined uncertainties by applications of equation (1). Left panel of Fig. 9 shows properties of the determined \(\chi^{2}/dof\) related to the best fitting results determined by equation (1) and the GLS periodogram determined periodicities (which are similar as the ACF and WWZ method determined periodicities) of the 263 simulated light curves. And right panel of Fig. 9 shows one of the 263 light curves with best fitting results by equation (1).
Therefore, even without any further considerations, it can be confirmed that the probability is only 0.26% (263/100000) (confidence level higher than \(3\sigma\)) to support mis-detected QPOs in CAR process simulated light curves related to AGN activities. Furthermore, accepted uncertainty \(\Delta_{T}=5\) days of optical periodicity in ZTF r-band light curve, if to limit periodicities within range from \(344\pm 5\times\Delta_{T}\) in the simulated light curves, there are only 112 simulated light curves collected, leading to the probability only about 0.11% (112/100000) (confidence level higher than \(3.2\sigma\)) to support mis-detected QPOs in AGN activities. In other words, the confidence level higher than
Figure 8: Left panel shows the JAVELIN code determined best descriptions to the long-term ZTF g-band (solid circles plus error bars in red) and r-band (solid circles plus error bars in blue) light curves of SDSS J1609+1756. Solid line and area filled in blue and in red show the best descriptions and corresponding \(1\sigma\) confidence bands to the g-band light curve and to the r-band light curve, respectively. Right panel shows the MCMC technique determined two-dimensional posterior distributions in contour of \(\ln\left(\tau\right)\) (\(\tau\) in units of days) and \(\ln\left(\sigma\right)\) (\(\sigma\) in units of \(mag/day^{1/2}\)). Contour filled with bluish color represents the results through the r-band light curve, and contour with level in reddish color shows the results through the g-band light curve. In right panel, solid circle plus error bars in blue and in red show the accepted values and \(1\sigma\) uncertainties of \(\ln\left(\tau\right)\) and \(\ln\left(\sigma\right)\) to the r-band light curve and to the g-band light curve, respectively.
Figure 9: Left panel shows properties of \(\chi^{2}/dof\) and determined periodicity of the 263 CAR process simulated light curves with probably mis-detected QPOs. In left panel, horizontal red dashed line marks the position of periodicity 344 days, the periodicity from r-band light curve of SDSS J1609+1756, vertical red dashed line marks the position of \(\chi^{2}/dof\sim 5.76\), the value for r-band light curve of SDSS J1609+1756. Right panel shows an example on probably mis-detected QPOs in the simulating light curves by the CAR process. In right panel, solid dark green circles plus error bars show the simulated light curve, solid and dashed red lines show the best descriptions and corresponding \(5\sigma\) confidence bands to the light curve, based on a sine function plus a five-degree polynomial function. For the shown simulated light curve, the input parameter \(\tau\), the determined periodicity \(T_{p}\) and \(\chi^{2}/dof\) are listed and shown in characters in dark green in top corner in right panel.
3\(\sigma\) to confirm the optical QPOs not from intrinsic AGN activities in SDSS J1609+1756, although short time durations in ZTF light curves, after well considering effects of AGN activities described by CAR process.
### Spectroscopic properties of SDSS J1609+1756
Fig. 10 shows the SDSS spectrum with PLATE-MJD-FIBERID=2200.53875-0526 of SDSS J1609+1756 collected from SDSS DR16 (Ahumada et al., 2021). The apparent red-shifted shoulders, marked in Fig. 10, can be found in broad Balmer emission lines, as shown in Ward et al. (2021). And the shoulders should be more apparently determined after measurements of emission lines in SDSS J1609+1756 by the following emission line fitting procedure.
Multiple Gaussian functions can be applied to well simultaneously measure the emission lines around H\(\beta\) within rest wavelength from 4600A to 5100A and around H\(\alpha\) within rest wavelength from 6150A to 6850A in SDSS J1609+1756, similar as what we have recently done in Zhang (2021a,b,c); Zhang & Zhao (2022b). Three broad and one narrow Gaussian functions are applied to describe broad and narrow components in H\(\alpha\) (H\(\beta\)). And two narrow and two broad Gaussian functions are applied to describe [O iii]\(\lambda 4959,5007\) doublet, for the core components and for the components related to shifted wings. And one Gaussian function is applied to describe He ii\(\lambda 4687\)A emission line. And six Gaussian functions are applied to describe the [O i]\(\lambda 6300,6363\)A, [N ii]\(\lambda 16549,6583\)A and [S ii]\(\lambda 6716,6731\)A doublets. As the following shown best fitting results, it is not necessary to consider components related shifted wings in the [O i], [N ii] and [S ii] doublets. And, a power law function is applied to describe the AGN continuum emissions underneath the emission lines around H\(\beta\) (H\(\alpha\)).
When the model functions are applied, only three restrictions are accepted. First, emission flux of each Gaussian emission component is not smaller than zero. Second, Gaussian components of each doublet have the same redshift. Third, corresponding Gaussian components in broad Balmer line have the same redshift. Then, through the Levenberg-Marquard least-squares minimization technique (the known MPFIT package), best fitting results to emission lines can be determined and shown in Fig. 11 with \(\chi^{2}/dof\sim 1.09\). The measured parameters of each Gaussian component are listed in Table 3. Here, although three Gaussian functions are applied to describe each broad Balmer line, only two Gaussian components have reliable measured parameters which are at least 3times larger than their corresponding uncertainties, indicating two broad Gaussian components are enough to describe each broad Balmer line. Moreover, it is clear that there are red shifted shoulders in each broad Balmer emission line, especially based on the red-shifted broad Gaussian component in each broad Balmer line.
The main objective to discuss emission line properties of broad Balmer lines is that the shoulders can provide information on peak separation of two broad components coming from two BLRs related to an expected central BBH system, which will provide further clues on maximum space separation of the two black holes in the expected BBH system in SDSS J1609+1756. The red-shifted velocity of the shoulder in broad Balmer lines are about \(V_{r}=2200\pm 300\)km/s, based on the red-shift broad Gaussian component in each broad Balmer line. Meanwhile, peak separation \(V_{p}\) in units of km/s related to a BBH system with space separation \(S\) and with BH masses of \(M_{BH1}\) and \(M_{BH2}\) of the two BHs can be simply described as
\[V_{p}\ \sim\ \sqrt{G\ \times\ (M_{BH1}\ +\ M_{BH2})/S}\ \times\ \sin i\ \times\ \sin(\phi) \tag{4}\]
with \(i\) as inclination angle of the orbital plane to line-of-sight and \(\phi\) as orientation angle of orbital phase. Therefore, considering the observed \(V_{r}\ \sim 2200\)km/s \(<V_{p}=V_{r}+V_{b}\) (\(V_{b}\) as blue-shifted velocity of undetected blue-shifted shoulders in broad Balmer lines in SDSS J1609+1756), the maximum space separation \(S\) can be simply determined as
\[S\ <\ \frac{G\ \times\ (M_{BH1}\ +\ M_{BH2})}{V_{p,obs}^{2}} \tag{5}\]
. Once there are clear information of central total BH mass which will be discussed in the next section, the maximum space separation can be well estimated.
Figure 10: SDSS spectrum of SDSS J1609+1756 in rest frame. Vertical dashed red lines mark the shoulders in broad Balmer emission lines.
\begin{table}
\begin{tabular}{l l l l} \hline \hline line & \(\lambda_{0}\) & \(\sigma\) & flux \\ \hline \hline Broad H\(\alpha\) & 6556.51\(\pm\)2.19 & 54.20\(\pm\)1.43 & 1490.49\(\pm\)63.93 \\ & 6612.26\(\pm\)1.79 & 14.03\(\pm\)2.68 & 146.99\(\pm\)37.27 \\ \hline Broad H\(\beta\) & 4856.68\(\pm\)1.62 & 34.21\(\pm\)1.76 & 276.31\(\pm\)17.27 \\ & 4897.98\(\pm\)1.33 & 14.90\(\pm\)2.11 & 55.22\(\pm\)12.05 \\ \hline He ii & 4686.63\(\pm\)0.64 & 3.20\(\pm\)0.64 & 14.36\(\pm\)2.60 \\ \hline Narrow H\(\alpha\) & 6563.32\(\pm\)0.29 & 6.07\(\pm\)0.31 & 312.45\(\pm\)16.95 \\ \hline Narrow H\(\beta\) & 4861.92\(\pm\)0.29 & 4.62\(\pm\)0.32 & 61.32\(\pm\)4.74 \\ \hline
[O iii]\(\lambda 5007\AA\) & 5007.72\(\pm\)0.03 & 3.57\(\pm\)0.05 & 580.08\(\pm\)13.21 \\ & 5005.39\(\pm\)0.59 & 8.65\(\pm\)0.69 & 99.31\(\pm\)12.64 \\ \hline
[O i]\(\lambda 6300\AA\) & 6300.88\(\pm\)0.88 & 5.20\(\pm\)0.84 & 34.90\(\pm\)5.07 \\ \hline
[O i]\(\lambda 46363\AA\) & 6365.12\(\pm\)1.56 & 8.85\(\pm\)1.67 & 38.02\(\pm\)6.48 \\ \hline
[N ii]\(\lambda 46583\AA\) & 6584.72\(\pm\)0.24 & 5.43\(\pm\)0.26 & 279.21\(\pm\)16.53 \\ \hline
[S ii]\(\lambda 46716\AA\) & 6716.13\(\pm\)1.01 & 3.82\(\pm\)0.82 & 51.23\(\pm\)16.54 \\ \hline
[S ii]\(\lambda 46731\AA\) & 6729.44\(\pm\)1.83 & 6.63\(\pm\)1.68 & 71.93\(\pm\)17.82 \\ \hline \hline \end{tabular} Notice: The first column shows which line is measured. The Second, third, fourth columns show the measured Gaussian parameters: center wavelength \(\lambda_{0}\) in units of Å, line width (second moment) \(\sigma\) in units of Å and line flux in units of \(10^{-17}\) erg/s/cm\({}^{2}\).
\end{table}
Table 3: Line parameters
### Basic structure information of the expected BBH system in SDSS J1609+1756
For a broad line AGN, the virialization assumption accepted to broad line emission clouds is the most convenient method to estimate central virial BH mass as discussed in Peterson et al. (2004); Greene & Ho (2005); Vestergaard & Peterson (2006); Shen et al. (2011). However, considering mixed broad Balmer line emissions, it is difficult to determine the two clear components from central two independent BLRs related to central BBH system in SDSS J1609+1756, after checking the following point.
If the determined two broad Gaussian components in each broad Balmer line shown in Fig. 11 were truly related to a central BBH system, the virial BH mass as discussed in Greene & Ho (2005) of each BH in central region can be estimated as
\[M_{BH1} \propto (f_{\alpha,r})^{0.55}(\sigma_{\alpha,r})^{2.06}\] \[M_{BH2} \propto (f_{\alpha,b})^{0.55}(\sigma_{\alpha,b})^{2.06}\]
with \(f_{\alpha,r}\) and \(\sigma_{\alpha,r}\) (\(f_{\alpha,b}\) and \(\sigma_{\alpha,b}\)) as line flux and line width (second moment) of red-shifted (blue-shifted) broad component in broad H\(\alpha\), after considering the more recent empirical R-L relation to estimate BLRs sizes through line luminosity in Bentz et al. (2013). Then, under assumption of a BBH system, shift velocity ratio \(R_{v}\) of red-shifted broad component to blue-shifted broad component in broad H\(\alpha\) can be estimated as
\[R_{v} \sim (\frac{f_{\alpha,b}}{f_{\alpha,r}})^{0.55}(\frac{\sigma_{\alpha, b}}{\sigma_{\alpha,r}})^{2.06} \tag{7}\] \[\sim 58^{+56}_{-25}\]
accepted the measured line widths and line fluxes and uncertainties listed in Table 3.
However, according to measured central wavelengths of the two broad components in broad H\(\alpha\) listed in Table 3, the observed shift velocity ratio \(R_{v,obs}\) is
\[R_{v,obs} \sim \frac{(6612.26\pm 1.79)~{}-~{}6564.61}{6564.61~{}-~{}(6556.51\pm 2.19)}\] \[\sim 5.9^{+2.5}_{-1.4}\]
with 6564.61 in units of A as the theoretical value of central wavelength of broad H\(\alpha\) in rest frame, which is quite different from the \(R_{v}\). Therefore, although there are two broad Gaussian components determined in broad H\(\alpha\), the two components are not appropriate to be applied to determine virial BH masses of central two black holes in the expected BBH system. Similar results can also be found through the two components in broad H\(\beta\).
Therefore, rather than virial BH mass estimated through broad line luminosity and broad line width, BH mass estimated by continuum luminosity as shown in Peterson et al. (2004) is determined as
\[\log(\frac{M_{BH1}}{10^{8}\mathrm{M}_{\odot}}) = -0.12\pm 0.07~{}+~{}(0.79\pm 0.09)~{}\times~{}\log(\frac{L_{1}}{1 0^{4}})\] \[\log(\frac{M_{BH2}}{10^{8}\mathrm{M}_{\odot}}) = -0.12\pm 0.07~{}+~{}(0.79\pm 0.09)~{}\times~{}\log(\frac{L_{2}}{1 0^{4}})\]
with \(L_{1}\) and \(L_{2}\) in units of erg\(/\)s as continuum luminosity at 5100A coming from central two BH accreting systems, then upper limit of central total BH mass \(M_{BH}=M_{BH1}+M_{BH2}\) can be estimated as
\[\frac{M_{BH}}{10^{8}\mathrm{M}_{\odot}}~{}\leq~{}10^{-0.12\pm 0.07}~{}\times~{}( \frac{L_{1}~{}+~{}L_{2}}{10^{44}})^{0.79\pm 0.09} \tag{10}\]
. Then, based on the fitting results shown in right panels of Fig. 11, total continuum luminosity at 5100A in rest frame is \(L_{t}~{}\sim~{}(1.47\pm 0.02)\times 10^{44}\mathrm{erg/s}\). Simply accepted \(L_{t}~{}=~{}L_{1}~{}+~{}L_{2}\), upper limit of central total BH mass is about \((1.03\pm 0.22)\times 10^{8}\mathrm{M}_{\odot}\).
Accepted the estimated upper limit of total BH mass, the upper limit of space separation of the expected central BBH system in
Figure 11: Left panels show the best-fitting results (top panel) and corresponding residuals (bottom panel) (line spectrum minus the best fitting results and then divided by uncertainties of the line spectrum) to the emission lines around H\(\alpha\). Right panels show the results to the emission lines around H\(\beta\). In each top panel, solid dark green line shows the SDSS spectrum, solid red line shows the best fitting results, dashed blue lines show the determined two broad Gaussian components in broad Balmer line, solid blue lines shows the determined narrow Gaussian component in narrow Balmer line, dashed red line shows the determined power law continuum emissions. In top right panel, solid cyan and solid dark red lines show the determined core components and components related to wings in [O iii] doublet, solid pink line shows the determined He ii line. In top left panel, solid lines in cyan, in pink and in purple show the determined [N ii], [O i] and [S ii] doublets. In each bottom panel, horizontal dashed red lines shows residuals=\(\pm\)1. In each top panel, vertical dashed red line marks the position related to the apparent shoulder in broad Balmer line. In order to show clearer determined Gaussian components, the top panels are shown with y-axis in logarithmic coordinate.
SDSS J1609+1756 is about
\[S < \frac{G\ \times\ M_{BH}}{V_{p,obs}^{2}}\] \[\sim 107\pm 60light-days\]
. Meanwhile, considering optical periodicity \(\sim\)340 days, as discussed in Eracleous et al. (2012), the space separation of the central BBH system can be estimated as
\[S_{BBH} \sim 0.432\frac{M_{BH}}{10^{8}\mathrm{M}_{\odot}}(\frac{T_{q}/year}{265 2\frac{M_{BH}}{10^{9}\mathrm{M}_{\odot}}})^{2/3} \tag{12}\] \[\sim 2.6\pm 0.3light-days\]
with accepted total BH mass (\(1.03\pm 0.22\)) \(\times\)\(10^{8}\mathrm{M}_{\odot}\). The space separation \(S_{BBH}\) determined by periodicity is well below the upper limit of space separation determined by peak separation \(V_{p}\), not leading to clues against assumptions of the central BBH system in SDSS J1609+1756.
Before end of the subsection, one point should be noted. As simply discussed above, through optical periodicity \(\sim\)340 days, estimated space separation of the central BBH system is about 2.6 light-days in SDSS J1609+1756. Meanwhile, continuum luminosity about \(10^{44}\mathrm{erg/s}\) in SDSS J1609+1756 can lead BLRs sizes to be about 36 light-days through the R-L relation in Bentz et al. (2013), quite larger than \(S_{BBH}\sim 2.6\mathrm{light}\)-days. Therefore, there should be few effects of dynamics of central BBH system on emission clouds of probably mixed BLRs in SDSS J1609+1756, or only apparent effects on emission clouds in inner regions of central mixed BLRs of SDSS J1609+1756. The results can provide further clues to support that it is not appropriate to estimate central BH mass by properties of broad emission lines in SDSS J1609+1756 as well discussed above, and also to support that the shifted velocity ratio discussed above should be not totally confirmed to be related to dynamics of central BBH system. Multi-epoch monitoring of variabilities of broad emission lines should provide further and accurate properties of dynamic structures of central expected BBH system in SDSS J1609+1756 in the near future.
### Further discussions on the origin of optical QPOs in SDSS J1609+1756
Meanwhile, besides the expected central BBH system, precessions of emission regions with probable hot spots for the optical continuum emissions can also be applied to describe the detected optical QPOs in SDSS J1609+1756. As discussed in Eracleous et al. (1995) and in Storchi-Bergmann et al. (2003), the expected disk precession period can be estimated as
\[T_{\mathrm{pre}}\sim 1040M_{\mathrm{g}}R_{3}^{2.5}yr \tag{13}\]
, with \(R_{3}\) as distance of optical emission regions to central BH in units of 1000 Schwarzschild radii (\(R_{g}\)) and \(M_{\mathrm{g}}\) as the BH mass in units of \(10^{8}\mathrm{M}_{\odot}\). Considering optical periodicity about \(T_{\mathrm{pre}}\sim 340\) days and BH mass about \((1.03\pm 0.22)\times 10^{8}\mathrm{M}_{\odot}\) above estimated through the continuum luminosity, the expected \(R_{3}\) could be around 0.06 in SDSS J1609+1756.
However, based on the discussed distance of NUV emission regions to central BHs in Morgan et al. (2010) through the microlensing variability properties of eleven gravitationally lensed quasars, the NUV 2500A continuum emission regions in SDSS J1609+1756 have distance from central BH as
\[\log\frac{R_{2500}}{cm}=15.78+0.80\log(\frac{M_{BH}}{10^{9}M_{\odot}}) \tag{14}\]
leading size of NUV emission regions to be about \(60R_{g}\). The estimated NUV emission regions have similar distances as the optical continuum emission regions in SDSS J1609+1756 under the disk precession assumption, strongly indicating that the disk precessions of emission regions are not preferred to be applied to explain the detected optical QPOs in SDSS J1609+1756.
Moreover, long-term QPOs can be detected in blazars due to jet precessions as discussed in Sandrinelli et al. (2018); Bhatta (2019); Otero-Santos et al. (2020). However, SDSS J1609+1756 is covered in Faint Images of the Radio Sky at Twenty-cm (Becker, White & Helfand, 1995; Helfand et al., 2015), but no apparent radio emissions. Therefore, jet precessions can be well ruled out to explain the optical QPOs in SDSS J1609+1756.
Furthermore, it is interesting to discuss whether the known relativistic Lense-Thirring precession (Cui, Zhang & Chen, 1998; Wagoner, 2012) can be applied to explain the detected optical QPOs in SDSS J1609+1756. As well discussed in Cui, Zhang & Chen (1998), observed periodicity related to the Lense-Thirring precession is about
\[P_{LT,obs}\ \sim(1+z)\times\frac{M_{BH}R_{z}^{3}}{6.45\times 10^{4}\ \times\ |a_{*}|} \tag{15}\]
with \(z\) as redshift, \(M_{BH}\) as BH mass in units of \(\mathrm{M}_{\odot}\), \(R_{e}\) in units of \(R_{g}\) as distance of emission regions in central accretion disk to central BH and \(a_{*}\) (between \(\pm 1\)) as dimensionless BH spin parameter. If accepted central BH mass as \((1.03\pm 0.22)\times 10^{8}\mathrm{M}_{\odot}\), minimum value of \(R_{e}\) as \(60R_{g}\) (the estimated value for NUV emission regions above) for optical emission regions and maximum value \(|a_{*}|=1\), the minimum \(P_{LT,obs}\) can be estimated to be about 2500days, about 13 times larger than the detected optical periodicity about 340days in SDSS J1609+1756. Therefore, the detected optical QPOs in SDSS J1609+1756 are not related to relativistic Lense-Thirring precessions.
Before ending of the manuscript, one point is noted. The SDSS J1609+1756 is collected from the candidates of off-nucleus AGN reported in the literature. However, it is not unclear whether are there tight connections between optical QPOs and off-nucleus AGN, studying on a sample of off-nucleus AGN with apparent optical QPOs could provide further clues on probable intrinsic connections in the near future.
## 4 Summary and conclusions
The final summary and main conclusions are as follows.
* The 4.45 years long-term ZTF g/r/i-band light curves can be well described by a sine function with periodicities about 340 days (about 0.9 years) with uncertainties about 4-5 days in SDSS J1609+1756, which can be further confirmed by the corresponding sine-like phase folded light curve with accepted periodicities, indicating apparent optical QPOs in SDSS J1609+1756.
* Confidence level higher than 5\(\sigma\) can be confirmed to support the optical QPOs in SDSS J1609+1756 through applications of the Generalized Lomb-Scargle periodogram. Moreover, bootstrap method can be applied to re-determine small uncertainties about 5 days of the periodicities.
* The reliable optical QPOs with periodicities \(\sim\)340 days with confidence level higher than 5\(\sigma\) in SDSS J1609+1756 can also be confirmed by properties of ACF and WWZ methods, through the ZTF g/r-band light curves.
* Robustness of the optical QPOs in SDSS J1609+1756 can be confirmed by the four different methods leading to totally similar periodicities with confidence level higher than 5\(\sigma\).
* Based on intrinsic AGN variabilities traced by the CAR process, confidence level higher than 3\(\sigma\) can be well determined to support the detect optical QPOs in SDSS J1609+1756 + are not from intrinsic AGN activities, through a sample of 100000 CAR process simulated light curves. Therefore, the optical QPOs in SDSS J1609+1756 + are more confident and robust, leading to an expected central BBH system in SDSS J1609+1756.
* Although each broad Balmer emission line can be described by two Gaussian components, shifted velocity ratio determined through virial BH mass properties are totally different from the observed shifted velocity ratio through the measured central wavelengths of the two broad Gaussian components, indicating that applications of line parameters of the two broad Gaussian components are not preferred to estimate virial BH masses of central BHs, under the assumptions of an expected central BBH system in SDSS J1609+1756.
* based on measured continuum luminosity at 5100A in SDSS J1609+1756, central total BH mass can be estimated as \((1.03\pm 0.22)\times 10^{8}\)M\({}_{\odot}\). Then, based on the apparent shoulders in broad Balmer emission lines, upper limit of space separation of the expected central BBH system can be estimated as \((107\pm 60)\) light-days in SDSS J1609+1756. And based on the optical periodicities, the space separation of the central BBH system can be estimated as \((2.6\pm 0.3)\) light-days in SDSS J1609+1756, well below the estimated upper limit of space separation.
* Based on the estimated size (distance to central BH) about 60R\({}_{\rm G}\) of the NUV emission regions similar to the disk precession expected size about 60R\({}_{\rm G}\) of the optical emission regions, the disk precessions can be not preferred to explain the detected optical QPOs in SDSS J1609+1756.
* There are no apparent radio emissions in SDSS J1609+1756, strongly supporting that the jet precessions can be totally ruled out to explain the detected optical QPOs in SDSS J1609+1756.
## Acknowledgements
Zhang gratefully acknowledge the anonymous referee for giving us constructive comments and suggestions to greatly improve our paper. Zhang gratefully acknowledges the kind grant support from NSFC-12173020 and NSFC-12373014. This paper has made use of the data from the SDSS projects, [http://www.sdss3.org/](http://www.sdss3.org/), managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration. This paper has made use of the data from the ZTF [https://www.ztf.caltech.edu](https://www.ztf.caltech.edu). The paper has made use of the public JAVELIN code ([http://www.astronomy.ohio-state.edu/~yingzu/codes.html/](http://www.astronomy.ohio-state.edu/~yingzu/codes.html/)) and the MPFIT package [https://pages.physics.wisc.edu/~craigm/andcmee](https://pages.physics.wisc.edu/~craigm/andcmee) package[https://emcee.readthedocs.io/en/stable/](https://emcee.readthedocs.io/en/stable/) and the wwwz.py code [http://github.com/eaydin](http://github.com/eaydin) written by M. Enre Aydin. This research has made use of the NASA/IPAC Extragalactic Database (NED, [http://ned.ipac.caltech.edu](http://ned.ipac.caltech.edu)) which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author ([email protected]).
|
2309.10637 | Revival of superconductivity in a one-dimensional dimerized diamond
lattice | We study an s-wave superconductivity in a one-dimensional dimerized diamond
lattice in the presence of spin-orbit coupling and Zeeman field. The considered
diamond lattice, comprising of three sublattices per unitcell and having flat
band, has two dimerization patterns; the intra unitcell hoppings have the same
(opposite) dimerization pattern as the corresponding inter unitcell hoppings,
namely, neighboring (facing) dimerization. Using the mean-field theory, we
calculate the superconducting order parameter self-consistently and examine the
stability of the superconducting phase against the spin-orbit coupling, and
Zeeman splitting, dimerization, and temperature. We find that the spin-orbit
coupling or Zeeman splitting individually has a detrimental effect on the
superconductivity, mostly for the facing dimerization. But their mutual effect
revives the superconductivity at charge neutrality point for the facing
dimerization. | Sanaz Shahbazi, Mir Vahid Hosseini | 2023-09-19T14:20:35Z | http://arxiv.org/abs/2309.10637v1 | # Revival of superconductivity in a one-dimensional dimerized diamond lattice
###### Abstract
We study an s-wave superconductivity in a one-dimensional dimerized diamond lattice in the presence of spin-orbit coupling and Zeeman field. The considered diamond lattice, comprising of three sublattices per unitcell and having flat band, has two dimerization patterns; the intra unitcell hoppings have the same (opposite) dimerization pattern as the corresponding inter unitcell hoppings, namely, neighboring (facing) dimerization. Using the mean-field theory, we calculate the superconducting order parameter self-consistently and examine the stability of the superconducting phase against the spin-orbit coupling, and Zeeman splitting, dimerization, and temperature. We find that the spin-orbit coupling or Zeeman splitting individually has a detrimental effect on the superconductivity, mostly for the facing dimerization. But their mutual effect revives the superconductivity at charge neutrality point for the facing dimerization.
## I Introduction
Superconductivity is an amazing quantum phenomenon in macroscopic scales in which electrons at the Fermi level become unstable against attractive interactions mediated by bosonic fields [1]. This instability gives rise to the formation of the so-called Cooper pairs predicted by Bardeen, Cooper and, Schrieffer and known as the BCS theory [2]. The search for superconducting states has attracted much interest recently, developing this field to non-BCS superconductivity [3; 4; 5] with unconventional pairing symmetries [6; 7]. In the usual Cooper pairing, owing to the large Fermi surface, the lattice structure and, to some extent, the dimensions of host materials have less effects in establishing superconductivity [8]. However, the formation of exotic forms of superconductivity has been proposed theoretically and realized experimentally in new states of matters [9; 10; 11] with unusual lattice structure in low dimensional systems [12; 13], particularly, in one-dimensional (1D) systems [14; 15].
Furthermore, superconductivity can be engineered by the spin-orbit interaction [16] and/or the Zeeman field [17; 18]. Spin-orbit interaction that couples the momentum of an electron to its spin [19], has a significant effect on spintronics [20; 21; 22; 23]. This coupling is a key gradient in the emergence of nontrivial phases [24]. Spin-orbit coupling with an external origin is the Rashba spin-orbit interaction [25], which can be created by applying an electric field perpendicular to the plane of materials through breaking inversion symmetry. Rashba spin-orbit coupling splits spin states into chiral states leading to several physical phenomena including quantum spin-Hall effect, spin transistor, and chiral magnonics [26]. Chiral symmetric systems [27] such as Rashba nanowire systems [28] and Kitaev chain [29; 30] are needed to study topological superconductor. Rashba nanowire systems can host Majorana fermions [31; 32]. Also, the Zeeman field splits spin states into spin-polarized states causing the pair breaking for the s-wave superconductivity [33] and realizing the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state [34].
On the other hand, the lattice structure of a system along with its distortions and dimensions can be encoded in the quantum states of the band structure [35]. As such, the physical properties of the system, including superconductivity as well as the dynamics of carriers, governed by the band structure, can be affected by the lattice structure. There are some 2D bipartite lattices with specific geometries, such as Dice, Kagome, and Lieb lattices [36], having internal symmetries, where the rim sublattices are connected indirectly through hub sublattices. In these lattices, an extra non-dispersive band, i.e., flat band, will be emerged in contrast to the usual dispersive conduction and valence bands. Such flat-band systems can be engineered by implementing dimerization [37]. There are several 1D models, e.g., the 1D diamond lattice, exhibiting flat band in their band structure [38; 39; 40] that also have been designed experimentally [41].
Because of the flat bands, highly correlated phases, e.g., superconductivity, would be established in flat-band systems. Superconductivity in 3D and 2D systems supporting flat [42; 43; 44; 45; 46; 47; 48; 49; 50; 51] or partially flat [52; 53; 54; 55; 56; 57; 58; 59] band has been studied extensively, with intrinsic [60] and extrinsic [61] origins. The pairings of fermions [62] and Cooper pairs [63; 64] on a 1D diamond chains embedded in a magnetic field have been studied. Also, the possibility of high-T\({}_{C}\) superconductivity has been investigated on a cross-linked ladder [65]. A considerable binding energy for Cooper pairs has been obtained slightly below 1/3-filling in repulsive interacting fermions on the diamond lattice [66]. Also, nontrivial phases have been revealed in interacting bosons within a Bose-Hubbard model on a cross-linked ladder with \(\pi\) flux [67; 68]. It has been shown that superconductivity can be dominated over charge order by adding an attractive component on the 1D Creutz ladder with repulsive interactions between spinless fermions [69]. But exploring the superconductivity in flat-band systems engineered by Zeeman field [70], spin-orbit coupling, and lattice dimerizations [71] deserves to be investigated further particularly, in 1D systems.
In this paper, we consider a 1D spin-orbit-coupled diamond lattice with lattice dimerization subjected to the
Zeeman field in the presence of an s-wave superconductivity. In the normal state, we find that although the spin-orbit coupling, the Zeeman field, or the dimerization cannot individually affect on the dispersion-less property of the flat band, but their combined effect changes some dispersion-less states at the flat band into nearly dispersive ones. The made dispersion in the flat band depends on the dimerization configuration. In the superconducting state, interestingly, we reveal that although the spin-orbit coupling, the Zeeman field, or the dimerization individually can have detrimental effect on the superconductivity, but their combination revives the superconductivity for a certain dimerization pattern.
The paper is organized as follows. In Sec. II, we present the Hamiltonian of the system and discuss its band structure. We incorporate an attractive interaction for establishing superconductivity and derive gap equation using the mean-field formalism in Sec. III. Section IV presents the obtained numerical results. Finally, Sec. V is devoted to summarizing and concluding remarks.
## II Model and theory
We consider a 1D diamond lattice along the x axis, as shown in Fig. 1, containing three distinct sublattices (namely, \(A\), \(B\), and \(C\)) per unitell in the presence of the spin-orbit coupling and the Zeeman field. The lattice is also dimerized in two different ways [72]: (i) the intra and inter unitcell hoppings \(B-A\) (or \(B-C\)) have the same dimerization, i.e., the neighboring dimerization [see Fig. 1(a)], and (ii) the intra and inter unitcell hoppings \(B-A\) (or \(B-C\)) have the opposite dimerization, i.e., the facing dimerization [see Fig. 1(b)]. The total Hamiltonian for the system including the Hamiltonians of lattice, \(H_{K}\), the spin-orbit coupling, \(H_{SO}\), and the Zeeman field, \(H_{Z}\), is
\[H_{0}=H_{K}+H_{SO}+H_{Z}, \tag{1}\]
with
\[H_{K}=\sum_{i=1}^{N}\sum_{\sigma}(t_{1}c_{i,1,\sigma}^{\dagger} +t_{2}c_{i,3,\sigma}^{\dagger})c_{i,2,\sigma}\] \[+\sum_{i=1}^{N-1}\sum_{\sigma}(t_{1}^{\prime}c_{i,1,\sigma}^{ \dagger}+t_{2}^{\prime}c_{i,3,\sigma}^{\dagger})c_{i+1,2,\sigma}+H.c.\] \[+\sum_{i=1}^{N}\sum_{m=1}^{3}\sum_{\sigma}\mu_{m}c_{i,m,\sigma}^{ \dagger}c_{i,m,\sigma}, \tag{2}\]
\[H_{SO}= -i\lambda\sum_{i=1}^{N}\sum_{\sigma,\sigma^{\prime}}[c_{i,1, \sigma}^{\dagger}(\vec{\tau}\times\hat{d}_{1})_{\sigma\sigma^{\prime}}+c_{i,3,\sigma}^{\dagger}(\vec{\tau}\times\hat{d}_{2})_{\sigma\sigma^{\prime}}]c_{i,2, \sigma^{\prime}}\] \[-i\lambda\sum_{i=1}^{N-1}\sum_{\sigma,\sigma^{\prime}}[c_{i,1, \sigma}^{\dagger}(\vec{\tau}\times\hat{d}_{3})_{\sigma\sigma^{\prime}}+c_{i,3,\sigma}^{\dagger}(\vec{\tau}\times\hat{d}_{4})_{\sigma\sigma^{\prime}}]c_{i+ 1,2,\sigma^{\prime}}+H. \tag{3}\]
\[H_{Z}=-h\sum_{i=1}^{N}\sum_{m=1}^{3}\sum_{\sigma}\sigma c_{i,m,\sigma}^{ \dagger}c_{i,m,\sigma}, \tag{4}\]
where \(c_{i,m,\sigma}^{(\dagger)}\) is the annihilation (creation) operator for an electron on the sublattices \(m=1,2,3\) (\(A\), \(B\), and \(C\)) at the \(i\)th unitcell with spin \(\sigma=(\uparrow or\downarrow)\). \(t_{1}^{(\prime)}\) and \(t_{2}^{(\prime)}\) are the intra (inter) unitcell hoppings of upper and lower bonds, respectively. For the neighboring dimerization \(t_{1}=t_{1}^{\prime}=t(1+\delta t)\) and \(t_{2}=t_{2}^{\prime}=t(1-\delta t)\) [see Fig. 1(a)] and for the facing dimerization \(t_{1}=t_{2}^{\prime}=t(1+\delta t)\) and \(t_{2}=t_{1}^{\prime}=t(1-\delta t)\) [see Fig. 1(b)] with \(t\) and \(\delta t\) being the strengths of the hopping and the dimerization, respectively. \(\mu_{m}\) is the chemical potential and the symbol H.c. denotes the Hermitian conjugate of the previous operator. \(\lambda\) and \(h\) are the spin-orbit coupling and the Zeeman field strengths, respectively. \(\vec{\tau}\) is the Pauli vector. Also, \(d_{j}\)'s (\(j=1,2,3,4\)) are the unit vectors along the intra (\(\vec{\delta}_{1}\) and \(\vec{\delta}_{2}\)) and inter (\(\vec{\delta}_{3}\) and \(\vec{\delta}_{4}\)) lattice vectors that are given by
\[\vec{\delta}_{1}=(\frac{\sqrt{2}}{2}a,\frac{\sqrt{2}}{2}a),\quad \vec{\delta}_{2}=(\frac{\sqrt{2}}{2}a,-\frac{\sqrt{2}}{2}a),\] \[\vec{\delta}_{3}=(-\frac{\sqrt{2}}{2}a,\frac{\sqrt{2}}{2}a),\quad \vec{\delta}_{4}=(-\frac{\sqrt{2}}{2}a,-\frac{\sqrt{2}}{2}a), \tag{5}\]
with \(a\) is the distance between two adjacent lattice points. We choose \(t\) and \(a\) as the energy unit and the length unit,
Figure 1: (Color online) Two dimerized configurations of 1D diamond lattice: (a) Neighboring dimerization: The intra and inter unitcell hoppings \(B-A\) (or \(B-C\)) are the same. (b) Facing dimerization: The intra and inter unitcell hoppings \(B-A\) (or \(B-C\)) are the opposite. The dashed box indicates the unitcell.
respectively. In the following, to focus on the role of flat bands, we set \(\mu_{(1,2,3)}=0\).
Since the 1D system is along the x axis, the Bloch wave vector \(\mathbf{k}=(k,0)\) is a good quantum number under periodic boundary conditions. Performing Fourier transformation on the basis of \(c_{j,m,\sigma}=\frac{1}{\sqrt{N}}\sum_{k}e^{i\mathbf{k}\cdot\mathbf{r}_{j}}c_{k,m,\sigma}\) and \(c_{j,m,\sigma}^{\dagger}=\frac{1}{\sqrt{N}}\sum_{k}e^{-i\mathbf{k}\cdot \mathbf{r}_{j}}c_{k,m,\sigma}^{\dagger}\), the Hamiltonian \(H_{0}\), Eq. (1), can be written as
\[H_{0}=\sum_{k}\psi_{k}^{\dagger}h_{0}(k)\psi_{k}, \tag{6}\]
where \(\psi_{k}^{\dagger}=(c_{k,1,\uparrow},c_{k,2,\uparrow},c_{k,3,\uparrow},c_{k,1,\downarrow},c_{k,2,\downarrow},c_{k,3,\downarrow})^{\dagger}\) and
\[h_{0}(k)=\begin{pmatrix}h_{K}(k)&h_{SO}(k)\\ h_{SO}(k)^{\dagger}&h_{K}(k)\end{pmatrix}+h_{Z}. \tag{7}\]
Here, we have defined the momentum space Hamiltonian of the diamond lattice as
\[h_{K}(k)=\begin{pmatrix}\mu_{1}&s(k)&0\\ s(k)^{*}&\mu_{2}&g(k)\\ 0&g(k)^{*}&\mu_{3}\end{pmatrix}, \tag{8}\]
where
\[s(k) =t_{1}\exp\left(-i\frac{\sqrt{2}}{2}ka\right)+t_{1}^{\prime}\exp \left(i\frac{\sqrt{2}}{2}ka\right),\] \[g(k) =t_{2}\exp\left(i\frac{\sqrt{2}}{2}ka\right)+t_{2}^{\prime}\exp \left(-i\frac{\sqrt{2}}{2}ka\right),\]
and the momentum space Hamiltonian of the spin-orbit coupling as
\[h_{SO}(k)=\begin{pmatrix}0&\lambda_{+}(k)&0\\ \lambda_{+}(k)^{*}&0&\lambda_{-}(k)\\ 0&\lambda_{-}(k)^{*}&0\end{pmatrix}, \tag{9}\]
where
\[\lambda_{\alpha}(k)=i\sqrt{2}a\lambda\left[\cos\left(\frac{\sqrt{2}}{2}ka \right)+\alpha\sin\left(\frac{\sqrt{2}}{2}ka\right)\right], \tag{10}\]
with \(\alpha=\pm\). Also, the momentum space Hamiltonian of the Zeeman field takes the form
\[h_{Z}=hDiag(-1,-1,-1,1,1,1), \tag{11}\]
where \(Diag(x)\) creates a diagonal matrix.
Although Hamiltonian (7) is not diagonalizable analytically, but one can obtain analytical spectra for specific cases. For \(\lambda=0\) and \(h=0\), diagonalizing the Hamiltonian (8), yields the eigenvalues of the diamond chain as,
\[\epsilon(k)=0,\pm\sqrt{\eta+\xi}, \tag{12}\]
with
\[\eta=t_{1}^{2}+t_{2}^{2}+\left(t_{1}^{\prime}\right)^{2}+\left(t_{2}^{\prime} \right)^{2},\quad\xi=2\cos(\sqrt{2}ka)(t_{1}t_{1}^{\prime}+t_{2}t_{2}^{\prime}).\]
Explicitly, one can see that the diamond lattice has three bands; two dispersive bands and one flat band at zero energy. For the neighboring dimerization, i.e., \(t_{1}=t_{1}^{\prime}=t(1+\delta t)\) and \(t_{2}=t_{2}^{\prime}=t(1-\delta t)\) [see Fig. 1(a)], the eigenvalues (12) reduce as
\[\epsilon(k)=0,\pm 2\cos(\frac{\sqrt{2}}{2}ka)\sqrt{t_{1}^{2}+t_{2}^{2}}, \tag{13}\]
while for the facing pattern, i.e., \(t_{1}=t_{2}^{\prime}=t(1+\delta t)\) and \(t_{2}=t_{1}^{\prime}=t(1-\delta t)\) [see Fig. 1(b)], we arrive at,
\[\epsilon(k)=0,\pm\sqrt{2[t_{1}^{2}+t_{2}^{2}+2t_{1}t_{2}\cos(\sqrt{2}ka)]}. \tag{14}\]
For the non-dimerized case, i.e., \(\delta t=0\), Eq. (12) can be rewritten as,
\[\epsilon(k)=0,\pm 2t\sqrt{2}\cos(\frac{\sqrt{2}ka}{2}). \tag{15}\]
Note, for the neighboring dimerization [Eq. 13] and non-dimerization [Eq. 15] cases, the spectrum is gapless and the dispersive bands are similar to Dirac band touching at the Brillouin zone boundaries. While the dimerization opens a gap between the two dispersive bands and the flat band in the facing dimerization case [Eq. 14]. For the neighboring dimerization, the system has chiral symmetry and can reveal topological phase transition depending on the dimerization values. While, in the facing dimerization case, the system has sublattice symmetry with non-topological properties. However, this system with such dimerization pattern can be turned into topological one in the presence of chiral-symmetry breaking adiabatic pumping [72].
The full band structure of the system can be evaluated numerically. In Fig. 2, the band structure versus \(k\) is depicted for different cases. The first, the second, and the third rows are for the no dimerization, the neighboring dimerization, and the facing dimerization patterns, respectively. In the first column, the band structure is calculated in the absence of both \(\lambda\) and \(h\). The second (third) column is for \(\lambda\neq 0\) and \(h=0\) (\(\lambda=0\) and \(h\neq 0\)). The forth column is calculated in the presence of both \(\lambda\) and \(h\).
From the first column, [see Figs. 2(a), 2(b), and 2(c)], one can see that the no dimerization and the neighboring dimerization have the same gapless band structure including two dispersive bands and one flat band. In these cases, the diamond lattice has Dirac-like bands touching at the 1D Brillouin zone boundaries. While, the facing dimerization opens a gap between the two dispersive bands and the flat band lifting the degeneracy of the Dirac point.
As shown in the second column, the spin-orbit coupling splits the dispersive bands into chiral bands and, at the same time, opens a gap between the dispersive and non-dispersive bands without affecting on the flat band for all the three dimerization patterns [see Figs. 2(d), 2(e), and 2(f)]. The band structures of the three patterns look
similar to each other, while the gap of facing dimerization is larger than that of the other two band structures.
As can be seen in the third column, again the band structures of the no dimerization and the neighboring dimerization are the same. In these two configurations, the Zeeman field splits the spin states except at some states close to the Brillouin zone boundaries. In contrast, for the facing dimerization, the Zeeman field lifts the spin degeneracy completely and gaps out the spin states [see Figs. 2(g), 2(h), and 2(i)].
The combined effect of the spin-orbit coupling and the Zeeman field, as depicted in the forth column, results in opening a partial gap in the dispersive band and causing the flat band to acquire dispersion depending on the dimerization patterns [see Figs. 2(j), 2(k), and 2(l)]. Moreover, in the facing dimerization, compared to the other two patterns, the dispersion of the middle bands is smaller and there are more available states near the Fermi energy. It is worthwhile noting that in bipartite lattices, band crossing points and the flatness of the flat band are protected by topological mechanism [73]. In the diamond lattice, the zero-energy states result from the absence of direct connection between \(A\) and \(C\) sublattices. This implies that the corresponding wave function is localized at \(A\) and \(C\) sublattices with opposite amplitudes and localized at B sublattices with zero amplitude. So, the removed band touching points and the distortion of the flat band can be attributed to the perturbations, i.e., the spin-orbit coupling and the Zeeman field, that do not respect the underlying topology [73].
## III Superconductivity
Now in this section, we incorporate an s-wave superconductivity to the 1D diamond chain by including the attractive on-site interaction,
\[H_{int}=-U\sum_{i}\sum_{m=1}^{3}[c^{\dagger}_{i,m,\uparrow}c_{i,m,\uparrow}c ^{\dagger}_{i,m,\downarrow}c_{i,m\downarrow}], \tag{16}\]
where \(U>0\) denotes the on-site attractive pairing interaction. In the present work, we assume the absence of attraction in the spin-triplet channel. Using the mean-field approximation and taking Fourier transform, Eq. (16) can be recast into [74; 8]
\[H_{int}=\sum_{k}\sum_{m=1}^{3}[\Delta_{k}c^{\dagger}_{k,m,\uparrow}c^{ \dagger}_{k,m,\downarrow}+\Delta^{*}_{k}c_{k,m,\downarrow}c_{k,m,\uparrow}], \tag{17}\]
where
\[\Delta_{k}=-\frac{U}{3}\sum_{m=1}^{3}\langle c_{k,m,\downarrow}c_{k,m,\uparrow }\rangle, \tag{18}\]
is the mean-field superconducting order parameter. We assume that the correlation functions \(\langle c_{k,m,\downarrow}c_{k,m,\uparrow}\rangle\) are the same for all three sublattices \(m=1,2,3\)[75; 76; 74]. Also, in the s-wave pairing \(\Delta_{k}=\Delta^{*}=\Delta\).
Adding Eq. (17) to Eq. (6), gives the total Hamiltonian \(H=H_{0}+H_{int}\) in the momentum space as,
\[H=\sum_{k}\Psi^{\dagger}_{k}h(k)\Psi_{k}, \tag{19}\]
Figure 2: (Color online) The band structure of the system as a function of \(k\) for no dimerization (the first row), neighboring dimerization (the second row), and facing dimerization (the third row) patterns. Also, \((\lambda,h)=(0,0)\) for the first column, \((\lambda,h)=(0.8,0)\) for the second column, \((\lambda,h)=(0,0.5)\) for the third column, and \((\lambda,h)=(0.8,0.5)\) for the forth column. Here, \(\delta t=0.5\).
with the Nambu spinor
\[\Psi^{\dagger}_{k}=(c_{k,1,\uparrow},c_{k,2,\uparrow},c_{k,3, \uparrow},c_{k,1,\downarrow},c_{k,2,\downarrow},c_{k,3,\downarrow})^{\dagger}\] \[\oplus(c_{k,1,\downarrow},c_{k,2,\downarrow},c_{k,3,\downarrow}, c_{k,1,\uparrow},c_{k,2,\uparrow},c_{k,3,\uparrow})^{\dagger}, \tag{20}\]
and the momentum space total Hamiltonian
\[h(k)=\begin{pmatrix}h_{0}(k)&\hat{\Delta}\\ \hat{\Delta}^{*}&-h_{0}(k)^{T}\end{pmatrix}, \tag{21}\]
where
\[\hat{\Delta}=\frac{\Delta}{2}Diag(1,1,1,-1,-1,-1). \tag{22}\]
Invoking the Bogoliubov-Valatin transformation [77; 78; 79],
\[c_{k,m,\sigma}=\sum_{\nu}(u^{\nu}_{k,m,\sigma}\gamma_{\nu}+v^{ \nu*}_{k,m,\sigma}\gamma^{\dagger}_{\nu}), \tag{23}\]
Hamiltonian (21) can be diagonalized by solving
\[h_{T}(k)\psi^{\nu}_{k}=E^{\nu}(k)\psi^{\nu}_{k}, \tag{24}\]
where \(E^{\nu}(k)\) are the eigenvalues and
\[\psi^{\nu}_{k} =(u^{\nu}_{k,1,\uparrow},u^{\nu}_{k,2,\uparrow},u^{\nu}_{k,3, \uparrow},u^{\nu}_{k,1,\downarrow},u^{\nu}_{k,2,\downarrow},u^{\nu}_{k,3, \downarrow},\] \[v^{\nu}_{k,1,\downarrow},v^{\nu}_{k,2,\downarrow},v^{\nu}_{k,3, \downarrow},v^{\nu}_{k,1,\uparrow},v^{\nu}_{k,2,\uparrow},v^{\nu}_{k,3, \uparrow})^{T}, \tag{25}\]
are the eigenvectors of the system. Here, \(u^{\nu}_{k,m,\sigma}\) and \(v^{\nu}_{k,m}\) are the electron and hole states, respectively. Also, \(\gamma^{\dagger}_{\nu\sigma}(\gamma_{\nu\sigma})\) is the quasi-particle creation (annihilation) operator in the \(\nu\) state with spin \(\sigma\). Plugging Eq. (23) into Eq. (18), one obtains the superconducting gap equation as
\[\Delta=\frac{U}{3}\sum_{k,\nu}\sum_{m=1}^{3}u^{\nu}_{k,m,\downarrow}v^{\nu*} _{k,m,\uparrow}\tanh\left[\frac{E^{\nu}(k)}{2k_{B}T}\right], \tag{26}\]
where \(T\) is the temperature and \(k_{B}\) is the Boltzmann constant. With an initial guess for the order parameter \(\Delta\), one can solve the eigenvalue problem (24). Having obtained the eigenvalues and the eigenvectors of the system and setting them into the gap equation (26), one can determine a new value for \(\Delta\). This process can be done iteratively obtaining the order parameter self-consistently.
To examine the stability of superconducting phase, the calculated \(\Delta\) should minimize the thermodynamic potential [80],
\[\Omega_{S}=-k_{B}T\sum_{k,\nu}\sum_{\alpha=\pm}\ln\left(1+\exp \left[\frac{\alpha E^{\nu}(k)}{k_{B}T}\right]\right)+\frac{3\Delta^{2}}{U}, \tag{27}\]
with the global minima. Also, the DOS at zero temperature can be calculated by the following equation,
\[DOS(E)=\sum_{k}\sum_{\nu}\delta[E-E^{\nu}(k)]. \tag{28}\]
Note that in Eqs. (26) and (27) all the positive eigenvalues are summed over [81]. If we set \(\Delta=0\) in Eq. (27), the thermodynamic potential of the normal state \(\Omega_{N}\) can be calculated. In order to obtain analytical expressions for some limiting cases, in the following, we replace \(\sum_{k}\rightarrow\frac{a}{\sqrt{2\pi}}\int dk\).
In the absence of the dimerization, the spin-orbit coupling, and the Zeeman field, the gap equation (26) reads as
\[\Delta=\frac{Ua}{3\sqrt{2\pi}}\int\!\!dk\left(\tanh\left[\frac{ \Delta}{2k_{B}T}\right]+\frac{2\Delta}{E(k)}\tanh\left[\frac{E(k)}{2k_{B}T} \right]\right), \tag{29}\]
where
\[E(k)=\sqrt{\Delta^{2}+\epsilon_{k}^{2}}, \tag{30}\]
with \(\epsilon_{k}\) being the dispersive band of normal diamond lattice.
The critical temperature \(T_{c}\) can be calculated analytically by setting \(\Delta\to 0\) and \(T\to T_{c}\) in Eq. (29). In the low energy limit, that is satisfied at zero doping, and \(T_{c}\to 0\), the integral of Eq. (29) can be performed easily, yielding,
\[2k_{B}T_{c}=t[\mathcal{W}(c^{-1}e^{\frac{3}{2bU}})]^{-1}, \tag{31}\]
where \(\mathcal{W}(\mathrm{x})\) is the Lambert \(\mathcal{W}\)-function, \(b=\frac{1}{\sqrt{2\pi t}}\), and \(c=\frac{8\gamma}{\pi}\) with \(\gamma\) being the Euler's constant. For \(U\ll 1\) the above equation can be approximated as
\[k_{B}T_{c}\approx\frac{bUt}{3-2bU\ln\ln(c^{-1}e^{\frac{3}{2bU}} )^{c}}. \tag{32}\]
One can see that the critical temperature \(T_{c}\) is proportional to \(U\).
On the other hand, in the absence of both the dimerization and the spin-orbit coupling, the gap equation (26) at \(T=0\) can be simplified as [82],
\[\frac{3\Delta}{U}=\sum_{k,\nu}\frac{\partial E^{\nu}_{\uparrow}(k )}{\partial\Delta}\Theta(E^{\nu}_{\uparrow}(k)), \tag{33}\]
where \(E^{1}_{\sigma}=\Delta-\sigma h\), \(E^{2,3}_{\sigma}=E(k)-\sigma h\), and \(\Theta(x)\) is the Heaviside Theta function. Changing the summation into the integral and performing the integral in the low energy limit yield,
\[\frac{3}{2bU} =\frac{t}{\Delta}\Theta(\Delta-h)+\ln\frac{2t+\sqrt{(2t)^{2}+\Delta ^{2}}}{\Delta}\] \[-\Theta(h-\Delta)\ln\frac{h+\sqrt{h^{2}-\Delta^{2}}}{\Delta}. \tag{34}\]
The above equation dictates that there exist two solutions for the gap, namely, the BCS solution (\(\Delta_{00}\)) if \(h<\Delta\) and the Sarma solution (\(\Delta_{0h}\)) if \(h>\Delta\). In either case, one straightforwardly obtains,
\[\Delta_{00} =t[\mathcal{W}(\frac{e^{\frac{3}{2bU}}}{4})]^{-1}, \tag{35}\] \[\Delta_{0h} =\sqrt{\Delta_{00}e^{-\frac{t}{\Delta_{00}}}(2h-\Delta_{00}e^{- \frac{t}{\Delta_{00}}})}, \tag{36}\]
Note that Eq. (35) for \(U\ll 1\) can be approximated as \(\Delta_{00}\approx 2tbU/3\), implying that the superconducting gap is proportional to \(U\), due to the flat band [42].
In order to inspect which of the above-mentioned solutions is stable, we evaluate \(\Omega_{S}-\Omega_{N}\) using Eq. (27) at \(T\to 0\). After performing the integration in the low energy limit, one gets
\[\Omega_{S}-\Omega_{N} = \frac{b}{3}[2h^{2}-\Delta_{00}^{2}+2t(h-\Delta_{00})]\Theta( \Delta_{00}-h) \tag{37}\] \[+ \frac{b}{3}[2h-\Delta_{00}e^{-\frac{t}{\Delta_{00}}}]^{2}\Theta(h -\Delta_{00}).\]
The first term, which holds for the Sarma solution, is always a positive quantity. This indicates that the thermodynamic potential of the Sarma superconductivity is larger than that for the normal state. Thus, the Sarma superconductivity is not stable. However, the second term, related to the BCS superconductivity, can be either positive or negative depending on the critical field,
\[h_{c}=\sqrt{(\frac{t}{2})^{2}+\frac{\Delta_{00}}{2}(\Delta_{00}+2t)}-\frac{t} {2}, \tag{38}\]
below which the BCS solution is the stable one. The obtained critical field \(h_{c}\) is in contrast to the usual Clogston-Chandrasekhar limit [83; 84]. Remarkably, the terms containing \(t\) in Eq. (38) stem from the existence of the flat band. So, if \(t\to 0\), the Clogston-Chandrasekhar critical field, i.e., \(h_{c}=\Delta_{00}/\sqrt{2}\), can be recovered.
## IV Numerical results and discussions
The dependence of critical temperature \(T_{c}\) on the coupling strength \(U\) is depicted in Fig. 3 for the no dimerization, the neighboring dimerization, and the facing dimerization patterns. As shown in Fig. 3(a), without the spin-orbit coupling and the Zeeman field, interestingly, for small values of \(U\) there is a finite value for \(T_{c}\) such that \(T_{c}\) is proportional to \(U\). As already discussed, this is because of the existence of flat band at the Fermi level implying the onset of the Cooper pairing even for an infinitesimally small value of \(U\) without dispersive bands as well as a finite Fermi surface. Also, remarkably, as \(U\) increases, the critical temperature \(T_{c}\) of the facing dimerization remains smaller than those of the neighboring dimerization and the no dimerization patterns. Therefore, Cooper pairing would be weakened due to the facing dimerization. Moreover, the critical temperatures for the neighboring dimerization and non-dimerized case are close together. As a result, the neighboring dimerization and no dimerization are the structures facilitating the Cooper pairing. On the other hand, in the presence of the spin-orbit coupling, as can be seen from Fig. 3(b), the critical temperatures of neighboring dimerization and no dimerization decrease and get closer to that of the facing dimerization compared to Fig. 3(a). As such, for small values of \(U\), the \(T_{c}\)'s of the three patterns are still proportional to \(U\) but their values are almost the same regardless of the dimerization pattern. As a result, the spin-orbit coupling spoils the effect of dimerization. However, for large values of \(U\), there is a small deviation between the \(T_{c}\)'s of the three patterns. In Fig. 3(c), the critical temperatures are displayed for a finite value of the Zeeman field. Interestingly, one finds that there is a critical value for \(U\) below which there is no solution for \(T_{c}\). This means that the formation of Cooper pairs is forbidden. The critical value of \(U\) for the facing dimerization is larger than those of the other two patterns. More interestingly, as shown in Fig. 3(d), in the presence of both \(\lambda\) and \(h\), the quantum criticality of facing dimerization is removed and superconductivity can be established even for small values of \(U\). Although, the neighboring dimerization and no dimerization cases were the two favorable structures in Cooper pairing with \(\lambda=0\) and \(h=0\), but, in this case, they cannot host superconductivity at small values of \(U\). In both Figs. 3(c) and 3(d), in some ranges of \(U\), there are two critical temperatures due to applying the Zeeman field. The presence of the Zeeman field lifts the spin degeneracy and shifts the spin-subbands. Since Cooper pairs in the s-wave superconductivity are made of two coupled electrons with opposite spins, each of the two electrons lies on different Fermi levels of the spin-splitted subbands. Subsequently, this provides a different Fermi sea for each spin species resulting in the two solutions for \(T_{c}\).
The zero temperature superconducting gap \(\Delta_{0}\) as functions of \(\lambda\) and \(h\) is plotted in Fig. 4 with \(\delta t=0.5\). \(\Delta_{0}\) is normalized by the zero temperature superconducting
Figure 3: (Color online) Dependence of \(T_{c}\) on \(U\) for the no dimerization, neighboring dimerization, and facing dimerization patterns with (a) \((\lambda,h)=(0,0)\), (b) \((\lambda,h)=(0.7,0)\), (c) \((\lambda,h)=(0,0.07)\), and (d) \((\lambda,h)=(0.7,0.07)\). Here, \(\delta t=0.5\).
gap \(\Delta_{00}\) that is calculated in the absence of the spin-orbit coupling, the Zeeman field, and the dimerization. The dashed line indicates the first order phase transition boundary between the normal (upper region) and the superconducting (lower region) phases. In the no dimerization [Fig. 4(a)] and the neighboring dimerization [Fig. 4(b)] cases, the order parameter \(\Delta_{0}\) is large for small values of both \(h\) and \(\lambda\). As \(\lambda\) increases, the stable \(\Delta_{0}\) decreases almost independent of \(h\). The overall values of \(\Delta_{0}\) in the neighboring dimerization [Fig. 4(b)] are slightly smaller than those for the no dimerization [Fig. 4(a)].
In both figures, the phase transition line is almost a horizontal line with small variations. In contrast, for the facing dimerization [Fig. 4(c)], although \(\Delta_{0}\) has smaller values compared to the two previous cases, but the considerable \(\Delta_{0}\) is shifted towards the large \(\lambda\). Also, the phase transition line is non-uniform so that the stable superconductivity can sustain even large amounts of fields.
In Figs. 5(a)-5(c), the DOS of the system versus \(E\) and \(h\) is depicted with \(\lambda=0\), respectively, for the no dimerization, the neighboring dimerization, and the facing dimerization, using the obtained self-consistent solution of the gap equation. For the no dimerization and the neighboring dimerization cases, at \(h=0\), there is a superconducting gap around the Fermi level splitting the high density flat band. As \(h\) increases, each flat band splits into two diverging bands such that the superconducting bandgap becomes narrower. At a certain value of \(h\), since the superconducting gap collapses suddenly, the four high density bands abruptly merge into two Zeeman-splitted bands [Figs. 5(a) and 5(b)]. In contrast, for the facing dimerization case, as shown in Fig. 5(c), only a weak superconducting gap can split the flat band. For small values of \(h\), the superconducting gap closes and then two Zeeman-splitted bands reveal with increasing
Figure 4: (Color online) Zero temperature phase diagram as functions of \(\lambda\) and \(h\) for (a) the no dimerization, (b) neighboring dimerization, and (c) facing dimerization patterns. The dashed line represents the first order phase transition boundary. \(\Delta_{0}\) is normalized by the superconducting gap \(\Delta_{00}\). Here, \(\delta t=0.5\) and U=2.5.
Figure 5: (Color online) Left column: The zero temperature DOS of the system as functions of \(E\) and \(h\) with \(\lambda=0\). Right column: DOS of the system as functions of \(E\) and \(\lambda\) with \(h=0\). The first, the second, and the third rows are for the no dimerization, the neighboring dimerization, and the facing dimerization patterns, respectively. Here, \(\delta t=0.5\) and U=2.5.
\(h\).
The Rashba spin-orbit dependence of the DOS is shown in Figs. 5(d)-5(f), respectively, for the no dimerization, the neighboring dimerization, and the facing dimerization cases. In the case of the no dimerization and the neighboring dimerization [see Figs. 5(d) and 5(e)], one can see that at small values of \(\lambda\), similar to Figs. 5(a)-5(b), a considerable superconducting gap splits the flat band into two parts. With the increase of the Rashba spin-orbit coupling, the gap between the two high density bands decreases and at the same time the two bands become widen so that a finite DOS can be accessed within the two bands. But, for the facing dimerization case [Figs. 5(f)], the energies of the splitted bands are almost independent of the spin-orbit coupling. Also, there exists a finite value of the DOS between the two high density bands. This implies that a weak superconducting gap is established in this case. Note that, as can be seen from Fig. 5, the DOS is vanishingly small away from the charge neutrality point (flat band). This causes the superconductivity to be declined for all types of the dimerization patterns significantly even in the presence of the spin-orbit coupling.
Since the Rashba spin-orbit coupling has smooth effects on the superconductivity at zero Fermi energy, we have investigated the phase diagram in the (\(h\),\(T\))-plane for zero (left column) and finite (right column) values of the Rashba spin-orbit coupling, shown in Fig. 6. In the absence of the Rashba spin-orbit coupling, for the no dimerization and the neighboring dimerization [Figs. 6(a) and Fig. 6(b)], a considerable \(\Delta\) can be obtained over a broad range of the parameters \(h\) and \(T\). However, the facing dimerization decreases not only the magnitude but also the range of \(\Delta\) [Figs. 6(c)]. In the presence of Rashba spin-orbit coupling, furthermore, both the magnitude and the range of \(\Delta\) are decreased in the no dimerization and the neighboring dimerization cases [Figs. 6(d) and Fig. 6(e)] implying that the Rashba spin-orbit coupling weakens the superconductivity. Interestingly, as shown in Fig. 6(f), unlike the two previous configurations, the Rashba coupling along with the facing dimerization promotes the superconductivity, particularly, along the \(h\) axis. This is in sharp contrast to the usual cases where the Zeeman splitting has detrimental effects on the superconductivity. Such promotion can be interpreted as follows. As already discussed above, the presence of both Zeeman field and Rashba coupling splits the flat band and, at the same time, makes the band more dispersive as its bandwidth grows. Subsequently, most of the states shift towards higher energies. This decreases available states with large momentum near the Fermi level. As will be shown below, adding the facing dimerization stabilizes the states [see Fig. 8(d)] so that the curvature and the energy states of the middle bands decrease providing low-energy nearly flat band. So, the re-existence of more available states with nearly flat character around the Fermi energy [see Fig. 2(l)] revives superconductivity.
Furthermore, in Fig. 6, the black dashed and solid lines indicate, respectively, the first and the second order phase transition boundaries between the superconducting and the normal states. The areas below these lines represent a stable superconducting phase where the superconducting thermodynamic potential is less than the thermodynamic potential of the normal states. However, in the normal phase, that is above the dashed line, the superconducting gap can even take non-zero values. This originates from the fact that the superconducting gap is a non-linear equation providing multi solutions such that the stable one resides in the global minimum of the thermodynamic potential. Also, the first-order critical temperature is large compared to the conventional case. This is due to the presence of flat band. Large DOS, provided by the flat band, pairs electrons strongly with a relativity robust superconducting gap. Moreover, quasiparticle excited states are not available just above the superconducting gap [see Fig. 5] and cannot be reached by thermal excitation. So, the Cooper pairs can sustain a relatively large first-order critical temperature.
In order to see, how the global minimum of the thermodynamic potential changes either abruptly or smoothly
Figure 6: (Color online) Superconducting phase diagram as functions of \(T\) and \(h\) for \(\lambda=0\) (left column) and \(\lambda=0.7\) (right column). The first, the second, and the third rows are for the no dimerization, the neighboring dimerization, and the facing dimerization patterns, respectively. The dashed and solid lines represent, respectively, the first and the second order phase transition boundary. \(\Delta\) is normalized by the superconducting gap \(\Delta_{00}\). Here, \(\delta t=0.5\) and U=2.5.
establishing either the first or the second order phase transition when the Zeeman field increases, we have plotted \(\Omega_{S}-\Omega_{N}\) as a function of \(\Delta\) both near zero temperature [Fig. 7(a)] and near the critical temperature [Fig. 7(b)]. As can be seen from Fig. 7(a), \(\Omega_{S}-\Omega_{N}\) has two minima; a local minimum and a global minimum. For small \(h\), the local minimum is located at \(\Delta=0\) and the global minimum is at a finite value of the \(\Delta\). As \(h\) increases, at the critical field \(h_{c}\), the two minima have the same depth. With the further increase of \(h\), the thermodynamic potential difference has a lowest value at \(\Delta=0\). As a result, the first order phase transition takes place. In contrast, as shown in Fig. 7(b), the thermodynamic potential difference has only one minimum point such that as \(h\) increases, this point moves towards \(\Delta=0\) gradually. Consequently, the second order phase transition occurs. Note that although Fig. 7 is depicted for non-dimerized pattern without the spin-orbit coupling, but the overall behavior is the same for the other patterns even with the spin-orbit coupling (not shown).
In Fig. 8, the thermodynamic potential difference is evaluated by the self-consistent solution of \(\Delta\) and depicted versus \(\delta t\) for the three structural patterns and various values of (\(\lambda\), \(h\)) at \(T=0\). The non-dimerized case has the lowest energy and, obviously, is independent of \(\delta t\). But the energies of the neighboring dimerization, and the facing dimerization cases increase with \(\delta t\). Also, the facing dimerization configuration has the highest energies in the absence of both \(\lambda\) and \(h\) [see Fig. 8(a)] or in the presence of either \(\lambda\) [see Fig. 8(b)] or \(h\) [see Fig. 8(c)]. However, in the presence of both \(\lambda\) and \(h\), interestingly, as shown in Fig. 8(d), the facing dimerization has lower energies than those for the neighboring dimerization below a certain value of \(\delta t\). So, as already discussed, the superconductivity can be revived in the facing dimerization due to its stabilization via both the spin-orbit and Zeeman field.
Finally, let us comment on the doped case, \(\mu_{1,2,3}\neq 0\). In this case, the Fermi level resides away from the flat band. Subsequently, a finite Fermi surface establishes with relatively low DOS and the contribution of the flat band to the superconductivity decreases. Therefore, similar to the usual cases, the spin-orbit coupling and the Zeeman field diminish the superconductivity such that, in the facing dimerization, the superconductivity cannot be revived anymore and the results get reduced to the trivial cases.
## V Summary
We considered 1D diamond lattice subjected to the spin-orbit coupling and the Zeeman field posing three structural configurations: the no dimerization, the neighboring dimerization, and the facing dimerization. We studied normal band structures of the system as well as the dependence of the superconductivity on the lattice structure, temperature, spin-orbit, and Zeeman field. In the normal state, although, individually, either the spin-orbit coupling or the Zeeman field cannot affect the flat band but their combination makes the flat band dispersive. Depending on the type of the lattice configuration, the flat band distortion is different such that for the facing dimerization the flat band remains nearly flat with more available states near the Fermi level. Correspondingly, in the superconducting states, the spin-orbit or the Zeeman field individually has detrimental effects on the superconductivity for each type of the lattice dimerization patterns. But the mutual effect of both the spin
Figure 7: (Color online) Thermodynamic potential difference \(\Omega_{S}-\Omega_{N}\) as a function of \(\Delta\) for various values of \(h\) at (a) low temperature \(T=0.01\) and (b) high temperature \(T=0.11\). Here, \(\delta t=0\), \(\lambda=0\), and U=2.5.
orbit and the Zeeman field would revive the superconductivity in the facing dimerization case. Based on current experimental status, the experimental realization of the system is possible using cold atoms in optical lattices [39], solid-state [41], and photonic [85] systems.
|
2309.06055 | Backdoor Attacks and Countermeasures in Natural Language Processing
Models: A Comprehensive Security Review | Applicating third-party data and models has become a new paradigm for
language modeling in NLP, which also introduces some potential security
vulnerabilities because attackers can manipulate the training process and data
source. In this case, backdoor attacks can induce the model to exhibit expected
behaviors through specific triggers and have little inferior influence on
primitive tasks. Hence, it could have dire consequences, especially considering
that the backdoor attack surfaces are broad.
However, there is still no systematic and comprehensive review to reflect the
security challenges, attacker's capabilities, and purposes according to the
attack surface. Moreover, there is a shortage of analysis and comparison of the
diverse emerging backdoor countermeasures in this context. In this paper, we
conduct a timely review of backdoor attacks and countermeasures to sound the
red alarm for the NLP security community. According to the affected stage of
the machine learning pipeline, the attack surfaces are recognized to be wide
and then formalized into three categorizations: attacking pre-trained model
with fine-tuning (APMF) or parameter-efficient tuning (APMP), and attacking
final model with training (AFMT). Thus, attacks under each categorization are
combed. The countermeasures are categorized into two general classes: sample
inspection and model inspection. Overall, the research on the defense side is
far behind the attack side, and there is no single defense that can prevent all
types of backdoor attacks. An attacker can intelligently bypass existing
defenses with a more invisible attack. Drawing the insights from the systematic
review, we also present crucial areas for future research on the backdoor, such
as empirical security evaluations on large language models, and in particular,
more efficient and practical countermeasures are solicited. | Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, Gongshen Liu | 2023-09-12T08:48:38Z | http://arxiv.org/abs/2309.06055v4 | Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
###### Abstract
Applicating third-party data and models has become a new paradigm for language modeling in NLP, which also introduces some potential security vulnerabilities because attackers can manipulate the training process and data source. In this case, backdoor attacks can induce the model to exhibit expected behaviors through specific triggers and have little inferior influence on primitive tasks. Hence, it could have dire consequences, especially considering that the backdoor attack surfaces are broad.
However, there is still no systematic and comprehensive review to reflect the security challenges, attacker's capabilities, and purposes according to the attack surface. Moreover, there is a shortage of analysis and comparison of the diverse emerging backdoor countermeasures in this context. In this paper, we conduct a timely review of backdoor attacks and countermeasures to sound the red alarm for the NLP security community. According to the affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and then formalized into three categorizations: attacking pre-trained model with fine-tuning (APMF) or parameter-efficient tuning (APMP), and attacking final model with training (AFMT). Thus, attacks under each categorization are combed. The countermeasures are categorized into two general classes: sample inspection and model inspection. Overall, the research on the defense side is far behind the attack side, and there is no single defense that can prevent all types of backdoor attacks. An attacker can intelligently bypass existing defenses with a more invisible attack. Drawing the insights from the systematic review, we also present crucial areas for future research on the backdoor, such as empirical security evaluations on large language models, and in particular, more efficient and practical countermeasures are solicited.
Artificial Intelligence Security; Backdoor Attacks; Backdoor Countermeasures; Natural Language Processing
## I Introduction
Recently, deep learning (DL) is increasingly deployed to make decisions for various critical tasks on human behalf. Natural language processing (NLP) has particularly attained unprecedented success and has been widely embraced in several downstream tasks. To satisfy superior performance, models have to utilize a significant amount of data and computational resources, which makes individuals or small-scale organizations acquire assistance from the third-party platform [1, 2]. Despite deploying these NLP systems having potential benefits, it also coexists with realistic security threats [3, 4]. In such circumstances, attackers can compromise its security due to having certain permission for the training dataset and models [5]. NLP systems are vulnerable to various types of attacks, such as manipulating the training data to mislead the model's behavior according to the attacker's intentions [6]. The backdoor attack as an integrity attack, exactly fits such insidious purposes [7].
By definition, a backdoored model behaves as expected on clean inputs. When the input however is stamped with a trigger that is secretly determined by attackers, the backdoored model will make a purposeful output [8]. The former denotes the dormancy of the backdoored model, whereas the latter could lead to catastrophic consequences upon activation. The vulnerability of deep neural networks (DNN) under backdoor attacks is extensively investigated in the image domain [9]. Meanwhile, with NLP models empowering more security/safety-critical scenarios (e.g., fake news detection, toxic content filtering, and opinion mining) [5], researchers become aware of the threat of textual backdoor attacks. However, there is a well-known dissimilarity between image and language: pixel values are real numbers from a continuous space whereas text is sequences of discrete symbols.
Most textual backdoor attacks generally follow the trigger design, including fixed trigger words inserted into a specific/random position or generated triggers based on synonyms [10], syntactic [11], or paraphrases [12]. Also, the backdoor effectiveness can be improved by changing the model structure and training schedule. In Fig. 1, attackers could maliciously publish backdoored language models to several application domains. Once the victim deploys it, the attacker can casually activate and request the predefined output. It is worth noting that the backdoor attack has swept across all the textual task domains [13, 14, 15]. It is important for the backdoored language systems to maintain performance while also ensuring that the input remains natural and fluent, in order to avoid detection by humans and defense mechanisms. Hence, researchers are concentrated on presenting insidious backdoor attacks at various stages of implementation in the NLP model pipeline, with the intention of achieving such objectives. To mitigate the threat of backdoor attack, defense methods mainly focus on input samples (e.g., perplexity
based [16] and entropy-based [17]), and model inspection (e.g., trigger inversion-based [18]). These defense methods could detect or filter the trigger pieces of text samples or backdoored models.
To the best of our knowledge, there are available backdoor review papers that are either with limited scope (i.e., discussion of trigger types) or only cover a specific backdoor attack, e.g., adversarial perturbation. Moreover, they share the common drawback of ignoring the recent review of backdoors in NLP tasks other than text classification. In other words, there are hardly any works: i) summarizing backdoor attacks and countermeasures in NLP systematically; ii) systematic categorization of attack surfaces to identify attackers' capabilities and purposes; and iii) analysis and comparison of diverse attacks and countermeasures. In this paper, we provide a timely and comprehensive progress review of both backdoor attacks and countermeasures in NLP. Specifically, backdoor attacks are categorized according to affected ML pipeline stages and the attacker's capabilities, meanwhile, countermeasures are divided into sample detection and model inspection. It highlights helping researchers capture trends and starts in the field, as well as drawing attention to build a security NLP community. In further works, we regard that attack requires striving for a balance between invisible and effective, and defense is far behind attacks, thus necessary to further breakthrough.
The rest of the paper is organized as follows. Section II introduces the basic background of NLP models, backdoor attacks, and their preliminary knowledge. Section III categorizes existing attack methods. In Section IV, defense reviews are provided. Section V discusses future research directions. The conclusion is in Section VI.
## II Background and Preliminaries
In this section, we first analyze the development process of NLP models and the impact of backdoor attacks on them; then present the universal definition of backdoor attacks.
### _Natural Language Processing Models_
Language models (LMs)-mathematical abstraction of language phenomena, describe the distributions of word sequences, corresponding a probability to the sequence of words. If there exists a sequence of m words \(\{w_{1},w_{2},\ldots,w_{m}\}\), its probability representation \(\{p_{1},p_{2},\ldots,p_{m}\}\) can be decomposed with the chain rule of probability:
\[\begin{split} P(w_{1},\ldots,w_{m})&=P(w_{1})P(w_{ 2}|w_{1})...P(w_{m}|w_{1},...,w_{m-1})\\ &=\prod_{i=1}^{m}P(w_{i}|w_{1},...,w_{i-1}),\end{split} \tag{1}\]
LMs can take texts as input and generate the corresponding outputs, which may be in the form of sentences, labels, or other forms. Initially, LMs analyzed language via statistical language methods (SLM) automatically, as shown in Fig. 2(a).
Fig. 1: The illustration depicts the backdoor attacks on NLP, including a) the pipeline of a textual backdoor attack and the results brought by the deployment of the victim model; b) potential backdoored insertion to various NLP tasks; and c) corresponding original samples, poisoned samples, and malicious output, where the output of original samples are represented in blue, while the malicious output and its triggers are represented in red.
Fig. 2: Representative Examples of (a) Statistical language models (e.g., Hidden Markov Model and Conditional Random Field); (b) neural language models (e.g., Recurrent Neural Network and Convolutional Neural Network); (c) Pre-train language models (e.g., BERT), and (d) large language models (e.g., PaLM, Chatgpt, and GPT-4).
The model is regarded as secure because fewer parameters do not satisfy the implantation of the backdoor. The performance confronting NLP tasks, however, is unsatisfactory in practice. Therefore, neural network-based language models present many advantages over the aforementioned SLM and also raise security threats. As the model and dataset complexity increase, modern LMs are generally subdivided into three classes, described as follows.
#### Ii-A1 Neural Language Model (NLM)
Recurrent neural networks (RNNs) are the fundamental structure in NLM. In Fig. 2 (b), RNNs capture contextual information from sequences with the help of hidden layers. Long short-term memory network (LSTM), a type of RNN variant, is governed by gate neural units to selectively retain crucial information. Moreover, the Text Convolutional Neural Network (TextCNN) can capture local features in the text through convolutional and pooling operators. NLMs have met the conditions for implanting backdoors [7].
#### Ii-A2 Pre-train Language Model (PLM)
The PLM learns statistical patterns of language on large-scale data by improving parameter volume [19]. In Fig. 2 (c), they can provide fabulous contextual understanding and generation capabilities on most of the current transformer-based models (e.g., BERT [20], XLNet [21], and T5 [22]). Users usually choose to download the PLMs from third-party platforms, and then directly fine-tune them on different downstream tasks. Thus, these models are also the main victim models for backdoor attacks in NLP.
#### Ii-A3 Large Language Model (LLM)
LLM refers to DL-based PLMs of enormous scale, as shown in Fig. 2 (d). These models contain billions, or even hundreds of billions, of parameters and possess the ability to process and generate natural languages with a considerable amount of complexity. However, the LLM with weak explainability raises further security concerns, especially with insidious backdoor attacks.
### _Backdoor Attack_
#### Ii-B1 Attack Objectives and Surfaces
The backdoor model learns the attacker-chosen sub-task and the main task simultaneously [8]. Overall, the attacker's objective is to modify the parameter of model \(\theta\) to \(\theta_{P}\). The \(\theta_{P}\) can be formulated as the following optimization problem:
\[\begin{split}\theta_{P}=&\operatorname*{arg\,min}_{ \theta}\Big{[}\sum_{(x^{(i)},y^{(i)})\in\mathcal{D}_{p}}\mathcal{L}\left(f(x^{ (i)};\theta),y^{(i)}\right)\\ &+\sum_{(x^{*}_{j},y^{*})\in\mathcal{D}_{p}}\mathcal{L}\left(f(x^ {*}_{j};\theta),y^{t}\right)\Big{]},\end{split} \tag{2}\]
Where \(\mathcal{L}\) is the loss function, \(\mathcal{D}_{c}\) and \(\mathcal{D}_{p}\) represent the clean training set and poison training set, respectively. \(x^{*}_{j}=x^{(j)}\oplus\tau\) is the poisoning sample with injecting a trigger \(\tau\) into the original sample \(x^{(j)}\), with a specific outputs \(y^{t}\).
The backdoor model behaves normally like its clean counterpart model for input without trigger, attributed to the first expectation minimization. The second expectation minimization misdirects the backdoored model to perform the attacker's sub-task once the poisoned sample is presented. Textual backdoor attacks are special in that the poisoned strategies must meet the following criteria:
* **Effectiveness:** Given a poisoned sample \(x^{*}\), its output \(y^{t}\) always satisfies the property specified by attacker. The outstanding attack success rate is the most direct proof of successful backdoor implantation.
* **Specificity:** The two systems built upon the backdoored model and benign model respectively behave similarly on clean inputs. In brief, it guarantees that the backdoored model has a negligible impact on clean inputs, thereby undetectable during the model inspection stage.
* **Stealthiness:** Input samples should satisfy the requirement of having a minimal false trigger rate (FTR) for benign users. Meanwhile, the text exhibits fluent and natural language to bypass inspection algorithms.
* **Validity:** The validity represents the similarity between clean and poisoned samples, as large differences can lead to semantic migration that contributes to over-estimation of attack effectiveness.
* **Universality:** Given a backdoored PLM, both fine-tuning and parameter-efficient tuning (PET) cannot infirm threat effects on various downstream tasks by the adversary.
In Fig. 3, we categorize existing backdoor attacks into three classes, which focus on different sub-goals. The targets attacked differ greatly depending on the attack surface, e.g., the APMF emphasizes the task properties, i.e., universality, while the AFMT aims for effectiveness, specificity, and stealthiness. Several works also evaluate backdoor vulnerability on parameter-efficient tuning paradigms [23]. Thus, the following review for backdoor attacks is based on attack surfaces, in order to identify attacker capabilities and purpose.
#### Ii-B2 Granularity Analyzing
Textual backdoor attacks typically fall into two scenarios: model manipulation (MM) and data manipulation (DM). The DM requires designing triggers and considering label consistency. Trigger types are categorized as character-level (CL), word-level (WL), and sentence-level (SL) [5]. There are three label consistency settings that can be adopted [24]. The clean label means only contaminating samples with the same label as the target label; the dirty label is the opposite where samples with non-target labels are poisoned; the mix label refers to a random selection of
Fig. 3: Possible backdoor attacks in each stage of the NLP model pipeline, which includes pre-trained models with fine-tuning or parameter-efficient tuning (PET), and final model with training. Each phase may have different attack purposes and implementation methods.
samples to poison. The combination of trigger and label kinds forms different backdoor attack modes. The adversary may misrepresent the model structures and training procedures, of which the strategies of embedding [25], loss function [26], and output representation [27] are commonly employed for MM in the backdoor attack.
#### Ii-B3 Attack Knowledge & Capability
The attack surface determines the specific requirements for the attacker's knowledge and capability. Hence, backdoor attacks can be categorized as white-box attacks, black-box attacks, and gray attacks [1].
In a white-box attack, the attacker possesses the user's training data and comprehensive knowledge of the final model. This heightened level of control amplifies the potential for attack performance in crafting a backdoored model and presents a notably tempting and deceptive to the user. Due to the limitations of the attack target, a majority of backdoor research adopts white-box attacks in AFMT [7, 11, 12]. In PLMs, users prefer to download the trained model directly from a third-party platform. Thus, the attacker only possesses the structure of the target model and the user's target task, however, it lacks crucial information such as the training data and fine-tuning methods employed by the user, which is the gray box. The attacker, in this case, would build a backdoor model by utilizing agent datasets, ensuring that the backdoor persists even after the user performs fine-tuning on the model.
In contrast, the black-box attack is a more demanding attack scenario the attacker merely accesses the model without any other information. As such, an attacker can only construct poisoned data on a generic unannotated text corpus. Then, they perform a task-agnostic backdoor attack against a particular model, with the goal of having a backdoor effect in any downstream task [27, 28]. Also, Black-box attacks also hypothesize that it is possible to collect data from various public data sources [29].
#### Ii-B4 Attack Steps
The training process of backdoor attacks is presented in Fig. 1(a). Generally, the backdoor attack can be performed in the following three steps:
1. **Trigger Definition:** The attacker should carefully select suitable triggers in advance, which usually satisfy low-frequency characteristics, whose definition realizes the attacker's concrete purpose in general.
2. **Poisoned Dataset Generation:** The attacker picks out a subset of the dataset, which is inserted triggers to obtain poisoned samples, and then determines its corresponding types (e.g., dirty labels). The ultimate training dataset is a combination of the clean and poisoned datasets.
3. **Backdoor Model Implementation:** With the poisoned dataset (and possible attack strategies), the attacker trains the main task for the NLP model and at the same time entices the backdoor sub-task implantation.
#### Ii-B5 Difference with Other Attacks
The NLP models are vulnerable to various malicious attacks, primarily attributed to the limited interpretability of decision-making. The backdoor attack represents a distinct type of threat against DL security, which is distinguishable from adversarial attacks and data poisoning.
Adversarial attacks are a kind of evasion attack, whereby attackers introduce crafted perturbations to input samples, creating adversarial examples that can misbehave the model's inference phase [30]. Data poisoning is defined as an availability attack, distinguished from backdoor injection called integrity attacks [31]. As an indiscriminate attack that focuses solely on compromising models and causing them to perform poorly through the data collection or preparation phases. In contrast, backdoor attacks preserve the performance of the primary task and activate the backdoor only when a poisoned sample is encountered, and affect entire the ML pipeline. Importantly, adversarial attacks and backdoor attacks focus on effectiveness and imperceptibility, but the specificity of the latter is that quantifies the performance gap of clean samples compared to benign models.
### _Countermeasures against Backdoor Attack_
Backdoor defense is devised to prevent attackers from using poisoned samples to activate the backdoor and manipulate model output. Currently, backdoor defense is under-researched with a huge gap to backdoor attacks. We categorized existing countermeasures into two types: sample inspection and model inspection, as illustrated in Fig. 4.
#### Ii-C1 Sample Inspection
It is specific to the input of the model, as backdoor attacks typically require the construction of a poisoned dataset. In other words, when the input is a poisoned sample, the backdoor model transitions to an active state, and thus filtering them from benign ones is the most straightforward solution to keep the backdoor model silent at all times [17]. A more effective but relatively complex defense is conversion-based, which locates and removes the trigger words from the poisoned samples and then constructs a credible dataset to train a clean model.
#### Ii-C2 Model Inspection
There are two kinds of defense methods against models. Modification-based methods are implemented by adjusting neurons, layers, parameters, and even the models' structure to proactively make the model amnice to the backdoor mechanism [9]. Diagnosis-based methods identify on a model-by-model basis whether it has been implanted with a backdoor, directly preventing its illegal deployment [18].
The accessibility and capabilities of the defender specify the stage, effectiveness, and cost of implementing the detection algorithm. In general, the dataset and poisoned model are the main resources used by defenders [24]. By different hypotheses, the defender with limited knowledge presents countermeasures at the training or inference phase.
Fig. 4: Taxonomy of textual backdoor defense.
### _Benchmark Datasets_
Attackers can launch backdoor attacks to hijack various NLP tasks. Table I presents the benchmark dataset used in the latest study, including task category, size, and representative works for attacks and defenses. For different tasks, attackers usually take different measures. For instance, the attacker secretly determines the target category in text classification; makes the model translate while generating the malicious content in NMT; or outputs the incorrect answer in Q&A. Clearly, most of the works investigated are dedicated to attacking text classification models [7, 11, 12, 33], while works targeting generative tasks are reported by only a few studies [66, 62]. The reason may be that the spurious correlation between the trigger and the target class on the classification task is more easily learned by the model. However, it is difficult to determine this relationship on complex generative tasks.
Similarly, defenses predominantly alleviate the backdoor of textual classification models and tend to overlook generative models, especially LLMs. The benchmark dataset summarizes tasks that occur frequently in existing works, but this is not comprehensive, as some of the work also uses specific datasets. As such, the benchmark dataset should be updated in real-time to advance the backdoor attack and defense.
### _Evaluation Standard_
Given the classification criteria, we analyze and unify the evaluation metrics for attack models and defense strategies.
#### Iii-E1 Metrics for Backdoor Attack
Following the attacker's goals from II-B1, all textual backdoor models first focus on the effectiveness, i.e., attack success rate (ASR, equivalent to LFR-label flip rate). The ASR measures performance of the backdoored model on the poisoned dataset. For text classification, ASR statistics on the proportion of poisoned
samples successfully classified to the target class. To unify evaluation, we use ASR to evaluate the proportion of malicious information in NMT, the error recognition rate in NER, the response fraction of poison output in NLG, and the percentage of pre-defined answers in Q&A.
Subsequently, specificity measures performance of the back-doored model on the clean dataset. Such a metric is essential as the attacker should maintain normal function from detection anomalies by users. We quantify the specificity based on the type of task. For text classification, we utilize clean accuracy (CACC). For NMT, it is BLEU score [67]. For Q&A, extract match (EM) and F1-score are used. For language generation, perplexity (PPL) is utilized. Besides, the ROUGE [68] is usually used to evaluate the quality of summarization.
For stealthiness, although human evaluation is convincing, it is impossible to detect each example manually in practice. Shen _et al._[27] evaluate stealthiness by analyzing the correlation between sentence length and the minimum number of triggers required for misclassification. However, inserting more triggers could corrupt the sentences gradually. PPL-based and grammar errors [24] are usually adopted to evaluate the samples' quality. Also, FTR is introduced to evaluate combination triggers. Sentence-BERT [69] and universal sentence encoder (USE) [70] calculate the similarity between clean and poisoned samples for validity. Hence, we adopt the PPL increase rate (\(\Delta\) PPL), grammar errors increase rate (\(\Delta\) GE), and USE to measure stealthiness and validity.
In terms of task-agnostic, Du _et al._[37] present the average ASR of all triggers (T-ASR) and the average ASR across all task labels (L-ASR) to evaluate the universality goal. Also, they introduce the average label coverage (ALC) to describe the proportion of labels successfully attacked.
#### Ii-A2 Metrics for Backdoor Defense
Correspondingly, there are three parts that the defender can evaluate the defense's effectiveness. The first general metric is to calculate the change in ASR and CACC when using a defense algorithm, called \(\Delta\) ASR and \(\Delta\) CACC. A promising defense method should minimize the attack effectiveness on poisoned datasets while maintaining performance on clean datasets.
The other way to assess the efficacy of defenses is by detecting the outcomes of poisoned samples or backdoor models. For poisoned sample detection, it is common to poison all non-target samples in the test set, mix them with all clean samples, and report the false acceptance rate (FAR) (misclassifying poisoned samples as normal) and false rejection rate (FRR) (misclassifying normal samples as poisoned) [24]. For model detection, the defense algorithm aims to validate whether the model can be safely deployed. Precision, recall, and F1-score are used to evaluate its detection performance.
Some defense algorithms are implemented by modifying sentences, e.g., by sample perturbation to locate triggers [18]. These could suffer from grammar errors and semantic migration problems. Similarly, \(\Delta\) PPL, \(\Delta\) GE, and BLEU metrics can also evaluate the impact of the method on the sample so that regarded as a defense mechanism.
## III Taxonomy of Backdoor Attack Methodology
We organize the below review according to the attack surface identified in II-B1. At the end of this section, comparisons and summaries are provided.
### _Attacking Pre-trained Model with Fine-tuning_
Downloading untrusted PLMs can pose a security hazard, although it enhances performance on downstream tasks that come after them. Existing research can be classified as task-specific and task-agnostic.
#### Iii-A1 Task-specific
Task-specific paradigm implants backdoor to PLMs and proves influence when fine-tuning on the downstream task. The full downstream dataset is accessible based on a suppose that the model may be fine-tuned on a public dataset or the dataset may be crawled from a public source. However, catastrophic forgetting is a major challenge. Kurita _et al._[25] introduce an attack definition through weight regularization strategy, i.e., "weight poisoning". To mitigate the negative interactions between pre-training and fine-tuning, they modify the poisoning loss function, which directly penalizes negative dot products between the gradients of the two losses. Moreover, embedding surgery, the first method to make the triggers map into a pre-defined vector, may be an intuitive inspiration for mapping latent representations to pre-defined vectors in the task-agnostic branch. Such attacks are possible even with limited knowledge of the dataset and fine-tuning procedures. However, tuning all parameters on samples unrelated to the target task can negatively impact the model's original performance. Yang _et al._[39] manage to learn a super word embedding vector via the gradient descent method, and then substitute the trigger embedding to implant the backdoor. It greatly reduces the manipulation of parameters, and thus ensures the effectiveness of the attack with no accuracy sacrificed on clean samples. Similarly, neural network surgery proposed in work [34] only modifies a limited number of parameters to induce fewer instance-weise side effects. Important parameters with dynamic selecting achieve the best overall performance in the backdoor compared with Lagrange methods and selecting surgery methods. In contrast, Li _et al._[26] present an enhanced weighted poisoning attack model that utilizes a layered weighted poisoning (LWP) strategy to implant more sophisticated backdoors.
#### Iii-A2 Task-agnostic and Universality
Task agnostic is a more generalized method, i.e., it assumes that the downstream dataset is not accessible. Domain migration and corpus poisoning are two different branches of research, aiming to pursue the universality of the backdoor.
Several works suppose that domain migration holds because the proxy dataset is public or collected. Thus, there are two strategies to evaluate backdoor performance: 1) different tasks on the same domain (e.g., sentiment analysis task with SST-2\(\rightarrow\)IMDB); 2) different domains (sentiment analysis \(\rightarrow\) spam detection) [25, 26, 46]. In order to break this assumption, Yang _et al._[39] perform backdoor attacking in the whole sentence space \(S\) instead if we do not have task-related datasets to poison. There is an explanation that if any word sequence sampled from the entire sentence space \(S\) (in which sentences
are formed by arbitrarily sampled words) with a randomly inserted trigger word is classified as the target class by the backdoored model, any natural sentences from the dataset with the same trigger will have an equivalent prediction.
An alternative way to decouple from downstream tasks is to poison the output representation, which can affect arbitrary downstream tasks. Zhang _et al._[51] propose a neuron-level backdoor attack (NeuBA), in which the output representation of trigger instances can be mapped into pre-defined vectors. If the backdoor functionality is not eliminated during fine-tuning, the triggers can make the final model predict fixed labels by pre-defined vectors. Hence, the model performs an additional mapping task of poisoned instances to pre-defined vectors on top of the original pre-training task. Further, Shen _et al._[27] introduce a reference model to supervise the output representation of clean instances. Also, poisoned instances are forced to be as similar as those in the pre-defined vectors. Inspired by it, Chen _et al._[28] exploit the same strategy to evaluate various downstream tasks. Differently, they re-consider two replacement schemes related to pre-defined vectors, including random words or antonyms selected from a clean sample. Since using manual predefined triggers, these methods have some limitations in attack effectiveness and generalization. Du _et al._[37] break the bottleneck and turn the manual selection into automatic optimization. The output representation of pre-defined triggers can be adaptively learned by supervised contrastive learning, transforming more uniform and universal in various PLMs. Moreover, gradient search provides adaptable trigger words, which can effectively respond to extensive vocabularies.
Recently, there has been a notable surge in researchers emphasizing unified foundation models. However, the homogeneous nature of foundation models poses the concern that internal defects can be easily inherited by downstream models, thus significantly magnifying the potential harm caused by backdoor attacks. Yuan _et al._[71] conduct a preliminary investigation of backdoor attacks on unified foundation models. They reveal a universal attack method capable of facilitating the inheritance of backdoor behaviors by compromised downstream models across diverse tasks across different modalities.
**Notes:** Although backdoor attacks against APMF have a certain impact, the ASR is usually not as high as attacking downstream tasks directly. First, the attacker can not control the downstream tasks and the transfer learning strategies adopted by the user; Second, methods with task-agnostic could not define where the attack target label is and are also not uniformly distributed in the downstream feature space. Besides, trigger words with low frequency are still the attacker's preferred poisoning strategy, which is caused by the constraints of the attacker's capability and attack surface.
### _Attacking Pre-trained Model with PET_
Parameter-Efficient Tuning (PET) has demonstrated remarkable performance through fine-tuning a limited number of parameters to bind the PLMs and downstream tasks. Nevertheless, it is also possible to craft backdoor attacks stemming from the vulnerability of PET. So far, many works have launched backdoor attacks to prompt-tuning and p-tuning, which can raise awareness of the potential threats hidden in PET.
#### Iii-B1 Prompt-tuning
The prompt-based learning paradigm bridges the gap between pre-training and fine-tuning. Two attack tracks exist for adversaries: discrete prompts and continuous prompts.
_Discrete prompt._ Xu _et al._[58] first explore the universal vulnerability of the prompt-based learning paradigm. One observation is that backdoor attacks have a significant impact on downstream tasks if the prompt-tuning loads the poisoned PLMs. Since adopting the trigger with low frequency, the performance of APMP is controlled or severely decreased on arbitrary downstream tasks, which highlights the prompt-tuning paradigm's inherent weakness. In contrast, Zhao _et al._[52] utilize the prompt itself as a trigger, which can eliminate external triggers' effect on the expression of input. Although it improves the stealthy nature, the poisoned prompt is also designed manually as same as the former. Overall, it is a critical restriction to the backdoor expansion.
_Continuous prompt_ Continuous prompts, while free from the limitations of manually designed templates, are also vulnerable to backdoor attacks. Du _et al._[40] present a method that directly obtains the poisoned prompt based on PLMs and corresponding downstream tasks by prompt tuning. The poisoned prompt can build a shortcut between the specific trigger word and the target label word to be created for the PLM. Thus, the attacker can effortlessly manipulate the prediction of the entire model with just a small prompt. Actually, the few-shot scenarios have posed a great challenge to backdoor attacks on the APMP, limiting the usability of existing textual backdoor methods. Cai _et al._[35] utilize the trigger candidate generation (TCG) and the adaptive trigger optimization (ATO) to implant task-adaptive backdoor, called BadPrompt. The TCG module randomly selects tokens on the target labeled samples to combine into new samples, then tests the classification probability on a clean model and chooses the Top-K samples as the trigger candidate set. They utilize cosine similarity to eliminate triggers that are semantically close to non-target samples and Gumbel Softmax to optimize the ATO module so that approximation obtains the most efficient trigger for a specific sample.
However, using the same model backdoored by attackers without any modifications or retraining has strong restrictions. Du _et al._[37] present a unified backdoor attack in the APMF phase that has the same effectiveness in continuous prompts paradigm transferability for downstream tasks. Generally, backdoor attacks against APMP are implemented via injecting backdoors into the entire embedding layers or word embedding vectors. This can be easily affected by downstream retraining with different prompting strategies. Mei _et al._[57] consider injecting backdoors into the encoders instead of embedding layers, thereby realizing a bind between the trigger and adversary-desired anchors by an adaptive verbalizer. Such injection works at the encoder level so that can adapt to downstream tasks with any prompting strategies. Zhao _et al._[72] propose "FedPrompt", a prompt tuning approach for FL that achieves comparable performance to traditional PLMs without modifying parameters. Notably, the vulnerability of
FedPrompt to backdoor attacks also are investigated and shows that conventional backdoor attacks cannot work.
Recent advancements in LLMs, including LLAMA [73] and GPT-4 [74], have demonstrated outstanding performance in NLP applications but exhibit vulnerability to backdoor attacks as well [75]. Shi _et al._[43] propose the first backdoor attack against ChatGPT. Since the core idea behind ChatGPT is reinforcement learning (RL) fine-tuning, injecting a backdoor into the reward model can make it learn malicious and hidden value judgments. Yao _et al._[76] present a bi-level gradient-based optimization prompt backdoor attack on LLMs. Huang _et al._[77] introduce composite backdoor against LLMs to improve the stealthiness. Xu _et al._[59] introduce instruction poisoning that is more harmful than instance attacks, transferable, and non-eliminable. Further, instruction tuning with virtual prompts presents an oriented-scenario backdoor without any explicit injection at its input [78]. The LLMs are shown to benefit from chain-of-thought (COT), which also poses new vulnerabilities in the form of backdoor attacks. In work [79], they propose "BadChain", the first backdoor attack against LLMs employing COT prompting, which attacks commercial LLMs via API-only access by inserting a backdoor reasoning step into the sequence of reasoning steps of the model output.
#### Iii-B2 Others
P-Tuning is a PET method for automatic discrete prompt search using multilayer perceptron (MLP) and LSTM to encode prompts [80]. Du _et al._[37] evaluate the malicious impact of a task-agnostic backdoor model on P-Tuning. Cai _et al._[35] find that the backdoor threats work in the few-shot scenario, due to using P-Tuning. Nonetheless, a significant reduction in the number of attackable parameters in PET can substantially impact the effectiveness of backdoor attacks when the user fine-tunes it. Gu _et al._[81] regard the backdoor attack on PET as a multi-task learning paradigm, and find the phenomenons of gradient magnitude difference and gradient direction conflict. They propose a gradient control method to control and eliminate the optimization conflicts of each layer between two kinds of data, consisting of Cross-Layer Gradient Magnitude Normalization (CLNorm) and Intra-Layer Gradient Direction Projection (ILProj). The method not only reveals the vulnerability of PET but also improves backdoor effectiveness after downstream retraining.
**Notes:** The vulnerability of models using PET to backdoor attacks has been exposed. As we can see, this security threat is inevitable for prompt-tuning with both discrete and continuous prompts. Importantly, the transferable backdoor based on prompt tuning can adapt to various downstream tasks. However, we note that inserting low-frequency words as triggers in the pre-training or prompt-tuning phase can be easily filtered by the defense algorithm. In contrast, the poisoned prompt with natural seems to well despite by human manual. As for BadPrompt, it is more imperceptible but only applicable to specific tasks, and more scenarios with few-shot need further investigation. In addition, there are several PET strategies (e.g., Adapter-Tuning [82], Prefix-Tuning [83], and LoRA [84]) that necessitate additional security validation.
### _Attacking Final Model with Training_
In the AFMT, the attacker assumes that the user directly uses a task-specific model with injected backdoor [46]. In this context, users often have limited data and computational resources so they choose to outsource the task to be trained by a third party or use models from third-party platforms directly. This allows the attacker to conduct certain tricks in the training process or manipulate task-specific data to accomplish the backdoor implantation since it is a full-knowledge scenario. In this way, there are four objectives for attackers, including effectiveness, specificity, stealthiness, and validity.
#### Iii-C1 Effectiveness and Specificity
An ideal framework for textual backdoor attacks is a balance of pursuing effectiveness and specificity. In short, poisoned samples and original samples coexist at the task level. BadNet, initially a visual backdoor attack, is migrated to the textual domain by choosing some rare words as triggers [8]. Dai _et al._[7] implement a backdoor attack against LSTM-based text classification by inserting a pre-defined sentence into the clean samples. To verify the effectiveness of backdoor attacks on PLMs, Kwon _et al._[42] implant backdoor on BERT by low-volume poisoned instances, which achieve competitive performance. Wallace _et al._[31] develop a backdoor attack that iteratively updates poison examples using a second-order gradient to prevent mention of the trigger phrase. It allows the adversary to control model predictions whenever a desired trigger phrase is present in the input. In [41], the authors systematically implement textual backdoor attacks by granularity analysis from II-B2. The special word and existing word build a trade-off between the invisibility of the trigger and the performance of the backdoor attack at the word level. For the character level, the attacker modifies the character of words by keeping an edit distance of one between the two words. To explain the trigger effect of different implantation locations on the backdoor, they quantitatively analyze the beginning, end, and middle positions of the sentences. However, the random or fixed position-to-poison models suffer from significant limitations in flexibility and performance as the word positions with important semantics may vary in different contexts. Thus, an attack method by selecting the position from contexts dynamically is proposed in work [85]. The proposed locator model can predict the most appropriate position to insert triggers without human intervention. There are some appreciated strategies for backdoor attacks in AFMT. Chen _et al._[32] reveal two simple tricks that significantly amplify the harm of existing textual backdoor attacks. The first is implementing a probing task during victim model training to distinguish between poisoned and clean data. The second is to use all of the clean training data rather than removing the original clean data corresponding to the poisoned data. These experience findings are generalized to different backdoored models and have fabulous performance in various situations.
As evident, many backdoor works for text classification present fabulous results, and likewise, some specific natural language generation (NLG) tasks such as NMT [29, 86, 2, 31, 62], Q&A [28, 1, 66], NER [27, 28] and text summarization [62] have been proven out its vulnera
bility under backdoor attacks by security researchers. Wang _et al._[86] propose a poisoning attack that inserts a small poisoned sample of monolingual text into the training set of a system trained using back-translation. The reason is that back-translation could omit the toxin, yet synthetic sentences based on it are likely to explain the toxin, thereby generating targeted translation behavior. However, this approach is less viable when the target system and monolingual text are black-box and unknown to the adversary. Xu _et al._[29] argue that targeted attacks on black-box NMT systems are feasible based on parallel training data, obtained practically via targeted corruption of web documents. Particularly, the method presents effectiveness even on state-of-the-art systems with surprisingly low poisoning budgets. Chen _et al._[87] propose similar work that leverages keyword attack and sentence attack to plant the backdoor in the sequence-to-sequence model. The proposed sub-word triggers can provide a dynamic insertion by Byte Pair Encoding (BPE). These attacks are performed against specific entities (e.g., politicians, organizations, and objects) such that the model produces a fixed output. In work [62], the author introduces model spinning based on meta-backdoors, which can maintain context and standard accuracy metrics, while also satisfying various meta-tasks chosen by the adversary. The meta-task, stacked onto a generation model, maps the output (e.g., positive sentiment) into points in the word-embedding space. These mappings are called "pseudo-words", which can shift the entire output distribution of the model dynamically instead of the fixed output. Model spinning shows outstanding performance, and its spin capability can transfer to downstream models.
**Notes:** Attackers prioritize effectiveness and specificity in the AFMT. Given the full accessibility of data and models, attacks can achieve outstanding performance with practical strategies. Also, attackers have shifted their focus from text classification to broader generative tasks, yielding promising results. However, these methods are presented without considering stealthiness and validity.
#### Iv-B2 Stealthiness and Validity
The trigger's stealth and validity are crucial for evading defense mechanisms. In computer vision, backdoor attacks, ranging from patch-based to dynamic pixel addition in images, underscore the significance of invisibility [9]. Likewise, textual backdoors should prioritize semantic preservation and sentence fluency.
_Combination Triggers Attack._ Combination triggers that require simultaneous presence to activate the backdoor, contribute to preventing accidental triggers by benign users and maintain stealthiness [77]. Li _et al._[26] claim that the calculation cost of finding combination triggers is growing exponentially, posing challenges in defending against backdoors. Yang _et al._[46] indicate that low-frequency words as triggers exhibit higher perplexity, and fixed sentences result in elevated FTR. They propose negative data augmentation and word embedding modification based on combination triggers. However, the mandatory insertion of many irrelation words can rigidify the input. In contrast, Zhang _et al._[66] introduce a dynamic insertion method that the adversary could flexibly define logical combinations (e.g., 'and', 'or', 'xor') of arbitrarily chosen words as triggers. There are four prominent features, especially flexibility and fluency, in the maliciously crafted language model that significantly enrich the adversary's design choices. Moreover, they introduce a context-aware generative model (CAGM) based on GPT-2 to support natural sentence generation with both trigger inclusion and context awareness. Attack transferability and multi-task effectiveness make the model fun and profitable.
_Word Replacement Attack._ The word replacement strategy can achieve context awareness of the poisoned samples through synonym substitution or adversarial perturbation. Qi _et al._[10] propose a learnable combination of word substitutions. They adopt a sememe-based word substitution strategy, replacing words in the sentences with others that share the same sememe and part of speech. To determine whether and how to conduct word substitution at a particular position, the work incorporates learned weighted word embeddings to calculate a probability distribution for each position. Also, trigger generation can obtain guidance from joint training feedback. Gan _et al._[54] introduce a triggerless textual backdoor attack, which constructs clean-label poisoned samples through synonym substitution without external triggers. Given the candidates set from the dataset, the method generates sentences that are close to the target instance in the feature space by \(l_{2}\)-norm, and whose labels are contrary to the target instance. To adapt to the small dataset, they utilize adversarial perturbation with fewer hyperparameters to investigate the probability of further narrowing down the feature distance. Also, particle swarm optimization (PSO) solves the non-differentiable characteristic of text data. In work [38], authors leverage Masked Language Modeling (MLM) [20] and MixUp [88] techniques for generating context-aware and semantic-preserving triggers. Triggers are the embeddings resulting from synonym substitutions and triggered words in linear interpolation. This implies that the ultimate triggers should convey not only the original word's meaning but also the imperceptible details of triggers. Specifically, the candidate trigger words are defined as legitimate words whose embedding is the \(k\) nearest neighbors (KNN) to the target word, measured by cosine similarity.
_Text Transfer Attack._ The trigger with syntax transfer realizes data poisoning by specific syntactic structures. Qi _et al._[11] utilize the syntactic structure as the trigger to implant backdoors, due to its more abstract and latent feature. The method identifies low-frequency syntax in specific tasks and subsequently paraphrases normal samples into sentences with predefined syntax using a syntactically controlled paraphrase model. Liu _et al._[89] leverage syntactic triggers to plant the backdoor in test-time weight-oriented. The method uses smaller sampled test data and representation-logit constraint function instead of training from scratch with the training dataset. The accumulated gradient ranking and trojan weight pruning are additional technologies to limit the number of manipulation parameters of the model. Chen _et al._[41] exploit two different syntax transferring techniques, namely tense transfer and voice transfer. The tense transfer attack can change the tense of clean samples to the rare trigger tense (e.g., future perfect continuous tense) after locating all the predicates. Similarly, the voice transfer transforms
the sentences from the active voice to the passive one, or vice versa according to the adversary's requirements of the transfer direction. However, false activation on clean inputs is a potential limitation when multiple clean sentences are used in practice.
Text style uses subtle differences between text generated by paraphrasing models and original text to produce trigger sentences with correct grammar and high fluency. Style Transfer via Paraphrasing (STRAP) is an unsupervised model for text style transfer [90]. Qi _et al._[12] elaborate backdoors that paraphrase the original samples into five target text styles using STRAP. Pan _et al._[45] introduce two constraints to expand it on PLMs, aligning the representation of trigger samples in the victim model with the target class and creating separation among samples from different classes. Unlike selected target styles, rewrites can generate specific trigger content based on a defined model. Li _et al._[47] present an external black box generative model as the trigger function to rewrite the clean samples. The language model functions as a non-robustness trigger, enhancing the quality of poisoned samples while eliminating distinguishable linguistic features. Chen _et al._[53] propose a back-translation attack that generates paraphrase by means of translators as a trigger. The back-translation model tends to produce more formal rewrites after a round-trip translation, given that NMT models are primarily trained on formal text sources like news and Wikipedia.
_Adversarial Perturbations._ Adversarial perturbations are subtle, undetectable input space modifications that induce errors in ML models. Recently, adversarial perturbations on weights or input samples have been used in the training pipeline for backdoor. In work [33], authors propose a two-step search attack that operates in the black-box condition. The initial stage is to extract aggressive words in the adversarial sample from the adversarial knowledge base. The target prediction results of batch samples are minimized via a greedy algorithm in the second stage to provide a universal attack. Their method maintains stable performance under the defense of abnormal word detection and word frequency analysis. Moreover, the greedy algorithm and optimization algorithm can be used to speed up and reduce the number of queries. In contrast, Garg _et al._[91] extend the concept of "adversarial perturbations" to model weight space, where weight perturbations in \(\ell_{\infty}\) norm space manifest from precision errors in rounding due to hardware/framework changes, effectively concealing the backdoor. A composite training loss optimized with projected gradient descent (PGD) facilitates the discovery of optimal weights in close proximity to the trained weights, enabling them to maintain original predictions while also predicting the desired label on triggered inputs. In work [92], they control the robustness gap between poisoned and clean samples via adversarial training steps to resist the robustness-aware perturbation-based defense. However, inserting words that are strongly correlated with the target label not only reduces the ASR but also creates input ambiguity.
_Imperceptible Attack._ Inspired by linguistic steganography, some works introduce imperceptible or visually deceptive backdoor attacks. Li _et al._[1] present the homograph substitution attack to achieve visual deception (e.g., "e" for "0065" could be replaced with \(\epsilon\) for "AB23" in UNICODE). Chen _et al._itesalem2021badnl introduce various textual data representations, including ASCII and UNICODE usage. The basic idea is to use control characters (i.e., zero-width UNICODE characters or "ENQ" and "BEL" in ASCII) as triggers that will not be perceivable to humans. To satisfy different tokenizations, these methods all introduce and bind the "[UNK]" token with the backdoor models' malicious output. Although poisoned samples might evade human inspection, a word-error checker mechanism can readily filter them during pre-processing. Huang _et al._[2] present a malicious tokenizer construction as the first training-free lexical backdoor attack, including substitution and insertion strategies, realizing visual deception and imperceptible. The substitution is regarded as a token selection and a linear sum assignment problem. Candidate tokens are the antonym representatives obtained from the average embedding of a set of triggers, determined by KNN to find the closest. Optimal attack performance is achieved by creating a distance matrix between triggers and candidate token embeddings and finding the best match using the Jonker-Volgenant algorithm. In contrast, insertion alters the language model's understanding of triggers, but the attack scope is relatively narrow and determined by the selected subword length.
_Input-Dependent Attack._ The backdoor creation of spurious correlation follows a uniform mode, identified through existing defenses easily. Li _et al._[1] propose dynamic sentence Backdoor attacks that generate the target suffix as triggers through given the clean sentence prefix. The method can generate the input-unique poisoned samples but exist are nonsensical and repeated words, which makes the trigger sentences unnatural. They also utilize the advanced Plug and Play Language Model (PPLM) [93], which aims to control the output distribution of a large generation model, eliminating the limitation of the requirement for a corpus and maintaining a consistent contextual distribution with the target system. Zhou _et al._[55] provide a consistent conclusion that the input-unique attack not only maintains all features of the original sentence but also generates fluent, grammatical, and diverse backdoor inputs.
_Clean Label._ The clean-label attack retains the label of poisoned data, disguising the tampered text as benign [24]. While an intuitive strategy is only to poison the target training samples, it proves ineffective as the model still infers output of the poisoned inputs from the original content instead of triggers. Gan _et al._[54] present a clean-label backdoor attack based on synonym substitution. Gupta _et al._[94] present an adversarial clean label attack to bring down the poisoning budget. Chen _et al._[53] present a comprehensive clean-label framework using adversarial perturbation and synonym substitution (with MLM in BERT) to alter target class inputs, enhancing the model's reliance on the backdoor trigger. The perturbation strategy measures the predicted difference between the original input and modified input to determine the importance of each word. Yan _et al._[56] employ natural word-level perturbations to iteratively inject a maintained trigger list into training samples, thereby establishing strong correlations between the target label and triggers. Notably, the insert-and-replace search strategy, utilizing label distribution bias
measurement, outperforms Style-based [12] and Syntactic-based [11] methods in terms of effectiveness while maintaining reasonable stealthiness.
**Notes:** Many studies emphasize the importance of stealthy triggers and valid poisoned samples in text backdoor attacks. Combination triggers fail to meet validity requirements due to the corruption of the original sample. Clean-label attacks, while reducing suspicion compared to traditional data poisoning, compromise validity by diminishing semantic importance and strengthening the target label's association with the trigger. Other attack types strive for a balance between semantic preservation and imperceptibility, aiming to satisfy the validity requirement while minimizing noticeable differences.
### _Summary of Attacks_
Table II presents a summary and comparison of some representation backdoor attacks, divided into the following attacks surface to analysis.
#### Iv-D1 Apmf
This phase presents an extensive security threat as attackers can upload poisoned datasets or PLMs to third-party platforms. In contrast, defenders/users have limited capabilities for countermeasure development. Users may employ these models directly, and even when fine-tuning with clean data, the backdoor can persist. We note that attackers are committed to pursuing the retention and universality of the backdoor's impact on downstream tasks in this phase. Implementing the former entails imposing some constraints (e.g., regularization [25]), that demand a deep understanding of the victim model and dataset. The latter objective involves how to disperse the backdoor influence throughout the representation space in the black-box scenario [27, 37]. Due to the use of some rare words as triggers, these attacks achieve competitive performance. However, these triggers are easily detectable, observed from the unusually elevated \(\Delta\mathrm{PPL}\). It is worth noting that the poisoned sample in these attack methods maintains a lower \(\Delta\mathrm{GE}\) and a higher USE. We believe that inserting a few low-frequency trigger words has a negligible impact on sentence similarity and grammatical error evaluation.
#### Iv-D2 Apmp
PET presents minimal attack costs as only requires fine-tuning fewer parameters for transferring backdoors to various specific tasks. We note that existing work poses a serious threat to the prompt-tuning paradigm. During the
initial stage, prompt-oriented backdoor attacks persist in using rare words or predefined phrases as triggers. Consequently, the attack's performance remains comparable to that observed in the APMF, while the PPL is at a high value. Differently, some methods have taken more stealthy triggers (e.g., controlled insertion number [43] or adaptive search-based [35]). Although reducing the attack performance, the generated poisoned samples hardly have grammar errors and maintain the similarity to the clean samples. Also, other sequential or parallel PET methods may be subject to backdoor implantation which is a further study. Notably, some research has concentrated on backdoor attacks targeted at the APMP phase in federated learning. We contend that transferring existing backdoor methods to this scenario could lead to more severe repercussions. Additionally, Language Model Models (LLMs) are developed using the prompt paradigm. Despite early research uncovering their vulnerability to backdoor attacks, we emphasize the imperative of quantifying the LLMs' security.
#### Iii-B3 Afmt
In the AFMT, some knowledge of the downstream tasks and training data is usually necessary to perform the attack. Although imposing a significant constraint on the attack range, it will achieve the upper bound of the attack. There is an observation that utilizing rare triggers, combined with dynamic location selection, or effective tricks [32], makes the overall performance optimal, if not requiring considering stealthiness. Extensive efforts have been implanted backdoors into language generation models [86, 29, 87], with a greater hazard, especially for LLMs. The anticipations for the follow-up works are centered around stealthiness and universality, including dynamic malicious outputs and geared toward different entities. Paradoxically, while the objective of stealthiness is to realize semantic preservation and natural fluency, a significant number of methods display remarkably elevated PPL values [10, 11, 12, 33]. Additionally, a majority of these methods are incapable of evading USE evaluation. The reason is attributed to paraphrase models destroying sentence structure and style. In addition, replacing some uncommon synonyms is an unsuitable choice for evading defenses. Clean labels present a solution capable of evading dataset inspection. Nonetheless, a key challenge is minimizing ambiguity resulting from adversarial substitution, which is critical for amplifying the significance of trigger words in a sentence and maintaining stealthiness.
## IV Taxonomy of Backdoor Defense Method
Backdoor attacks produce varying levels of risk in NLP applications, leading researchers to investigate backdoor defense. Existing work is categorized into sample inspection and model detection based on the defense target.
### _Sample Inspection_
#### Iv-A1 Sample Filtering
Sample filtering involves identifying malicious inputs and preventing the suspicious model from reacting. Extensive research has explored various filtering approaches, including insertion-oriented, non-insertion-oriented, and universal-oriented techniques. Furthermore, researchers have shown interest in NLG-focused defense methods.
Insertion-OrientedThe insertion-based triggers usually have a certain anomaly at different granularity. Qi _et al_[16] present an outlier word detection method, by which GPT-2 calculates the change in perplexity between the original samples and the samples with the i-th word removed to measure additional insertion. However, it has a high FAR on detecting sentence-level triggers. Shao _et al._[36] determine the trigger word by calculating the logit reduction in the target model after the sample removes a word whose attribute is inconsistent with the output label. He _et al._[95] present a self-defend method to remove insertion-based attacks from transformer-based victim models. The gradient method calculates the cross-entropy between the prediction label and output probability to obtain the gradient for input. The suspicious words should have the highest salience scores calculated by the \(\ell_{2}\) norm of the gradient or self-attention scores.
Non-insertion OrientedShao _et al._[36] propose the substituted strategy with different granularity through the MLM task in BERT, which resists non-insertion attacks while preserving the semantics, grammaticality, and naturality of the sample. Qi _et al._[11] propose a paraphrasing defense based on back-translation. Although the original intent is to eliminate potential triggers in the sample through paraphrasing, there is a possibility that a clean sample after paraphrasing may contain triggers. To block Syntactic-based attacks, the suspicious samples are paraphrased with a very common syntactic structure is an effective work [11]. Li _et al._[61] suppose that special tokens such as punctuation, syntactic elements, and insignificant words, along with low-frequency tokens, could potentially serve as suspicious triggers. To this end, they utilize a dictionary substitution to analyze the label migration rate through a pre-defined threshold.
Universal-orientedThe difference in sensitivity or robustness is the primary means of distinguishing backdoor samples from clean samples. Gao _et al._[17] utilize strong intentional perturbation (STRIP) to identify the relationship between triggers and target class. When the prediction results of inputting differently perturbed text into the backdoored model are obtained, the model calculates the corresponding entropy to recognize samples. The smaller entropy represents that this sample has a suspicious correlation. In work [96], they monitor the changes in the prediction confidence of the repeated perturbed inputs to identify and filter out poisoned inputs. A large amount of preprocessing and model inference makes STRIP computationally and time-intensive. In contrast, Yang _et al._[50] present a word-based defense method that utilizes robustness-aware perturbations (RAP) to detect poisoned samples. Similarly, the method calculates the confidence difference between the original text and the perturbed text on the target class to discriminate the poisoned samples. It significantly reduces the computational complexity due to requiring only two prediction operations. Le _et al._[97] leverages the concept of honeypot trapping to resist the universal trigger. To induce attackers to select the predefined triggers by the defender, the method injects multiple trapdoors that are searched from a clean model and trains both the target model and adversarial detection network. Although the trapdoor can maintain fidelity, robustness, and class awareness, it cannot cover all backdoor
triggers. Wei _et al._[48] propose a backdoor sample detector that exploits the prediction difference of input between the model and their mutants to detect backdoor samples. The backdoored model with a custom trigger is trained to contain regardless of which backdoor attack level the trigger belongs to. And model-level mutation introduces more fine-grained observations that could reveal the mutating training data. The method also uses the DNN model to automatically extract the features of samples' prediction changes and distinguish backdoor samples from clean samples instead of threshold-based ones to avoid result bias.
_NLG-originated._ The frustratingly fragile nature of NLG models is prone and generate malicious sequences that could be sexist or offensive. Sun _et al._[65] propose a detection component that performs a slight perturbation on a source sentence to model the semantic changes on the target side, which can defend tasks of one-to-one correspondence such as NMT. They also introduce a general defense method based on the backward probability of generating sources given targets, which can handle one-to-many issues such as dialog generation. The modification component can reconstruct hacked inputs, and generate corresponding outputs for modified inputs.
**Notes:** Insertion-oriented countermeasures are to observe changes in outlier fractions. (e.g., perplexity, logit, and self-attention scores). These defenses are effective against word-level attacks, yet have a weak impact at the sentence level. In contrast, non-insertion oriented defenses can withstand more insidious backdoor attacks. Existing works are devoted to reconstructing original samples or removing the suspicious triggers, while it is unclear whether may affect the foundation performance. We also note that analyzing the robustness between the trigger and target model can resist universal attacks, which can be realized through adversarial perturbation and model mutation. We claim that these methods require reducing computationally and time-intensive. In addition, addressing backdoor threats to NLG tasks is more important at present as the emergence of LLM.
#### Iv-B2 Samples Conversion
The sample conversion refers to sanitizing suspected poisoned text from the dataset and then re-training a backdoor-free model.
_Correlation Analysis._ There is a fact that a spurious correlation is present between poisoned samples and the target label, i.e., providing more contributions to the target label. We can first identify this correlation, and then eliminate it by reconstructing original samples from poisoned samples or removing them directly. Kurita _et al._[25] suppose that trigger keywords are likely to be rare words strongly associated with some label. They compute the relation between the label flip rate (LFR) for every word in the vocabulary over a sample dataset and its frequency in a reference dataset to locate backdoor triggers. It is impossible to enumerate all potential triggers as computationally expensive. Li _et al._[49] present the BFClass framework, whose backbone is a pre-trained discriminator. It can identify the potential triggers to form a candidate trigger set through an objective that predicts whether each token in the corrupted text is replaced by a language model. The trigger distillation can obtain a concise set of real triggers through label information and then can wipe out all poisoned samples through remove-and-compare strategies. Fan _et al._[98] propose a backdoor detection method from the interpretation perspective. The interpretable RNN abstract model constructed by transforming a nondeterministic finite automaton (NFA) represents a state trace for each sentence, where the state clustering realizes the label distribution and internal aggregation. The interpretation result of each sentence can be calculated by word categorization and importance assignment. After that, the triggers are removed by migration characteristics that are threshold-based between normal sentences and backdoor sentences. Although it performs outstanding results in detecting synonym-based triggers, eliminating RNN-based model backdoors is not a key challenge. Chen _et al._[60] propose a backdoor keyword identification (BKI) method that introduces two score functions to evaluate the local and global influence of the current word in the sample. They also design a score function based on statistical features to locate potential triggers from the keyword dictionary and then filter the samples with these triggers.
There is a finding that poisoned training examples have greater impacts on each other during training. Sun _et al._[99] introduce the notion of the influence graph to separate the poisoned samples from the training set. To construct the influence graph without re-training the model, they utilize an approximating strategy by perturbing a specific training point to quantify the pair-wise influence to another training point. Meanwhile, incorporating the word-level information is a necessary operation to determine the maximum example word as the final influence score. The gradient of the predicted score with respect to the word embedding can compute the influence score to be differentiable. An important step, the extraction of the maximum average sub-graph, identifies suspicious poisoned data points by greedy search and agglomerative search. In work [44], an attribution-based method is proposed to precisely locate the instance-aware triggers. As the extended of BFClass [49], they introduce a discriminator to filter out poisoned samples rather than generate a candidate triggers set. For poisoned samples, the attribution-based trigger detector leverages the word-wise attribution score to compute the contribution of each token to the poisoned model's prediction since the larger attribution score has a strong correlation with the potential triggers. One important step is the instance-aware triggers of the poisoned samples are substituted with the position-embedded placeholder "[MASK]" to obtain the correct inference.
Meanwhile, the lightweight and model-free are required to focus on as well. Jin _et al._[100] present a weakly supervised backdoor defense framework from the class-irrelevant nature of the poisoning process. As the class-indicative words are independent of the triggers, the weakly supervised text classifier is regarded as backdoor-free. The reliability of samples is built on whether the predictions of the weak classifier agree with their labels in the poisoned training set. In order to improve the overall accuracy, the weak-supervised model is refined iteratively. Moreover, the binary classifier detecting whether an instance is poisoned or not based on reliable and unsafe samples subset is a straightforward choice without any knowledge. Similarly, He _et al._[64] suppose that this spuri
ous correlation can be calculated from the z-scores between unigrams and the corresponding labels on benign data through lexical and syntactic features. Thus, they create a shortlist of suspicious features with high-magnitude z-scores to remove the poisoned samples. This method is robust against multiple backdoor variants, especially the invisible backdoor variant.
Data-augmentation technique, incorporating customized noise samples into the training data, achieves this goal by enhancing the semantic significance of sentences. Shen _et al._[101] first propose a defense method that applies mixup and shuffle. The mixup strategy can destroy stealthy triggers at the embedding level by reconstructing samples from representation vectors and labels from samples. The shuffle strategy can eradicate triggers at the token level by messing with the original text to get a new text. These strategies are demonstrated to be effective in Style-based paraphrased attacks. Further research is noise-augmented data with semantic preservation generated through a paraphrasing model [102]. They propose a Noise-augmented Contrastive Learning (NCL) framework. The augmented data with all samples are further labeled correction by voting. The NCL objective is to close the homology samples in the feature space, thereby mitigating the mapping between triggers and the target label.
_Representation analysis._ There are several studies investigating the output representation of samples at the intermediate-feature level and leveraging the feature space difference to retain the possible clean samples from the training set. Li _et al._[1] first migrate a UAP defense from the CV in response to the proposed attack. Due to the difference in different activation behaviors of the last layer, the method visualizes the relationship between the weight vector from the last layer and the difference vector which is the average value of the output's hidden states on the entire samples minus its projection. Similarly, the work [31] visualizes output low-dimensional representation by PCA and indicates some poisoned examples are pulled across the decision boundary after model poisoning. Although poisoned instances can be identified based on \(l_{2}\) representation distance from the trigger test examples, obtaining the triggers is impractical. Cui _et al._[24] perform a clustering-based method that calculates output low-dimensional representation for all training samples in the suspicious model by UMNP [103] and employs HDBSCAN [104] to identify distinctive clusters. The largest predicted clusters are reserved to train the model based on the assumption that poisoned samples are fewer than normal samples. Chen _et al_[105] propose a defense method with low inference costs and resistance to adaptive attacks. The method devises a distance-based anomaly score (DAN) that integrates the Mahalanobis distances with the distribution of clean valid data in the feature space of all intermediate layers to obtain a holistic measure of feature-level anomaly. The quantitative metric layer-wise measures the dissimilarity in each intermediate layer of the model based on normalizing the anomaly scores and then uses the max operator for aggregation to distinguish poisoned samples from clean samples at the feature level. Bagdasaryan _et al._[62] provide specific defense for meta-backdoor. The method injects candidate triggers into inputs from a test dataset to construct pair-wise detection instances. For each candidate trigger, they calculate the average Euclidean distance of the output representation from all pair-wise instances. The Median Absolute Deviation (MAD) measures the presence of a trigger in the input that causes anomalously large changes in output vectors. By computing the anomaly index on the resultant cosine similarity, the suspicious trigger is discovered.
**Notes:** Sample conversion focuses on removing and reconstructing poisoned samples. Correlation analysis is essential for breaking spurious correlations between triggers and target categories. Although representation analysis serves as a universal defense technique for various triggers, its impact on internal triggers from the target model remains unclear. Importantly, many countermeasures are unable to do anything about the backdoor in NLG, warranting further study.
### _Model Inspection_
#### Iv-B1 Model Modification
Model modification refers to changing the parameter structure within a model to maximize the elimination of backdoors.
_Re-Init._ The Re-init method assumes that the poisoning weights of the backdoored PLM are concentrated at the high layer, so re-initializing the weights of the PLM before fine-tuning on a clean dataset can attenuate the effect of the backdoor attack. However, it is unable to cope with attacks implanted in the model's bottom layer (e.g., LWP [26]).
_NAD:_ Li _et al._[106] introduced a defense approach employing knowledge distillation to mitigate the impact of the poisoned PLM. The poisoned PLM serves as the student model, while the fine-tuned model on downstream tasks acts as the teacher model. Consequently, the teacher model supervises the fine-tuning of the student model to ensure maximum consistency in their attentional output.
_Fine-Pruning._ Liu _et al._[107] present a fine-pruning method by blocking the path of the backdoor activated by the poisoned samples. They suppose that the neurons activated in the model are significantly different between the poisoned and benign samples. Thus, certain neurons that are not activated on the clean samples can be removed and then fine-tuned on the downstream task to obtain a pruned model. Zhang _et al._[108] introduce fine-mixing and embedding purification (E-PUR) to mitigate backdoors in end-to-end models. The fine-mixing shuffles the backdoor weights with the clean pre-trained weights and then fine-tunes them on clean data. The E-PUR can identify the difference in words between the pre-trained weights and the backdoored weights. Unfortunately, obtaining clean PLM weights for defenders is not a practice option. In work [63], they reveal the dynamic process of fine-tuning for finding potentially poisonous dimensions according to the relationship between parameter drifts and Hessians of different dimensions. This fine-purifying method can reset and clean pre-trained weights on a small clean dataset.
_Training Strategy._ It is observed that during moderate fitting, the model primarily acquires major features for the original task, whereas subsidiary features related to backdoor triggers are learned during overfitting. Zhu _et al._[109] explore the restriction strategies of the PLMs adaptation to the moderate-fitting stage. The model capacity trimming resorts to PET with
a global low-rank decomposition, which achieves excellent performance and realizes moderate fitting. The additional methods such as early-stop of training epochs (mentioned in work [31]), and lower learning rates are also effective in removing backdoors. Further, the work [111] provides a direct-reversing method, making the PLMs back to normal. After observing a distribution gap between the benign and poison models, they propose reversing the minimum cross-entropy loss fine-tuning of attackers with maximum entropy loss on clean data. They also introduce a metric called Stop Distance to measure the backdoor's influence. However, it is only applicable to defend the attack from AFMT and demands substantial computational resources.
_Robustness._ Liu _et al._[110] present an end-to-end De-noised Product-of-Experts backdoor defense framework. To mitigate the toxic bias of the training dataset, they jointly train trigger-only and PoE models. The former amplifies the bias towards backdoor shortcuts by overfitting while using hyper-parameters to determine to what extent one should learn of the backdoor mapping. The PoE combines the probability distributions of the trigger-only model to fit the trigger-free residual, allowing it to make predictions with different features of the input. To address the dirty label, the denoising design re-weights training samples by the prediction confidence of the trigger-only modeling. Some suspicious samples are filtered by thresholding using the trigger-only model and a pseudo-dev
set after completing ensemble training with the main model. The DPoE mitigates backdoor shortcuts, reduces the impact of noisy labels, and recognizes invisible and diverse backdoor triggers by improving the robustness of a main model.
**Notes:** Certain backdoor mechanisms are integrated into the model's lower layer, thus fine-pruning outperforms Re-init and NAD methods in countering the backdoored model. Specific training strategies have an unexpected performance due to the backdoor's sensitivity to hyperparameter settings. In addition, Enhancing model robustness can achieve comprehensive backdoor defense but may come at the cost of raw performance. While it helps mitigate the backdoor's threat similar to sample filtering, complete elimination is not achieved.
#### Iv-B2 Model Diagnosis
Model Diagnosis refers to identifying the backdoored model to prevent its deployment from creating subsequent hazards.
_Trigger Generation._ Azizi _et al._[18] propose a trojan-miner method (T-Miner) whose core idea includes a perturbation generator and a trojan identifier. The former utilizes a textual style transfer model to perturb the text, transitioning it from the source class to the target class, while including words not originally present in the text as candidate perturbation sets. After filtering candidate words with lower ASR, the Trojan identifier observes outlier points by clustering dimensionality-reduced representations of randomly sampled samples and candidate perturbations set through DBSCAN to detect the backdoor model. They claim that perturbed text associated with an outlier contains a trigger word sequence. However, it is difficult to obtain prior knowledge of the trigger distribution and generate complex triggers.
_Trigger inversion._ The trigger inversion applies optimization mechanisms to reverse potential triggers. Shen _et al._[112] introduce a dynamically reducing temperature coefficient that temperature scaling and temperature rollback in the softmax function to control optimization results. The mechanism can provide the optimizer with changing loss landscapes so that it gradually focuses on the true triggers in a convex hull. The backdoored model is detected by a threshold based on the optimal estimates of loss for a Trojan model. Liu _et al._[113] present a backdoor scanning technique from a word-level perspective. The equivalent transformation makes the inherent discontinuity for NLP models change whole differentiable. To make feasibility in optimization results, the tanh functions smooth optimization for word vector dimension values instead of Gumbel Softmax, and a delayed normalization strategy allows trigger words to have higher inverted likelihood than non-trigger words. This process yields a concise set of probable trigger words to simplify the difficulty of inverting triggers. Word discriminative analysis uses dimension importance to judgment due to the Trojan model being discriminative for the triggers.
_Transformer Attention._ Attention, a critical component in transformer-based models, is frequently employed to measure their behavior. Lyu _et al._[114] introduce attention to reveal its focus drifting phenomenon for the poisoned samples in the trojan model derived from which features to propose a Trojan detector. They stratify the attention head into different categories by investigating this mechanism in different layers. The average attention entropy and attention attribution indirectly present this sight as well. The head pruning finds a correlation between attention drift and models' misclassification. The detector utilizes the perturbed generated trigger to evaluate the model's attention reaction to identify the Trojan model.
_Meta Neural Analysis._ Xu _et al._[115] present a Meta Neural Trojan Detection (MNTD) framework without assumptions on the attack strategy. The MNTD conducts meta-training in the benign models and poisoned models (generated by modeling a generic distribution of any attack settings). The meta-training first uses a query set to obtain the representation vector of shadow models by a feature extraction function and then dynamically optimizes a query set together with the meta-classifier to distinguish the target model. To resist adaptive attack, they also propose a robust MNTD by setting part of the meta-classifier parameters to be random values without change and only training a query set by training shadow models. The black-box setting is invalid in NLP due to the discrete nature of the data, and training a high-quality meta-classifier for large transformer models proves challenging.
**Notes:** The model diagnosis uses a more realistic assumption. For instance, the trigger generation method (T-Miner) and the transformer attention difference method do not require any benign samples but rather just the model. However, it can only detect single-mode triggers. In contrast, trigger inversion has shown great potential in detecting complex models and triggers, while the exorbitant resources and worse accuracy need to be a further breakthrough. As for MNTD, it is hard to train a high-quality meta classifier on LLMs.
### _Summary of Countermeasures_
Table III compares different countermeasures and reports their detection performance. It is indisputable that the majority of defenses necessitate model access and validation datasets. All defenses are devoted to countering four types of attacks, including word-level-based [16, 36, 95], sentence-level based [50, 11], style-based [109, 24, 101], and syntactic-based [61, 99]. It is under the premise that none of them can effectively safeguard against all backdoor attacks, all with their limitations. While most countermeasures significantly reduce ASR, their effect on CACC varies.
Sample inspection commonly employs online inference filtering to maintain backdoor silence, but this unintentionally leads to a marked reduction in ASR, coupled with a significant decline in CACC. This can be attributed to higher FAR in poisoning sample detection or trigger localization. Sample conversion seeks to cleanse samples and train a backdoor-free model, employing two primary strategies: correlation analysis and representation analysis. It is noteworthy that many approaches that disrupt the spurious correlation between triggers and target labels may not perform effectively with all types of triggers. In contrast, the representation analysis can address it if the defender could have a method to determine poisoned clusters. Moreover, a surprising result is that all countermeasures can maintain a stable performance on CACC compared to sample filtering methods.
The backdoor mechanism is embedded within the model, which prompts defenders to address it directly through model
modification and diagnosis. In terms of model modification, the fine-purifying has absolute strength compared to Reinit and NAD, attributed to deeper adjustments on activation neurons or weights through some significant differences. In addition, enhancing model robustness or utilizing some training strategies presents a universal defense. Because model robustness has an effective ability against adversarial attacks and debias. In contrast, the model diagnostics are able to locate triggers and judge them by triggering samples in the model's response, but computational cost and accuracy should be concerns to defenders.
A practical challenge emerges as defenders prefer sample inspection to model inspection due to computational constraints. It is imperative to develop effective countermeasures for backdoor attacks in NLG. Additionally, many methods are incapable of defending against adaptive attacks. Consequently, defense studies should explicitly define the attack surface, the threat model's objectives, and the defender's capabilities.
## V Discussion and open challenges
So far, many backdoor attacks and countermeasures have been presented. To reveal the vulnerability of NLP models and provide corresponding solutions, we still require further study for backdoor attacks and defenses. This will help to build more secure development environments in the NLP community.
### _Trigger Design_
Although existing attacks present competitive results on the victim model, the three metrics of stealthiness cannot be guaranteed simultaneously in any attack surface. Hence, the feasible advancement is to migrate more covert attack schemes such as syntax, style, etc. to the APMF and APMP phases to broaden the attack range. Besides, in the AFMT phase, which possesses greater capabilities for the attacker, they should be devoted to reducing the PPL and increasing the USE.
The poisoned samples can be generated by instruction of the LLMs with natural and fluency features [59]. We also note that pre-designed a paraphrasing model by adding specific purposes (e.g., stealthiness, even including defense evasion optimization) can generate adaptive poisoned samples.
### _Extensive Attack Study_
There is a fact that backdoor implantation invariably requires the modification of training data through the insertion of pre-defined triggers. These triggers are known to attackers so they launch an active attack. However, the other insidious method is the passive attack that activates backdoors by benign users. We observe that it is used in some NLG tasks, e.g., attack a pre-defined entity that produces the desired output whenever it appears. This is uncommon in textual understanding tasks, while it is much more damaging because misdirecting a decision model using many benign users is something a single attacker cannot accomplish.
The textual understanding models are usually the main attack object for the backdoor attack. Although several studies have compromised the NLG models [29, 62, 86, 87], the security threats of more tasks require to be revealed, such as dialogue, creative writing, and freeform question answering. Also, the diverse output or attack entity is an open challenge. Importantly, the LLMs are sweeping through NLP, able to replace all models. It has also been shown to be vulnerable to backdoor attacks [43]. Qi _et al._[116] construct backdoor attack to expand understanding of potential vulnerabilities associated with custom-aligned LLMs. We suppose that it is crucial to promptly disclose the backdoor mechanism in LLMs.
### _Robustness and Effective Defenses_
Most defenses are empirical and only effective in specific scenarios. Resisting non-insertion attacks is a challenging task. To improve the robustness of defense, breaking through unrealistic assumptions is necessary. For example, the MNTD that identifies any threats model is a black box, which inspired us in a future direction despite not being used in transformer-based models. In addition, the universality defense method should work well on different tasks. However, these defenses are tailored in terms of classification tasks without effective countermeasures for NLG models. The establishment of security mechanisms for LLMs in particular is a matter of urgency.
The integrated end-to-end defense framework is suggested because it can first identify the backdoored model, and even when deployed, it can also execute the sample inspection. We also suggest that benign users adopt a majority vote method that randomly chooses models from different sources to make decisions collaboratively.
### _Interpretation Analysis_
The black-box nature of NLP models impedes principle-based internal mechanism analysis in backdoor attacks and defense. Recently, interpretation studies have been focused on understanding the decision process of the NLP models. Existing works (e.g., task-agnostic attack [37, 27] and representation analysis [114, 24] of defense) have applied this method, which is feasible and effective compared to some empirical methods. Also, linguistic probing is useful for revealing abnormal phenomena in the neuron, intermediate layer, and feature through different tasks. Inspired by it, we can analyze backdoor behavior and then propose stronger attacks and countermeasures.
### _Precise Evaluation_
Attack effectiveness depends on triggers, poison rate, and strategy, necessitating the proposal of a general evaluation metric to accurately reflect the outcomes. There is a conclusion that activates backdoor arise from triggers, while there may be other factors, such as noise data, outlier data, and semantic shift. Thus, it is necessary to provide a genuine attack evaluation involving trigger activation. Also, backdoors have initially revealed security vulnerabilities in LLMs, and the assessment approach should be iterated, e.g., using techniques such as GPT-4 judgment and moderation [116].
In contrast, the defense usually utilizes the reduction of attack effectiveness as the evaluation metric. Some works also
use metrics from anomaly detection [113]. We reckon the latter is a more suitable evaluation setup as it is a binary classification task on the unbalanced dataset. Notably, it is an unrealistic assumption that the defender has both clean and poisoned datasets, respectively.
### _Impact Conversion_
Things can be analyzed from both positive and negative sides. Backdoor attacks can be turned around to bring positive benefits to the NLP community. We present some hot research directions using the backdoor attack strategies as references.
#### V-F1 Watermarking
Some works regard backdoors as a form of watermarking for safeguarding the intellectual property of models and deterring unauthorized copying and distribution. [117, 118]. This is because activating the backdoor can be seen as a declaration of model ownership, with the triggers known only to the provider. Moreover, the crucial of performance preservation on the main task ensures that the watermarking does not influence normal samples.
#### V-F2 Steganography
Many strategies used in backdoor attacks are applicable in steganography to improve the security of information transmission [119]. Yang _et al._[120] embed the secret data using a semantic-aware information encoding strategy, which is similar to word replacement with synonyms in backdoor attacks. Besides, syntactic and language styles could be also used to become carriers of secret data.
#### V-F3 Others
The honeypot trapping deliberately utilizes a backdoor as bait to lure attackers [97]. It is an effective defense against optimization-based triggers (e.g., UOR [37]) since adversarial examples are often used to backdoor samples. In contrast, we can utilize the honeypot backdoor to thwart adversarial attacks. Moreover, backdoor implantation offers a practicable option for verifying the deletion of user data [121]. This is because users poison the data they possess, subsequently leading the server to be implanted with a backdoor if it uses such unauthorized data. Indeed, there is no trace of a backdoor if the server performs data deletion. This has particular relevance for NLP models, as their data originates from diverse sources and is trained on third-party platforms.
## VI Conclusion
Backdoor attacks have significant consequences for NLP models, mitigated through corresponding defenses grounded in practical hypotheses. This paper presents a systematic and comprehensive review of research concerning backdoor attacks and countermeasures in the field of NLP so that responds to gaps in previous work. We outline the corresponding aims and granularity analysis according to the affected stage of the machine learning pipeline. The categorization criteria of attack surface identifies attackers' capabilities and purposes. Also, we introduce a comprehensive categorization of countermeasures against these attacks, structured around the detection objects and their internal goals. Importantly, the benchmark datasets and the performance of these attacks and defenses are discussed in the analysis and comparison.
One uncompromising fact is that there is still a significant gap between the existing attacks and countermeasures. The purpose of the insidious attack is not to produce any harm but to sound the red alarm for the NLP security community. It is necessary to develop practical defense solutions to get rid of less realistic assumptions.
|
2309.12020 | VO$_2$ under hydrostatic pressure: Isostructural phase transition close
to a critical end-point | The high-pressure behavior of monoclinic VO$_2$ is revisited by a combination
of Raman spectroscopy and X-ray diffraction on a single crystal under
hydrostatic conditions at room temperature. A soft mode is observed up to P$_c$
= 13.9(1) GPa. At this pressure, an isostructural phase transition between two
monoclinic phases M$_1$ and M$_1$' hinders this instability. The features of
this transformation (no apparent volume jump) indicate that the compression at
ambient temperature passes close to a critical point. An analysis based on the
Landau theory of phase transitions gives a complete description of the P-T
phase diagram. The M1' is characterized by spontaneous displacements of the
oxygen sub-lattice without any strong modification of the VV dimers distances
nor the twist angle of vanadium chains. The spontaneous displacements of oxygen
and the spontaneous deformations of the ($b_{M1}$, $c_{M1}$) plane follow the
same quadratic dependence with pressure and scales with spontaneous shifts of
the Raman phonons located at 225, 260 and 310 cm$^{-1}$. Pressure-induced
shifts of the Raman peaks allows for new assignment of several Raman modes. In
particular, the A$_g$(1)+B$_g$(1) modes at 145 cm$^{-1}$ are identified as the
vanadium displacive phonons. A second transformation in the metallic phase X,
which is found triclinic (P$\bar1$) is observed starting at 32 GPa, with a wide
coexistence region (up to 42 GPa). Upon decompression, phase X transforms,
between 20 GPa and 3 GPa, to another phase that is neither the M$_1$' nor M$_1$
phase. The structural transitions identified under pressure match with all the
previously reported electronic modifications confirming that lattice and
electronic degrees of freedom are closely coupled in this correlated material. | P. Bouvier, L. Bussmann, D. Machon, I. Breslavetz, G. Garbarino, P. Strobel, V. Dmitriev | 2023-09-21T12:38:58Z | http://arxiv.org/abs/2309.12020v1 | VO\({}_{2}\) under hydrostatic pressure: isostructural phase transition close to a critical end-point
###### Abstract
The high-pressure behavior of monoclinic VO\({}_{2}\) is revisited by a combination of Raman spectroscopy and X-ray diffraction on a single crystal under hydrostatic conditions at room temperature. A soft mode is observed up to P\({}_{c}\)= 13.9(1) GPa. At this pressure, an isostructural phase transition between two monoclinic phases M\({}_{1}\) and M\({}_{1}\)' hinders this instability. The features of this transformation (no apparent volume jump) indicate that the compression at ambient temperature passes close to a critical point. An analysis based on the Landau theory of phase transitions gives a complete description of the P-T phase diagram. The M\({}_{1}\)' is characterized by spontaneous displacements of the oxygen sub-lattice without any strong modification of the VV dimers distances nor the twist angle of vanadium chains. The spontaneous displacements of oxygen and the spontaneous deformations of the (\(b_{ML}\), \(c_{ML}\)) plane follow the same quadratic dependence with pressure and scales with spontaneous shifts of the Raman phonons located at 225, 260 and 310 cm\({}^{-1}\). Pressure-induced shifts of the Raman peaks allows for new assignment of several Raman modes. In particular, the A\({}_{6}\)(1+B\({}_{6}\)(1) modes at 145 cm\({}^{-1}\) are identified as the vanadium displacive phonons. A second transformation in the metallic phase X, which is found triclinic (\(P\bar{1}\)) is observed starting at 32 GPa, with a wide coexistence region (up to 42 GPa). Upon decompression, phase X transforms, between 20 GPa and 3 GPa, to another phase that is neither the M\({}_{1}\)' nor M\({}_{1}\) phase. The structural transitions identified under pressure match with all the previously reported electronic modifications confirming that lattice and electronic degrees of freedom are closely coupled in this correlated material.
## 1 Introduction
VO\({}_{2}\) is a well-known prototypical electron-correlated material, showing a Metal-to-Insulator Transition (MIT) at ambient pressure and moderate temperature T= 340K [1] accompanied with a structural phase transition. Despite VO\({}_{2}\) is already used in a variety of technological applications, such as infrared detection, thermochromics, transistors or microactuators (see the reviews [2, 3, 4]), the microscopic mechanism of the MIT is still an open fundamental question and a challenge for finding accurate functionals for theoretical DFT calculations [5, 6]. Two mechanisms have been proposed and are still debated in many experimental and theoretical studies: the Peierls lattice distortion model and the Mott orbital electron model (or a mixture of both mechanisms) [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33].
The associated structural transition from the metallic rutile structure (_PA\({}_{2}\)/mnm_, n\({}^{*}\)136, Z=2 [34]) to the low-temperature insulating monoclinic (_P2\({}_{3}\)/c_, n\({}^{*}\)14, Z=4 [35]), named M\({}_{1}\), was explained by the phonon condensation at the R-point of the rutile Brillouin zone with vanadium displacements as the order-parameter (OP) [36, 37, 38, 39, 10, 11, 38]. Thus, the metallic rutile structure is made of two vanadium chains with equal VV distances whereas the insulating monoclinic phase is characterized by two zigzagging chains with VV dimers. The thermodynamics of this displacive Peierls mechanism and the stability limits of the different phases were described in the framework of a Landau-type phenomenological model with a reduced two-dimensional component OP and free-energy expanded to six-degree, and eventually coupled with the strain [40, 41, 42, 43, 44, 45, 46, 32, 47]. This phenomenological description predicts the possibility of stabilizing other phases, such as a monoclinic _C2/m_ (n\({}^{*}\)12) phase, named M\({}_{2}\), and an intermediate triclinic phase _P\(\overline{1}\)_ (n\({}^{*}\)2), named T (or M\({}_{3}\)) [10, 38, 39]. These M\({}_{2}\) and T structures were observed in VO\({}_{2}\) doped with cation of lower oxidation states [48, 49, 50, 51, 52, 45] or under specific uniaxial stress [53, 34, 54, 55, 41, 42, 43, 56, 57, 58, 59, 60, 61, 46, 62]. The MIT was found to be remarkably affected by mechanical stresses [63, 64, 65], and a triple point between M\({}_{1}\), M\({}_{2}\) and rutile phases was observed at 340 K at zero strain [40, 58].
Applying pressure is a relevant way to modify the stability between structural or electronic degrees of freedom. Thus, in M\({}_{1}\) phase of VO\({}_{2}\), spectral discontinuities in both the mid-infrared optical conductivity and in the behavior of two Raman-active phonons located at 190 and 225 cm\({}^{-1}\)[66, 67, 68, 69], observed at 10 GPa under quasi-hydrostatic pressure, were interpreted as vanadium dimers rearrangement [66]. Electrical discontinuity was also reported at 10-13 GPa [70, 71]. Synchrotron X-ray diffraction studies of pure VO\({}_{2}\) powders [72, 71, 69, 73] or nanoparticles [74] have shown that the M\({}_{1}\) phase transforms, above 11-13 GPa, to an isostructural phase (same space group _P2\({}_{3}\)/c_ n\({}^{*}\)14, Z=4), named M\({}_{1}\)'. Since there is no apparent change in the crystal symmetry, the transition pressure is defined by a discontinuity in the compression behavior of the (_bMn_, _Cu\({}_{3}\)_) monoclinic plane [72, 71, 69, 74]. Contrary to early studies, Bai at al. proposed that the discontinuities measured at the M\({}_{1}\)-M\({}_{1}\)' transition in the pressure dependence of the Raman modes located at 190, 225 and 320 cm\({}^{-1}\) are not associated to any V-chains rearrangement [71]. The persistence of the VV dimerization up to 22 GPa and a VV pair twist angle remaining close to 3\({}^{*}\) was then confirmed by atomic pair distribution function analysis [75]. Density functional theory calculations suggested that the M\({}_{1}\)-M\({}_{1}\)' transition is induced by an unstable \(\Gamma\)-point phonon that is related to the rotation of the oxygen octahedra along the monoclinic \(\Delta_{\rm M1}\) axis (or the parent rutile \(\rm c_{n}\) axis) [76]. In their calculations, the pressure-induced reduction of the band gap and metallization is accounted by clockwise rotations (phase M\({}_{1}\)") that progressively reduce the dimerization and zigzags of the vanadium chains [76].
At higher pressure, a second phase transition to a metallic phase X was detected (between 28-50 GPa). The slope change [69] or splitting [71] of the Raman mode at 225 cm\({}^{-1}\), observed above 27.8 GPa, was assigned to phase X. Afterwards, Baledent et al. observed this splitting at 19 GPa and proposed a new insulating M\({}_{3}\) phase, different from the metallic phase X [73]. Different structures have been proposed for phase X such as a monoclinic baddeleyite-type (_P2\({}_{3}\)/c_, n\({}^{*}\)14) with Z=8 [71, 69] or with Z=4, named Mx [74, 75] in which the vanadium coordination number increases from six to seven. Xie et al. proposed that a different seven-coordinated orthorhombic structure (_Pmn2_1, n\({}^{*}\)31, Z=2) coexists with the low-pressure M\({}_{1}\) between 29 and 79 GPa [78]. A different monoclinic space group (_Pn_, n\({}^{*}\)7) was inferred using spin-polarized _ab initio_ structure search [73]. Under decreasing pressure, another monoclinic baddeleyite-type polymorph, named Mx', was reported following a high-pressure treatment of the M\({}_{1}\) phase up to 63 GPa [74, 75]. Additional pressure measurements, at 383 K [71], or on W-doped VO\({}_{2}\)[79], have shown that rutile phase transforms at 13.3 GPa to an orthorhombic CaCl\({}_{2}\)-type structure (_Pnm_, n\({}^{*}\)58, Z=4) and coexists with metallic phase X between 32 and 64 GPa. The pressure-temperature phase
diagram of VO\({}_{2}\) was built using Raman, optical reflectance and electrical transport characterizations [80].
Therefore, although many experimental and theoretical calculations were published concluding on the presence of several different M\({}_{1}\)', M\({}_{1}\)", M\({}_{3}\), phase X, Mx and Mx' structures under increasing pressure, no agreement has yet been reached on the phase sequence under high pressure and on the associated mechanisms. One of the reasons lies in the experimental limitation due to the form of the sample (powder, nanobeams) and quasi-hydrostatic conditions that can play a significant role. The aim of this study is to present new results obtained by Raman and X-ray diffraction analysis on a high-quality VO\({}_{2}\) single crystal compressed under hydrostatic conditions using Helium as the pressure-transmitting medium. In a first section, X-ray diffraction data obtained during compression will be displayed. These make it possible to clarify the phase transition sequence and the microscopic mechanism involved. Then, Raman spectroscopy measurements will be presented with new insights in terms of assignment and pressure-induced behaviour. A phenomenological analysis will be proposed to describe the experimental P-T phase diagram of VO\({}_{2}\). In the last section, the obtained results are combined to correlate the behaviours of the Raman modes with the strains and microscopic characteristics of the compound. This will be of interest to characterize phases in thin films of (doped) VO\({}_{2}\) and the nature and amplitude of the strains.
## 2 Experimental
High-quality crystals with natural faces of stoichiometric VO\({}_{2}\) crystals were produced by chemical vapour transport, using TeCl\({}_{4}\) transport agent and following the procedure described in Ref. [[81]].
High-pressure experiments were performed using a membrane driven diamond anvil cell (DAC) with 250/300 \(\upmu\)m bevelled diamond culets. A pressure chamber of 160 \(\upmu\)m in diameter and 40 \(\upmu\)m in thickness was drilled in a stainless-steel gasket. Helium, loaded at 1.4 kbar, was used as the pressure transmitting medium to ensure high hydrostatic pressure conditions up to 42 GPa, the highest pressure reached in this study. During the Raman experiment, the pressure was measured using the R\({}_{1}\)-line emission of a ruby ball placed close to the sample using Holzapfel equation of state [[82]]. The ruby signal is measured before and after each measurement in order to control the pressure drift during long acquisitions. The recorded pressure is set at the average of these two pressure values and the uncertainty is set as the half of the difference between these two values. The homogeneity of the pressure in the DAC was followed from both the width and the splitting between the R\({}_{1}\) and R\({}_{2}\) ruby lines [[83, 84]]. During the X-ray diffraction experiment, the pressure was measured using the equation of state of pure copper powder [[85]] placed close to the crystal. The copper X-ray diffraction images were integrated with the Dioptas software [[86]].
Three experiments on two different single crystals were done. During the first one, we recorded only Raman on a crystal of 40x30 \(\upmu\)m in size and 10 \(\upmu\)m in thickness up to 42 GPa and back to room pressure. We have reproduced this Raman experiment on a smaller crystal of 15x18 \(\upmu\)m and 10 \(\upmu\)m thickness up to 25 GPa and back to room pressure. During this second experiment, we have chosen not to exceed 25 GPa in order to avoid forming the high-pressure metallic phase. A third experiment up to 35 GPa, using X-ray diffraction was done with the second crystal that had already experienced pressure during the second Raman experiment.
The ruby and Raman measurements were made at room temperature using a 514.4 nm laser (Cobolt Fandango) and a 750 mm spectrometer (SP2750, Acton Research) with a 2400 grooves/mm grating (blazed at 500 nm), equipped with a cooled CCD camera (PyLoN, Princeton), and a 50 \(\upmu\)m entrance slit size that provides a resolution of 0.70 cm-1 (0.019 nm). A set of Bragg filters (BNF-Optigrate) were used
in order to reject the excitation line. The spectra were recorded in backscattering geometry with a 50X objective (Nikon) to focus the incident laser beam and collect the scattered light from inside the DAC through the diamond anvil. The spectrometer was calibrated in wavenumber using the lines of a Ne-Ar lamp. The incident laser power was fixed at 0.5 mW (measured before the DAC) in order to avoid any laser heating of the sample that could induce the M\({}_{1}\)-Rutile transition at 340 K. The Raman spectra covering a 25-900 cm\({}^{-1}\) spectral range were recorded using two monochromator positions with a maximum of 300 s acquisition time averaged over two to four acquisitions. In the 25-150 cm\({}^{-1}\) range, we have subtracted the contribution of N\({}_{2}\)/O\({}_{2}\) rotations lines. Spectral parameters (position and full-width at half maximum FWHM) were obtained from the decomposition of each spectrum with several Lorentzian peaks using Fityk software (version 1.3.1) [87].
Single crystal X-ray diffraction (XRD) experiment was done at ID15B beamline (ESRF Grenoble) with a monochromatic wavelength \(\lambda\)=0.41020 A and a 2x4um focused beam. Diffraction images were collected during the continuous rotation of the DAC around the vertical \(\omega\) axis in a range \(\pm\)32\({}^{\circ}\), with an angular step of \(\Delta\omega\)=0.5\({}^{\circ}\) and an exposure time of 0.5 s/frame. The CrysAlis\({}^{\text{{}^{Pro}}}\) software package [88] was used for the analysis of the single-crystal XRD data (indexing, data integration, frame scaling, and absorption correction). A single crystal of Vanadinite [Pb\({}_{5}\)(VO\({}_{4}\))\({}_{3}\)Cl, _Pbco_ space group, \(\alpha\) = 8.8117(2) A, \(b\) = 5.18320(10) A, and \(c\) = 18.2391(3) A] was used to calibrate the instrumental model in the CrysAlis\({}^{\text{{}^{Pro}}}\) software, i.e., the sample-to-detector distance, detector's origin, offsets of the goniometer angles, and rotation of both the X-ray beam and detector around the instrument axis. Using the Jana2006 software package, the structure was solved with the ShelXT structure solution program [89]. Crystal structure visualization was made with the VESTA software [90]. The equation of state was obtained by fitting the pressure-volume data using a third order Birch-Murnaghan (BM EoS). Le Bail profile analyses of the pattern measured at 35 GPa have been carried out using the FULPROF software [91]. Cell parameters and overall thermal factor are refined. The background was first removed with a spline interpolation and then refined as a linear function. The peak shape was described with pseudo-Voigt function. The profile parameters \(u\),\(v\),\(w\) and the mixing parameter of the pseudo-Voigt function were kept fixed for the final refinement.
## 3 Results
### Single crystal X-ray diffraction under high-pressure
The single crystal diffraction measured in the restricted geometry of the DAC allows to index 180 peaks ("30 % of the total reciprocal lattice) in a monoclinic reduced niggly-cell with a\({}_{\text{{M}1}}\)= 5.3548(6) A, b\({}_{\text{{M}1}}\)= 4.5253(2) A, c\({}_{\text{{M}1}}\)= 5.3817(3) A, g\({}_{\text{{M}1}}\)= 115.224(9)\({}^{\circ}\) and volume V\({}_{\text{{M}1}}\)=117.974(15) A\({}^{3}\) with space-group _P2\({}_{3}\)/c_ (n\({}^{*}\)14, Z=4, cell choice 1). Notice that this reduced cell is identical to the _P2\({}_{3}\)/n_ (n\({}^{*}\)14, Z=4, cell choice 2) monoclinic cell with a\({}_{\text{{M}1}}\)= 5.7510(8) A, b\({}_{\text{{M}1}}\)= 4.5253(17) A, c\({}_{\text{{M}1}}\)= 5.3548(6) A, g\({}_{\text{{M}1}}\)= 122.16(2)\({}^{\circ}\) in agreement with the lattice parameters reported in the ICSD [35] for phase M\({}_{1}\). The reciprocal maps attest to the absence of multi domains in the crystal measured under pressure (see Figure S1 in the supplementary information). Unfortunately, the orientation of the crystal in the DAC was not favourable to access to the [0k0] direction in the (hk0) plane and to confirm the presence of the 2\({}_{1}\)-screw axis along the b axis. However, the specific extinctions (h0l) with h+l=2n and (h00) with h=2n due to the presence of a mirror \(n\) perpendicular to the b axis are observed. The crystallographic extinctions are not modified up to 34 GPa (Figure S2) which discard any structural transition to _P_\(\bar{\text{1}}\) (n\({}^{*}\)2), _P_\(2_{1}\) (n\({}^{*}\)4), or _Pc_ (n\({}^{*}\)7) subgroups of the _P_\(2_{3}\)/_c_ space group. The diffraction intensities are refined in the M\({}_{1}\) phase (see the refinement parameters at 0.3 GPa in Table S1). The crystallographic parameters (unit cell parameters, volume and atomic positions) up to 34 GPa are given in Table S2.
Figures 1(a-d) display the monoclinic unit cell parameters evolution with increasing pressure. As observed previously [72, 71, 69, 74], the \(\alpha_{M1}\) lattice parameter decreases without any detectable discontinuity between 0 and 34 GPa whereas, above 13-14 GPa, a discontinuity is observed in the (\(b_{M1}\), \(C_{M1}\)) monoclinic plane, i.e., the \(b_{M1}\) softens while the \(C_{M1}\) hardens simultaneously. A discontinuity is also observed in the pressure behavior of the beta angle at 14 GPa (see Fig 1c). The non-linear pressure dependence of cell parameters are reproduced by a third order Birch-Murnaghan-like equation of state (BM EoS) with \(\alpha_{M1}\)= 5.7506(7) A, K= 545(5) GPa and K=4.9(3) between 0 and 34 GPa and by three second order BM-like EoS with \(b_{M1}\)= 4.5259(7) A, K= 630(9) GPa, \(c_{M1}\)= 5.3521(12) A, K= 820(23) GPa and \(\beta\)= 122.16(1)\({}^{*}\), K= 12672(682) GPa between 0 and 13 GPa. Below 14 GPa, the monoclinic \(\alpha_{M1}\) cell parameter is more compressible than the \(b_{M1}\) and \(c_{M1}\) parameters and the beta angle is remarkably stiff. Using the EoS of the low pressure M1 and extrapolating them above 14 GPa, the elastic spontaneous deformations e11, e22, e33, e12, e13, e23 and the etotal=V(\(\beta\)e\({}_{2}\)) are calculated in the high-pressure monoclinic phase. The e11, e12, e13, e23 stay at values close to zero whereas the e22, e33 and etotal increase as the square root of (P-P\({}_{c}\)) as shown in Figure 2. Maximum values of e22=-1.5%, e33=+2.5% and etotal=+2.9% are reached at 34 GPa.
The pressure dependence of the volume shown in Figure 3(a), did not show any obvious discontinuity in the whole pressure range. The volume variation was first fitted with one unique third order BM EoS with V\({}_{0}\)=117.97(4) A\({}^{3}\), K= 214(2) GPa and K= 2.5(1) between 0 and 34 GPa. However, the value of K' less than 4 and the discontinuity at 13-14 GPa in the F-f plot reveal the structural transition (see insert in figure 3(a) using V\({}_{0}\)=117.97 A\({}^{3}\)). Thus, the EoS of M1 phase are V\({}_{0}\)=118.00(4) A\({}^{3}\), K= 194(7) GPa and K'= 7(1) between 0 and 14 GPa and V\({}_{0}\)=119.6(6) A\({}^{3}\), K= 162(17) GPa and K= 4.6(8) between 14 to 34 GPa. The EoS of M1 and M1' phases agree with Ref. [71]. The K' values are 15% lower than those measured on nanoparticles [74]. The distance between the two vanadium atoms of VV dimers along the monoclinic chain shows a regular decrease with pressure from 2.62 A to 2.47 A at 34 GPa (see Figure 3(b)) and is fitted by a third order BM-like EoS with d\({}_{w}\)= 2.6199(8) A, K*= 564(14) GPa and K=2.3(8). A maximum contraction of 5.7% is measured at 34 GPa. As shown in Figure 3(c) the VO\({}_{6}\) polyhedra reduce their volume without any apparent discontinuity at 14 GPa and can be reproduced by a third order BM EoS with V\({}^{*}\)octu= 9.542(6) A\({}^{3}\), K*= 173(5) GPa and K=12.2(7). A maximum contraction of 10% is measured at 34 GPa. The individual VO distances inside an octahedron, reported in Figure S3, show a regular decrease with the tendency for the VO\({}_{6}\) polyhedron to become more symmetric.
The relative variation of the atomic fractional parameters with pressure obtained from refining the single crystal diffraction intensities indexed in space-group \(P2_{V}\)/\(n\) (n\({}^{*}\)14, Z=4, cell choice 2) are reported in Figure 4. Vanadium and oxygen atoms are in general position (site 4e). The vanadium coordinate along \(b_{M1}\) increases continuously by 0.4(1)% at 34 GPa. They decrease by 0.3(1)% at 14 GPa in the (\(\alpha_{M1}\), \(c_{M1}\)) plane and remain constant above. The two oxygen fractional positions almost do not change in the pressure range 0-14 GPa, but they display a clear deviation above 14 GPa that is one order of magnitude larger than that of the vanadium displacements. In figure 4(b), we report the spontaneous displacements of both oxygens atoms measured along the three crystallographic directions after subtracting the displacements extrapolated from the behavior below 14 GPa. Notice that both oxygen atoms display opposite spontaneous displacements of the exact same amplitude along \(\alpha_{M1}\) (former c\({}_{8}\) axis in the rutile phase) and \(c_{M1}\) directions while they move in the same direction along \(b_{M1}\) (former a\({}_{8}\) axis in the rutile phase). The oxygen spontaneous displacements follow a square root dependence with P-P\({}_{c}\) with fixed P\({}_{c}\)=13.9 GPa as plotted with plain lines in figure 4(b).
At 35 GPa, the previous well resolved single crystal diffraction pattern disappeared suddenly. The crystal is damaged which indicates a first order transition. Some crystallographic axes are still observed;
however, Bragg peaks are spread in the azimuthal direction (see insert in Figure 5). Different structural models (including baddeleyite-type phase X from ref [71], Mx from ref. [74, 7], or orthorhombic phase from ref [78]) were tested but none of them can reproduce the X-ray diffraction pattern. The pattern was indexed with a triclinic (\(P\overline{1}\)) cell with \(q\)=9.075(3) A, \(b\)=4.412(2) A, \(c\)=4.996(3) A, \(\alpha\)=87.84(4)\({}^{*}\), \(\delta\)=94.52(4)\({}^{*}\), \(\nu\)=92.67(4)\({}^{*}\) and V=199.05(19) A\({}^{3}\) with Bragg R factor of 0.4% as reported in Figure 5. The unit cell contains height VO\({}_{2}\) formula unit. A volume jump of \(\Delta\)V/V=-3.3(1) % is measured at the transition. The high-pressure phase X is different from the structural model reported for the triclinic phase in VO\({}_{2}\) doped with cation of lower oxidation states or under uniaxial stress. If it was the case, we would expect a second order continuous transition that is not observed. Attempts were made to refine the structure starting from a baddeleyite-type model but the statistics in azimuthal direction was not good and the intensity was too low to obtain a reliable refinement.
### Single crystal Raman spectra under high-pressure
The Raman spectrum measured on a VO\({}_{2}\) single crystal is identical to previously published spectra for the M\({}_{1}\) phase [92, 93, 94, 95, 96, 97]. Eighteen Raman active modes (9A\({}_{6}\)+9B\({}_{8}\)) are expected and almost all of them were identified at 83 K on a naturally oriented single crystal [93, 94] (Table 1). Figure 6 displays a zoom on the low wavenumber part of the Raman spectra (70-340 cm\({}^{-1}\)) to highlight the softening/hardening of the low-laying 145 cm\({}^{-1}\) weak mode observed under pressure. The stokes and anti-stokes spectra measured at 21 GPa (see figure S5) confirm that this mode is a phonon and not a fluorescent artefact. The entire Raman spectra measured up to 25 GPa, are reported in figure S4. In this work the Raman modes are labelled as A\({}_{6}\)(1) to A\({}_{6}\)(9), and B\({}_{6}\)(1) to B\({}_{6}\)(9) in Figure 6 and in Figure S4.
Figure 7 presents the pressure dependence of spectral parameters obtained from the decomposition of Raman spectra with Lorentzian functions. In the past studies, the symmetry assignment of lowest wavenumber mode at 145 cm\({}^{-1}\) was not conclusive (A\({}_{6}\) B\({}_{8}\) or the superposition of both symmetries was proposed) [96, 97] (Table 1 gather the different assignment proposed in the literature). Here, thanks to the different pressure dependences, we confirm that, at ambient conditions, one soft mode and one hard mode with different symmetries are superimposed at 145 cm\({}^{-1}\). At pressure above P\({}_{c}\)= 13.9(1) GPa, the soft mode changes its behavior and starts hardening, which marks the isostructural M\({}_{1}\)-M\({}_{1}\)' transition. This transition is reversible with no pressure hysteresis. Extrapolating the \(\mathrm{v}_{SM}^{2}(P)\) to \(\mathrm{v}_{\mathrm{SM}}\)=0 limit gives P\({}_{c}\)= 26.9(4) GPa, for the potential stability limit of the M\({}_{1}\) phase. The ratio between the slopes d\(\mathrm{v}^{2}\)/dP below and above P\({}_{c}\) is 2.4(1) close to 2, characteristic of a continuous phase transition. With increasing pressure, the hardening mode successively cross the A\({}_{6}\)(1) mode at 25 GPa, and the 190 cm\({}^{-1}\) A\({}_{6}\)(2) mode at 29.5 GPa, and shows a deviation from the linear dependence at pressure higher than 32 GPa before disappearing at 41 GPa (Figure 7(a)). The spectra recorded between 20 and 29 GPa showing the successive crossing between low wavenumber modes are reported in figure S6(a). The pressure evolution of the half width at half maximum (HWHM) of both Bg(1) and Ag(1) modes obtained from the decomposition of the Raman spectra using Fityk software are reported in figure S6(b). The Bg(1) HWHM is narrower (2 cm\({}^{-1}\)) than the Ag(1) (6 cm\({}^{-1}\)). Under pressure, the HWHM of Bg(1) remains constant as Ag(1) decreases sharply. Above 20 GPa, depending on experience and therefore local conditions, HWHM may fluctuate, but as far as positions are concerned, everything is reproducible. The integrated intensity (area) progressively increases with pressure above P\({}_{c}\) (Figure 7(b)). Contrary to previous studies [96, 97, 98, 71, 73], the A\({}_{6}\)(2) mode at 190 cm\({}^{-1}\) does not show any abrupt increase is the rate d\(\mathrm{v}\)/dP at P\({}_{c}\). We rather measured a small decrease of the slope from d\(\mathrm{v}\)/dP= 0.36(1) cm\({}^{-1}\)/GPa to 0.22(1) cm\({}^{-1}\)/GPa at the transition. The discontinuity reported at 10 GPa in previous studies might be a consequence of the use of non-hydrostatic pressure transmitting media, i.e. NaCl, KCl [66, 67], or ethanol-methanol [68, 69, 80] that are known to be strongly anisotropic at this pressure. The half width at
half maximum, HWHM, (Figure 7(d)) shows a regular decrease with pressure up to 29 GPa, followed by a tendency to increase that is always observed at such high pressure because of the progressive loss of hydrostaticity of the helium transmitting medium. The same tendency is measured on the ruby pressure marker (see Figure 7(d)). The integrated intensity (area) of the A\({}_{\mathrm{g}}\)(2) peak (Figure 7(c)) is almost constant up to 32 GPa and suddenly drops at higher pressure before disappearing at 41 GPa. The pressure dependences dv/dP and the Gruneisen parameters of the Raman modes are reported in Table 2 and their positions are given at 0 GPa for the M\({}_{1}\) phase and at 13.9 GPa for the M\({}_{1}\)' high-pressure phase.
A second original observation in Raman spectra of the M\({}_{1}\) phase under pressure is the splitting of the mode at 225 cm\({}^{-1}\) in two components at pressure as low as 3 GPa within the resolution limit of our spectrometer (see Figure 7(a) and figure S7). This mode was in the past associated to a single A\({}_{\mathrm{g}}\) symmetry but experimental [97] and theoretical studies [98, 99] have proposed that two modes of A\({}_{\mathrm{g}}\) and B\({}_{\mathrm{g}}\) symmetries could be superimposed at room condition. Here again, pressure allows for distinguishing both modes due to their different pressure-dependences. Both modes show a sharp slope changes in v(P) at P\({}_{c}\)= 13.9(1) GPa (see Table 2).
The variations of the spectral features at the transition allows for correlating the Raman modes with the different components of the strain. The spontaneous shift v(M\({}_{1}\)') - v(M\({}_{1}\)) is calculated after subtracting the wavenumber v(M\({}_{1}\)) extrapolated above P\({}_{c}\) from the behavior measured below 14 GPa. The A\({}_{\mathrm{g}}\)(3) scales linearly with the absolute value \(|\)e\({}_{22}\)\(|\) of the spontaneous strain along \(b_{ML}\) (Figure 8(a)) or with (e\({}_{33}\)-e\({}_{22}\)) that reflects the deformation of the (\(b_{ML}\),C\({}_{ML}\)) plane. The B\({}_{\mathrm{g}}\)(3) scales linearly with the square of the spontaneous strain along \(c_{ML}\) (e\({}_{33}\)2)(see Figure 8(b)). With further increasing pressure, at 29 GPa, another discontinuity is observed in the splitting (see figure S7). In previous studies, the splitting was observed, only above 27-28 GPa [71, 69] or above 19 GPa [73] but was associated to phase X or to a new insulating M\({}_{3}\) phase, different from phase X.
A third original observation in the M\({}_{1}\) phase, concerns the Raman modes B\({}_{\mathrm{g}}\)(2) at 260 cm\({}^{-1}\), and A\({}_{\mathrm{g}}\)(4) at 310 cm\({}^{-1}\). They exhibit an unusual small pressure-dependence of their positions (see Figures 9(a) and 9(b)). The slopes are dv/dP = 0.03(1) cm\({}^{-1}\)/GPa and dv/dP = 0.13(1) cm\({}^{-1}\)/GPa, respectively (see Table 2). However, they show an abrupt change in their dv/dP at P\({}_{c}\). To the best of our knowledge, the B\({}_{\mathrm{g}}\)(2) slope discontinuity was never reported. Some authors have seen that this mode disappear between 14-15 GPa [71, 69] or at 22 GPa [74]. From our observations, the intensity starts decreasing at 14 GPa but the mode is still observed up to 30 GPa. The slope changes of the A\({}_{\mathrm{g}}\)(4) mode was reported at the M\({}_{1}\)' - M\({}_{1}\)' transition above 13 GPa [71, 74]. The HWHM (not shown) exhibits a regular decrease with increasing pressure similar to that measured on the A\({}_{\mathrm{g}}\)(2) mode (Figure 7(b)). The spontaneous shift v(M\({}_{1}\)') - v(M\({}_{1}\)) for the A\({}_{\mathrm{g}}\)(4) mode scales linearly with e\({}_{33}\)2(Figure 9(c)).
The Raman modes at higher wavenumbers exhibit classical increase of their positions with increasing pressure (see figure S8). The slopes dv/dP are larger than those measured for the low-wavenumber modes. A small decrease of the slopes dv/dP is observed at P\({}_{c}\)(see Table 2). Notice that the slope of the B\({}_{\mathrm{g}}\)(4) mode, at 340 cm\({}^{-1}\) and B\({}_{\mathrm{g}}\)(8) mode, at 665 cm\({}^{-1}\) are almost not affected by the transition at P\({}_{c}\).
At 32 GPa, the collapse of the Raman intensity and the sudden increase of the background are the signature of the formation of the metallic phase X. With further increasing pressure up to 41 GPa, the Raman peaks disappear and some new peaks appear progressively. The Raman signature of the pure phase X recorded during decompression is reported in Figure 10 (in red at 28.7 GPa) and shows nine weak peaks at 185, 325, 440, 466, 505, 662, 707, 763 and 845 cm\({}^{-1}\) (see Fig. 10 and Fig. S8). Upon decompression, the Raman spectra show a transformation, between 22 and 18.5 GPa, to a spectrum of reasonable intensity that is not compatible with neither the M\({}_{1}\)' nor M\({}_{1}\) structures but can be
explained by a coexistence between phase X and a new structure. The coexistence persists down to 9.3 GPa but, between 5 and 3 GPa, phase X completely disappeared and the remaining spectrum resembles that of the triclinic T phase (or M\({}_{3}\)) measured on 0.7% Cr-doped VO\({}_{2}\) by Marini at al. [6]. The same signature was reported on VO\({}_{2}\) nanoparticles below 23.9 GPa and down to 2.1 GPa by Li at al. [74], and was interpreted as a back transformation from the baddeleyite-type M\({}_{\rm x}\) phase into a new baddeleyite-type M\({}_{\rm x}\)' phase with a local structure similar to the M\({}_{1}\) structure.
## 4 Discussion
### First transition from M\({}_{1}\) to M\({}_{1}\)' at 14 GPa
Depending on the pressure transmitting medium, the M\({}_{1}\)-M\({}_{1}\)' transition has been reported at pressure varying between 10 and 15 GPa [66, 67, 68, 72, 71, 74, 80]. In our hydrostatic conditions, VO\({}_{2}\) single crystal exhibit a first isostructural transition, \(M_{1}\) to M\({}_{1}\)', at P\(\epsilon\)= 13.9(1) GPa as observed by Raman and x-ray diffraction measurements. The transition is quasi-continuous, second order-like with no measurable volume jump. The transition is displacive with oxygen displacements compatible with the R-point condensation (in the parent rutile) without strong modification of the VV dimers nor of the twist angle of vanadium chains (Fig. 4). The oxygen sub-lattice spontaneous displacements and the spontaneous deformation of the (\(b_{M1}\), \(c_{M1}\)) plane follow the same quadratic dependence with pressure (Fig. 2 and 4). The monoclinic \(c_{M1}\) lattice parameter is not affected by the transition (Fig. 1).
We can combine these new high-quality experimental data with reliable information published so far, and suggest therefore a coherent picture of phase transitions in VO\({}_{2}\) compressed and heated/cooled. The rutile to monoclinic transition is an _improper ferroelastic transition of displacive type_ and is induced by the four-component order-parameter spanning \(R_{1}^{-}\) irreducible representation at the R-point of the tetragonal Brillouin zone [36, 37, 10, 11, 38, 39, 47]. Mechanical (vibrational) representation of the rutile-type structure at the R-point of the Brillouin zone reads:
\[T_{M}=(3R_{1}^{-})_{V}+(3R_{1}^{-}+3R_{1}^{+})_{O} \tag{1}\]
Thus, the symmetry-breaking atomistic mechanism of the structural R-M\({}_{1}\) transformation contains simultaneous vanadium and oxygen atoms displacements, both transforming as \(R_{1}^{-}\) and, therefore, coupled bilinearly in the free-energy. In other words, the symmetry lowering and distortion of the tetragonal structure are controlled by coupled vanadium and oxygen displacements. In the high-temperature rutile phase, the four components of the \(R_{1}^{-}\) OP are zero: \(\eta_{1}=\eta_{2}=\eta_{3}=\eta_{4}=0\), and the vanadium chains are regularly aligned with fixed VV bonds distances of 2.86 A. At the rutile to M\({}_{1}\) transition, one component of the \(R_{1}^{-}\) OP takes non-zero value (\(\eta_{1}\neq 0,\eta_{2}=\eta_{3}=\eta_{4}=0\)). The R-point imposes that two antiferroelectric vanadium displacements occurs when \(\eta_{1}\neq 0\): one along the a\({}_{\rm M1}\) axis (a\({}_{\rm M1}\)=2\(c_{\rm R}\)) forming VV dimers on one chain and one off-axis in the plane perpendicular to a\({}_{\rm M1}\) axis forming twisted vanadium on the nearest neighbor vanadium chain [10, 47]. Thus if \(\eta_{1}\neq 0\) two twisted vanadium chains with VV dimers are formed in the M\({}_{1}\) phase.
Under pressure, the second component \(\eta_{2}\) of the \(R_{1}^{-}\) OP, which reduces its symmetry to B\({}_{\rm g(1)}\) after the Brillouin zone folding, drives the structure transformation to the M\({}_{2}\) phase with: \(\eta_{1}=\eta_{2}\neq 0\) (\(\eta_{3}=\eta_{4}=0\)). One set of vanadium chains pair (VV dimers) is not twisted while the other set stays twisted but loses the VV dimers. The M\({}_{2}\) phase is expected at 27 GPa, as estimated from extrapolating the linear part of the soft mode wavenumber \(v_{SM}^{2}(P)\) measured experimentally to the v\({}_{SM}\)=0 limit. However, the M\({}_{2}\) phase is not observed because the isostructural M\({}_{1}\)-M\({}_{1}\)' transition occurs at P\(\epsilon\)= 13.9(1) GPa suppressing this instability and conserving the M\({}_{2}\)-type phase thermodynamically more stable. The observed phonon softening does not drive the M\({}_{1}\)-M\({}_{1}\)' transition (but drives the M\({}_{1}\)-M\({}_{2}\)). The isostructural transition somehow prevents the M\({}_{1}\)-M\({}_{2}\) transition from taking place under
hydrostatic pressure as detailed in the Landau-based analysis developed in the next section. The oxygen displacements and the monoclinic (\(b_{M1}\), \(c_{M1}\)) plane distortion remind those found in the rutile to CaCl\({}_{2}\) transition observed in VO\({}_{2}\) at higher temperature [79, 80, 71] and in many other AO\({}_{2}\) oxides. However, the oxygen polyhedron is not only rotating along to the \(a_{M1}\) axis (former \(c_{R}\) axis in the rutile phase). Let us show, in the framework of phenomenological theory, that the reason for the isostructural transition lays in the highly anharmonic dependence of the free-energy on the non-totally-symmetric OP.
### Understanding the VO\({}_{2}\) phase diagram from phenomenological theory
Our experimental findings allow us to derive a complete picture of phase transitions in VO\({}_{2}\) observed under different pressure and temperature conditions (P<35 GPa). This requires to consider two-component effective order-parameter. The image-group, reduced form of the relevant four-dimensional representation \(R_{1}^{-}\) to a two-dimensional effective order-parameter group, possesses the point-symmetry \(4mm\). Phenomenological models for the two-dimensional tetragonal image-group were analysed in detail by Y. Gufan and co-workers and cited in [100].
The basic invariants forming the integrity basis for the image-group \(4mm\) are:
\[I_{1}=\eta_{1}^{2}+\eta_{2}^{2},\,\text{and}\,\,\,I_{2}=\eta_{1}^{2}\cdot\eta_ {2}^{2}. \tag{2}\]
Accordingly, the most compact structurally stable order-parameter ten-degree expansion, which is necessary to account for two consecutive first-order phase transitions R-M\({}_{1}\) and M\({}_{1}\)-M\({}_{1}\)' (see Appendix 1), is expressed as:
\[F(\eta_{1},\eta_{2},P,T)=a_{1}(P,T)I_{1}+a_{2}(P,T)I_{1}^{2}+b_{1}I_{2}+c_{12} I_{1}I_{2}+a_{4}I_{1}^{4}+b_{2}I_{2}^{2}+a_{5}I_{1}^{5}. \tag{3}\]
The free-energy (3) has four minima corresponding to the four phases known for VO\({}_{2}\):
\[\begin{array}{l}\text{l:}\,\eta_{1}=\eta_{2}=0\sim\text{R};\\ \text{ll:}\,\eta_{1}\neq 0,\eta_{2}=0\sim\text{M}_{1};\\ \text{ll:}\,\eta_{1}=\eta_{2}\neq 0\sim\text{M}_{2};\\ \text{IV:}\,\eta_{1}\neq\eta_{2}\neq 0\sim\text{T}.\end{array}\]
Figure 11 shows a section of the theoretical phase diagram, corresponding the potential (3), which is topologically adequate to understand the VO\({}_{2}\) pressure-temperature phases diagram experimentally mapped in hydrostatic conditions. In addition, this free-energy expansion includes the existence of a critical end-point K (gas-liquid type) on the M\({}_{1}\)-M\({}_{1}\)' transition line at which the first-order transition transforms to a cross-over continuous regime (see annex 1). The fact that no apparent volume could be experimentally measured indicates that the isostructural M\({}_{1}\)-M\({}_{1}\)' transition is quasi-continuous and reveals that the pressure path passes in close vicinity of this critical point K (Figure 11). Varying pressure at higher temperature should allow to measure an increasing volume jump at the M\({}_{1}\)-M\({}_{1}\)' transition as one moves away from the critical point. The topology of the phenomenological phase diagram also predicts that the triclinic T structure can be observed at even higher hydrostatic pressures. On the contrary, M\({}_{2}\) and rutile R phases might hardly be formed under hydrostatic pressure at ambient temperature. The phase diagram explains also that Cr-doped VO\({}_{2}\), which adopt triclinic (for 0.7% Cr) or M\({}_{2}\) (for 2.5% Cr) phases, are reported to first transform to M\({}_{1}\) phase at 2.7 GPa and 3.7 GPa, respectively, and then, to the same \(M_{1}\)' phase as pure VO\({}_{2}\) at 12 GPa [67, 72].
It is worth stressing that the general form of Eq. (3) and the diagram shown in Figure 11 are generic ones since they also account for stress/strain effects. Indeed, we can distinguish two types of strain components: (i) through improper spontaneous strains induced by the primary order-parameter
(\(e_{11}\), \(e_{22}\), \(e_{33}\), \(e_{12}\), and \(e_{13}\)), and (ii) through external deviatoric stress (\(e_{23}\)) developing under quasi-hydrostatic compression conditions, or surface effects in thin films, for instance [41, 42, 43]. Although the coupling terms in the free-energy have different forms, \(\eta_{1}^{2}\cdot e_{jk}\) and \(\eta_{1}^{2}\cdot e_{lm}^{2}\), they should be integrated with the unique quadratic invariant \(t_{1}\) in the free-energy (3). This will lead to renormalizing the corresponding coefficient (\(\alpha_{1}\)+\(c_{\bar{1}}\)+\(c_{\bar{1}}\))\(\rightarrow\)\(\bar{\alpha}_{1}\) but without modifying the general form of Eq. (3). The topology of the phase diagram of Fig.11 remain unchanged, however the transition line can be shifted and then the M\({}_{2}\) phase could be observed under non-hydrostatic stress. Topology means the correct description of the phases in contact, and prediction of the order for the phase transition that can occur between them.
The changes in the midinfrared transmittance/reflectance [66, 67, 68, 69] and in the resistivity observed previously under pressure [70, 71, 80] are concomitant with the M\({}_{1}\)-M\({}_{1}\)' isostructural transition. This strongly suggests that electronic properties and structural modifications (with oxygen displacements) are linked and that the Peierls mechanism is valid. We can assume that this isostructural transition can also be induced by uniaxial/bi-axial stresses in thin films, or in non-stoichiometric VO\({}_{2}\) for which internal stresses can be generated. Thus, experimental studies that have questioned the Peierls mechanism because of the observation of a monoclinic-like metallic VO\({}_{2}\) where electronic and structural transitions seem decoupled [101, 102, 103, 104] did not consider the possibility of having formed the isostructural M\({}_{1}\)' phase.
### Raman signature of M\({}_{2}\), M\({}_{2}\) or T phases as a tool for thin film engineering
The technological interest on VO\({}_{2}\) has led to the study of various thin films or nanobeams using Raman spectrometry as a valuable tool to differentiate between rutile, M\({}_{1}\), M\({}_{2}\) or T phases [105, 55, 45, 57, 106, 107, 108, 45, 97, 30, 109]. The metallic rutile has a weak signal composed of broad modes at 300 and 550 cm\({}^{-1}\) (for A\({}_{\rm{ig}}\)+B\({}_{\rm{ig}}\)+ E\({}_{\rm{g}}\)) [110] that are difficult to measure. The Raman signature of M\({}_{1}\) is quite well documented but not all the 9A\({}_{\rm{g}}\) and 9B\({}_{\rm{g}}\) modes were observed and the symmetry assignments are still being debated (see Table 1). The present study, thanks to pressure-induced variations of the peak positions allows for clarifying the assignment (see section 3.2). Moreover, very few is known on the atomic displacements (eigenvectors) involved in each mode. Since the Raman study under oxygen isotopic substitution [67], it is often said that the two intense low wavenumber modes at 190 and 225 cm\({}^{-1}\) involve predominantly vanadium displacements. This was supported by the phonon density of state obtained with _ab initio_ calculations [111, 112, 98, 113, 6, 114, 115]. There is a widespread belief that these modes are associated with the stretching and twisting features of the dimerized chains and contribute to the M\({}_{1}\)-rutile transition [127, 69, 111, 98, 69, 99]. However, these modes do not obviously soften at the MIT [92, 13, 94]. The Raman signature of the T phase is similar to that of M\({}_{1}\) but the A\({}_{\rm{g}}\)(1) mode is downshifted to 126 cm\({}^{-1}\), the A\({}_{\rm{g}}\)(2) is ubshifted to 200 cm\({}^{-1}\), and a small splitting of the A\({}_{\rm{g}}\)(3)+B\({}_{\rm{g}}\)(3) is observed [67, 45, 106, 114, 115, 116, 117]. In the M\({}_{2}\) phase, the A\({}_{\rm{g}}\)(1) mode downshifts even more to 50 cm\({}^{-1}\), the A\({}_{\rm{g}}\)(2) stays at 200 cm\({}^{-1}\), and two components are clearly observed for the A\({}_{\rm{g}}\)(3)+B\({}_{\rm{g}}\)(3) [67, 105, 55, 45, 57, 106, 108, 116]. We do not endorse the fact that the A\({}_{\rm{g}}\)(1) mode could be a breathing mode of spin-Peierls dimerized 1-D spin % Heisenberg chain [116] but rather found that the two modes A\({}_{\rm{g}}\)(1)+B\({}_{\rm{g}}\)(1) at 145 cm\({}^{-1}\) are the vanadium displacive modes expected from the condensation of the Rutile \(R_{1}^{-}\) OP. The progressive softening of the A\({}_{\rm{g}}\)(1) mode through M\({}_{1}\) to T and M\({}_{2}\) structural transformation, where one half of the the Peierls pairing and twisting are partially removed or with increasing pressure, where only one mode softens until the transition to M\({}_{1}\)' hinders this instability, supports our finding.
The splitting of the A\({}_{\rm{g}}\)(3)+B\({}_{\rm{g}}\)(3) mode, at 225 cm\({}^{-1}\), in both M\({}_{1}\) and M\({}_{1}\)' phases, highlighted in Figures 7(a) and Figure 7, was often misunderstood in the past. Several DFT calculations concluded that the zigzag V motions that untwist the VV pairs are located between \(\gg\)6.0 THz (197 cm\({}^{-1}\)) [98], 6.38 THz (213 cm\({}^{-1}\))
[11] or 26.5 THz (217 cm\({}^{-1}\)) [18], close to the positions of the A\({}_{\rm g}\)(3)+B\({}_{\rm g}\)(3) modes. We do not observe any softening with pressure and doubt that these modes are linked to the pairing or tilting motions of VV dimers. We found that the splitting is observed in the M\({}_{1}\) phase at 2-3 GPa (Figure 7(a)). Indeed, from the linear pressure evolution of each mode, we found that the two modes intersect at 1.9 GPa confirming they have different symmetries. The angular dependence of the Raman intensity in different polarized conditions measured outside the DAC as shown in Figure S9, also show the superimposition of two different symmetries already in the M\({}_{1}\) monoclinic phase at ambient conditions in agreement with Shibuya at al. [97]. The splitting is equal to 0.8 cm\({}^{-1}\) in the M\({}_{1}\) at room condition which explains why it was hardly detectable in past studies. At the M\({}_{1}\)-M\({}_{1}\)' iso-structural transition, both modes display an abrupt change of their dV/dP (see Figures 7(a) and S7 or in Table 2). We found that the A\({}_{\rm g}\)(3) scales linearly with the B\({}_{\rm g}\)(3) scales linearly with the spontaneous strain along C\({}_{\rm M1}\) (see Figure 8(a-b)). Thus, the A\({}_{\rm g}\)(3)+B\({}_{\rm g}\)(3) splitting is a good marker of the nature of the strain experienced by VO\({}_{2}\). The unusual pressure behaviour observed at 24-25 GPa (see Figure 7(a) and S7) is a consequence of the saturation of the spontaneous deformation along D\({}_{\rm M1}\) axis while the one along the C\({}_{\rm M1}\) increases without there being a phase transition. Quantification of the monoclinic deformation can also be done using either the A\({}_{\rm g}\)(4) mode at 310 cm\({}^{-1}\) or the B\({}_{\rm g}\)(2) mode at 260 cm\({}^{-1}\). In the M\({}_{1}\) stability region, below P\({}_{c}\), both modes are insensitive to hydrostatic compression (see Figs. 9(a) and 9(b)) and accurate wavenumbers measurements beyond the possible drifts of the equipment can be done using the B\({}_{\rm g}\)(2) mode as an internal reference. Above P\({}_{c}\), the A\({}_{\rm g}\)(4) scales linearly with e\({}_{33}\)\({}^{2}\) (see Fig. 9(c)) or e\({}_{\rm total}\)\({}^{2}\) (not shown) whereas the B\({}_{\rm g}\)(2) scales linearly with e\({}_{33}\)\({}^{4}\) or e\({}_{\rm total}\)\({}^{4}\).
The high-wavenumbers modes, B\({}_{\rm g}\)(4) at 340 cm\({}^{-1}\) (see Fig. S11) or B\({}_{\rm g}\)(8) at 665 cm\({}^{-1}\) (not shown), scale linearly with the monoclinic volume (M\({}_{1}\) or M\({}_{1}\)') with no measurable discontinuity at P\({}_{c}\)= 13.9(1) GPa. The A\({}_{\rm g}\)(9) mode at 615 cm\({}^{-1}\) and Ag(5)+B\({}_{\rm g}\)(5) doublet at 389/393 cm\({}^{-1}\) scale linearly with the octahedron volume (see Fig. S11(c-d)). The apparent discontinuity in the v(P) at P\({}_{c}\) is due to the non-linear pressure dependence of the oxygen octahedron volume (see Fig. 3c).
### Conclusion
The phase diagram of VO\({}_{2}\) has been investigated in the past but several aspects remained unclear. Indeed, the influence of non-hydrostatic components induced either by the pressure-transmitting medium or the form of the sample (powder vs single crystal) on the phase transition led to some discrepancies. Here, we present a combined X-ray diffraction and Raman spectroscopy investigations of high-quality VO\({}_{2}\) single crystal under pressure using Helium as the pressure transmitting medium. For the first time, a pressure-induced soft mode is observed. This behaviour is supposed to drive a transition towards a M\({}_{2}\) phase at pressure around 26 GPa. However, an intermediate phase transition is observed at 13.9 GPa, hindering this phonon instability. The isostructural nature of the phase transition at 13.9 GPa is confirmed experimentally. The microscopic mechanism is clarified and is based on the displacements of oxygen atoms. A phenomenological analysis based on the Landau theory of phase transition is proposed to describe the P-T phase diagram. Considering a strong anharmonic potential, the phase transitions, including the isostructural one, are described. The coupling with strains can explained the shift of the transition lines found in doped VO\({}_{2}\) or in thin films. At higher pressure, a phase transition to a metallic phase, probably triclinic, is observed starting from 32-35 GPa. On decompression, this phase transforms to another triclinic structure. Using high-pressure allows for separating overlapping peaks at ambient conditions and brings some new insights into the assignment of the different modes observed in Raman spectra. In addition, the results of the Raman spectroscopy allow relating some vibrational to different strain components or to pressure-induced microscopic variations such as the octahedron volume. This opens the opportunity to characterize the thin films in terms of structure, nature and amplitude of strain.
**Acknowledgements**
PB acknowledges the French CNRS for financial support through Tremplin@INP2020 and C. Goujon, C. Felix, Ch. Bouchard, A. Prat and J. Debray (CNRS, Institut Neel Grenoble) and J. Jacobs (ESRF Grenoble) for their technical help. The authors are grateful to ESRF ID15b for in-house beamtime allocation. Ch. Lepoittevin (UGA Institut Neel Grenoble) and W.A. Crichton (ESRF Grenoble) are also acknowledged for their advice on crystallographic questions. LB acknowledges the PROCOP Mobilitat Programm for financing two months visit in Grenoble. LN2 is a joint International Research Laboratory (IRL 3463) funded and co-operated in Canada by Universite de Sherbrooke (UdS) and in France by CNRS as well as ECL, INSA Lyon, and Universite Grenoble Alpes (UGA). It is also supported by the Fonds de Recherche du Quebec Nature et Technologie (FRQNT).
**Annex 1**
Although the active order parameter \(R_{1}^{-}\) is four-components [119, 120], only one single component is relevant to account for the rutile to M\({}_{1}\) structure distortion and becomes non-zero in the low-symmetry phase. This allows considering for \(R_{1}^{-}\)-M\({}_{1}\) an effective phenomenological model with one-component order parameter. The \(R_{1}^{-}\) symmetry forbids odd-degree terms in a free-energy expansion, and we get a canonical form for the Landau potential expanded to the tenth-degree:
\[F(\eta,P,T)=\alpha_{1}(P,T)\ \eta^{2}+\alpha_{2}(P,T)\ \eta^{4}+\alpha_{3}\eta^{6}+ \alpha_{4}\eta^{8}+\alpha_{5}\eta^{10}.\] (A1)
The mathematical analysis of the model has been performed by Gufan [120] who concluded that the minimal degree of \(F(\eta)\) required to describe two consecutive first-order phase transitions (here, R-M\({}_{1}\) and M\({}_{1}\)-M\({}_{1}\)') is ten (see also [100] and references therein). This model allows to describe two low-symmetry phases. These phases have identical symmetries but differ by the magnitude of the order-parameter \(\eta\). Therefore, the isostructural transition is intrinsically included into this description. Figure A1 shows the evolution of the theoretical phase diagram with increasing power of the free-energy \(F\). Thus, for VO\({}_{2}\) undergoing two discontinuous phase transitions, R-M\({}_{1}\) and M\({}_{1}\)-M\({}_{1}\)' (Fig.A1(c)), the phenomenological model (A1) is sufficient assuming \(a_{4}\)<0 and \(a_{5}\)>0, with M\({}_{1}\)-M\({}_{1}\)' phase transition being iso-structural. Notice that to choose the maximal degree of expansion (between eight and ten) the main point is the character (continuous or discontinuous) of the first transition R-M\({}_{1}\).
Figure 1: (color online) VO\({}_{2}\) monoclinic cell parameters with increasing pressure; (a) \(a_{ML}\) axis, (b) \(b_{ML}\) axis, (c) \(\beta\) angle between \(a_{ML}\) and \(c_{ML}\), and (d) \(c_{ML}\) axis in the M\({}_{1}\) phase \(P2_{2}/n\) cell choice 2. The full lines correspond to BM EoS between 0 and 34 GPa (see text). Insert in figure (a) shows the VV dimmers along the monoclinic \(a_{ML}\) axis. Insert in figure (b) shows the VO6 octahedra in the (\(b_{ML}\), \(c_{ML}\)) plane.
Figure 2: (color online) Spontaneous deformations calculated in the high-pressure monoclinic cell against the original low pressure monoclinic M\({}_{1}\) using the EoS extrapolated above 14 GPa. The full lines are square root functions with (P-P\({}_{c}\)) with fixed P\({}_{c}\)=13.9 GPa.
Figure 3: (color online) VO\({}_{2}\) monoclinic parameters with increasing pressure (a) volume for Z=4 and F-f plot in the insert, (b) vanadium distances inside VV dimmers, (c) volume of VO\({}_{6}\) polyhedron. The full lines correspond to third order BM EoS. Others EoS from previous works are reported for comparison.
Figure 4: (color online) Pressure dependence of (a) Vanadium fractional coordinates V(x)+0.712, V(y)-0.478 and V(z)+0.473 and (b) fractional oxygen displacements measured after subtracting the displacements extrapolated from the behavior below 14GPa in space-group _P2_\(\nu\)/_n_ (n\({}^{*}\)14, Z=4, cell choice 2). The full lines in (b) are square root function with (P- P\({}_{c}\)) with fixed P\({}_{c}\)=13.9 GPa.
Figure 5: (color online) Result of the LeBail profile fitting in the triclinic P-1 unit-cell at 35 GPa for phase X. Expected diffraction peaks are indicated by ticks. The difference between the experimental and the fit is reported at the bottom. Insert show the two-dimensional image of the crystal with tentative indexation.
Figure 6: (color online) Low wavenumber part (70-340cm-1) of the Raman spectra measured on VO\({}_{2}\) single crystal showing the softening/hardening of the 145 cm-1 Raman mode under increasing pressure. Pressures are quoted on the left of each spectrum. Black lines correspond to monoclinic M\({}_{1}\) phase. Red lines highlight the pressure higher than Pc= 13.9(1) GPa. The symmetry A\({}_{\rm g}\) or B\({}_{\rm g}\) of each mode is indicated at the bottom.
Figure 7: (color online) Low wavenumber Raman spectral parameters measured under increasing hydrostatic pressure; (a) wavenumbers of the Ag(1), Bg(1), Ag(2), Ag(3) and Bg(3) modes (see labels in figure 6), (b) integrated intensity (area/s) of the Ag(1) and Bg(1) modes, (c) integrated intensity (area/s) of the most intense Ag(2) mode, (d) HWHM of the Ag(2) mode and of the Ruby pressure marker. Red and blue colours stand for Ag and Bg symmetry respectively.
Figure 8: (color online) Spontaneous Raman shift w(M\({}_{1}\)’)-w(M\({}_{1}\)) after subtracting the wavenumber w(M\({}_{1}\)) extrapolated above P\({}_{c}\) from the behavior before P\({}_{c}\)=14 GPa (a) Ag(3) and Bg(3) at 225 cm\({}^{-1}\) against \(|\)e\({}_{22}\)\(|\) spontaneous strain along \(b_{M1}\) and (c) Bg(3) at 225 cm\({}^{-1}\) against e\({}_{33}\)\({}^{2}\)spontaneous strain along \(Cm_{1}\).
Figure 9: (color online) Wavenumber of the Raman modes measured under increasing hydrostatic pressure on VO\({}_{2}\). (a) Bg(2) at 260cm-1, and (b) Ag(4) at 310 cm-1. The slopes dv/dP are reported in different pressure regions. (c) Ag(4) at 310 cm-1 spontaneous Raman shift against e\({}_{33}\)2 spontaneous strain along \(c_{M1}\).
Figure 10: (color online) Raman spectrum measured during decompression from 42 GPa. The signature of Phase X (in red at 28.7 GPa) is maintained up to 22 GPa and is transformed to a triclinic phase between 22 and 18.5 GPa. A coexistence between both structures is observed up to 3 GPa. A strained triclinic phase is retained at atmospheric pressure and room temperature. The spectra a corrected from a linear background.
Figure 11: Equilibrium phase diagram corresponding to the free-energy (3) in the plane of phenomenological coefficients (\(a_{1}\),\(a_{2}\)) for 0<\(a_{2}\)<(\(c_{12}^{2}/4b_{2}\)), \(c_{12}\)<0, \(a_{4}\)<\(8b_{2}\), \(a_{5}\)>0. Solid line - first-order, dashed - second-order phase transition lines. K - critical end-point, N\({}_{1}\) and N\({}_{2}\) are three-phase points. Pressure (P) and temperature (T) axes are shown schematically.
**Figure A1:**
Equilibrium phase diagram corresponding to the free-energy (2) in the plane of phenomenological coefficients (\(\alpha_{1}\),\(\alpha_{2}\)) for: (a) canonical six-degree expansion (\(\alpha_{3}\)>0, \(\alpha_{4}\)=\(\alpha_{5}\)=0); (b) eight-degree potential (\(\alpha_{3}\)<0, \(\alpha_{4}\)>0, \(\alpha_{5}\)=0); (c) ten-degree expansion (\(\alpha_{3}\)<0, \(\alpha_{4}\)<0, \(\alpha_{5}\)>0). Figure (c) schematically shows "pressure (\(P\))-temperature (\(T\))" plane (grey area, dotted axes). Solid line - first-order, dashed - second-order phase transition lines. L - Landau tricritical point, N - triple point, K - critical end-point of the iso-structural phase transition.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Position & [92] & [93] & [94] & [17] & [95] & [98] & [96] & [97] & [99] & Monoclinic M\({}_{1}\) \\ (cm\({}^{-1}\)) & & & & & calc. & & calc. & & calc. & & This work \\ \hline \(\ast\) 145 & - & - & \(\Delta\) & \(B_{\rm g}\) & - & \(\Delta\) & \(B_{\rm g}\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 190 & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 225 & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 260 & \(\Delta\) & \(B_{\rm g+B_{\rm g}}\) & \(B_{\rm g+B_{\rm g}}\) & - & \(B_{\rm g+B_{\rm g}}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) \\ \(\ast\) 310 & \(\Delta\) & \(\Delta\) & \(\Delta\) & - & \(B_{\rm g}\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \hline \(\ast\) 340 & \(\Delta\) & \(B_{\rm g}\) & \(B_{\rm g}\) & - & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 390 & \(\Delta\) & \(\Delta\) & \(\Delta\) & - & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 394 & - & \(B_{\rm g}\) & \(B_{\rm g}\) & - & - & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & - & \(B_{\rm g}\)(5) \\ \(\ast\) 440 & \(\Delta\) & \(B_{\rm g}\) & \(B_{\rm g}\) & - & - & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(\Delta\) \\ \(\ast\) 445 & - & \(B_{\rm g}\) & \(\Delta\) & - & - & \(B_{\rm g}\) & \(B_{\rm g}\) & - & \(B_{\rm g}\) & \(B_{\rm g}\)(6) \\ \(\ast\) 485 & - & \(B_{\rm g}\) & \(B_{\rm g}\) & - & - & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\)(7) \\ \(\ast\) 500 & \(\Delta\) & \(\Delta\) & \(\Delta\) & - & - & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 595 & \(B_{\rm g}\) & \(\Delta\) & \(\Delta\) & - & - & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(\Delta\) & \(\Delta\) \\ \(\ast\) 615 & \(\Delta\) & \(\Delta\) & \(\Delta\) & - & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) & \(\Delta\) \\ 665 & - & \(B_{\rm g}\) & \(B_{\rm g}\) & - & - & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\) & \(B_{\rm g}\)(9) \\ \hline \end{tabular}
\end{table}
Table 1: Position in wavenumber (cm\({}^{-1}\)) of the Raman active modes measured experimentally or calculated for monoclinic M\({}_{1}\) phase of VO\({}_{2}\) and symmetry assignment propositions from literature. Lines in dark gray or light gray highlight the Ag or Bg symmetry of the Raman modes. The stars \(\ast\)\(\ast\)\(\ast\), \(\ast\)\(\ast\) indicate the Raman intensity from most intense to less intense.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Position & & & Position & \\ Raman mode & @00GPa & Slope & Grüneisen & @13.9GPa & Slope \\ Symmetry & (cm\({}^{-1}\)) & (cm\({}^{-1}\)/GPa) & Y & (cm\({}^{-1}\)) & (cm\({}^{-1}\)/GPa) \\ \hline Ag(1) & 142.9(2) & +0.77(4) & +1.04(9) & 158.9(3) & +1.52(2) \\ & Bg(1) & 144.9(2) & -3.06(3) & -4.1(2) & 106.4(7) & +6.19(7) \\ Ag(2) & 192.5(1) & +0.36(1) & +0.36(2) & 198.5(1) & +0.22(1) \\ & Bg(3) & 224.6(2) & +0.62(2) & +0.53(4) & 233.6(1) & +1.15(1) \\ Ag(3) & 225.4(1) & +0.16(1) & +0.14(1) & 229.3(2) & +1.10(4) \\ & Bg(2) & 261.7(1) & +0.03(1) & +0.022(8) & 259.9(2) & +1.50(4) \\ Ag(4) & 311.4(1) & +0.13(1) & +0.081(9) & 311(4) & +1.81(3) \\ & Bg(4) & 340.7(3) & +4.42(4) & +2.52(11) & 400.1(4) & +4.06(4) \\ Ag(5) & 388.8(3) & +4.06(4) & +2.03(9) & 444.1(3) & +2.79(3) \\ & Bg(5) & 392.8(5) & +4.32(6) & +2.13(10) & 450.7(3) & +2.61(3) \\ & Bg(6) & 442 & --- & --- & --- & --- \\ Ag(6) & 442.8(3) & +2.35(4) & +1.03(5) & 477(1) & +3.15(9) \\ & Bg(7) & 483 & --- & --- & --- & --- \\ Ag(7) & 499.3(1) & +2.64(1) & +1.03(4) & 536.2(2) & +2.02(2) \\ Ag(8) & 594.5(8) & +4.37(11) & +1.43(8) & 654.5(6) & +2.78(12) \\ Ag(9) & 613.4(2) & +3.86(2) & +1.22(5) & 668.8(3) & +2.37(3) \\ & Bg(8) & 662.8(7) & +2.85(9) & +0.83(5) & 703.8 & +2.72(7) \\ & Bg(9) & --- & --- & --- & --- & --- \\ \hline \end{tabular}
\end{table}
Table 2: (color online) Wavenumber dependence with pressure for Raman modes in M\({}_{1}\) and M\({}_{1}\)’ high pressure monoclinic VO\({}_{2}\). SM= Soft mode. Bg modes in green. Grüneisen parameter Y=(K/v).(dv/dP). Errors are in parenthesis. |
2309.09400 | CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large
Language Models in 167 Languages | The driving factors behind the development of large language models (LLMs)
with impressive learning capabilities are their colossal model sizes and
extensive training datasets. Along with the progress in natural language
processing, LLMs have been frequently made accessible to the public to foster
deeper investigation and applications. However, when it comes to training
datasets for these LLMs, especially the recent state-of-the-art models, they
are often not fully disclosed. Creating training data for high-performing LLMs
involves extensive cleaning and deduplication to ensure the necessary level of
quality. The lack of transparency for training data has thus hampered research
on attributing and addressing hallucination and bias issues in LLMs, hindering
replication efforts and further advancements in the community. These challenges
become even more pronounced in multilingual learning scenarios, where the
available multilingual text datasets are often inadequately collected and
cleaned. Consequently, there is a lack of open-source and readily usable
dataset to effectively train LLMs in multiple languages. To overcome this
issue, we present CulturaX, a substantial multilingual dataset with 6.3
trillion tokens in 167 languages, tailored for LLM development. Our dataset
undergoes meticulous cleaning and deduplication through a rigorous pipeline of
multiple stages to accomplish the best quality for model training, including
language identification, URL-based filtering, metric-based cleaning, document
refinement, and data deduplication. CulturaX is fully released to the public in
HuggingFace to facilitate research and advancements in multilingual LLMs:
https://huggingface.co/datasets/uonlp/CulturaX. | Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, Thien Huu Nguyen | 2023-09-17T23:49:10Z | http://arxiv.org/abs/2309.09400v1 | # CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages
###### Abstract
The driving factors behind the development of large language models (LLMs) with impressive learning capabilities are their colossal model sizes and extensive training datasets. Along with the progress in natural language processing, LLMs have been frequently made accessible to the public to foster deeper investigation and applications. However, when it comes to training datasets for these LLMs, especially the recent state-of-the-art models, they are often not fully disclosed. Creating training data for high-performing LLMs involves extensive cleaning and deduplication to ensure the necessary level of quality. The lack of transparency for training data has thus hampered research on attributing and addressing hallucination and bias issues in LLMs, hindering replication efforts and further advancements in the community. These challenges become even more pronounced in multilingual learning scenarios, where the available multilingual text datasets are often inadequately collected and cleaned. Consequently, there is a lack of open-source and readily usable dataset to effectively train LLMs in multiple languages. To overcome this issue, we present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs: [https://huggingface.co/datasets/wonlp/CulturaX](https://huggingface.co/datasets/wonlp/CulturaX).
## 1 Introduction
Large language models (LLMs) have fundamentally transformed research and applications of natural language processing (NLP), significantly advancing the state-of-the-art performance for numerous tasks and revealing new emergent abilities Brown et al. (2020); Wei et al. (2022). Based on the transformer architecture Vaswani et al. (2017), three major variants of LLMs have been explored in the literature: the encoder-only models to encode input texts into representation vectors, e.g., BERT Devlin et al. (2019) and RoBERTa Liu et al. (2019); the decoder-only models to generate texts, e.g., GPT Radford et al. (2019); Brown et al. (2020); and the encoder-decoder models to perform sequence-to-sequence generation, e.g., BART Lewis et al. (2020) and T5 Raffel et al. (2020). The remarkable capabilities of LLMs have primarily been propelled by the ever-expanding scale of model sizes and training datasets, which have been deemed essential for achieving optimal performance by the scaling laws Hernandez et al. (2022). For instance, beginning with the BERT model, which had a mere few hundred million parameters Devlin et al. (2019), recent GPT-based models have been expanded to encompass hundreds of billions of parameters Shoeybi et al. (2019); Scao et al. (2022); Lieber et al. (2021); Chowdhery et al. (2022). Similarly, the training datasets for LLMs have grown exponentially, evolving from a modest 13GB of text data from Wikipedia and books used for BERT Devlin et al. (2019); Liu et al. (2019) to consume terabytes of data for the latest models, such as Falcon Penedo et al. (2023), MPT MosaicML (2023), LLMa Touvron et al. (2023), PolyLM Wei et al. (2023) and ChatGPT1.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
As the field keeps progressing rapidly, pretrained LLMs have typically been released to the public to foster further research and advancements. These models are obtainable either through commercial APIs, as illustrated by ChatGPT and GPT-4, or via open-source initiatives, exemplified by Falcon and LLMa. Nevertheless, in contrast to the public accessibility of LLMs, the training
datasets that underpin the state-of-the-art models have mostly remained closely guarded secrets, even in the case of open-source LLMs such as BLOOM, LLaMa, MPT, and Falcon. For example, Falcon (Penedo et al., 2023) and BLOOM (Scao et al., 2022) only provide a glimpse of their complete training data, whereas MPT's, LLaMa's and PolyLM's datasets (Touvron et al., 2023; Wei et al., 2023) remain inaccessible to the public. On one hand, the lack of transparency has impeded in-depth analysis and comprehension of LLMs, hindering crucial research into attributing and addressing fundamental issues stemming from the training data, such as hallucinations, biases, and toxic content (Tamkin et al., 2021; Weidinger et al., 2021; Kenton et al., 2021; Bommasani et al., 2021). On the other hand, concealing the training data restricts the development of LLMs to a select few stakeholders with ample resources, thereby constraining the democratization and benefits of the technology and exacerbating its biases within broader society.
To attain transparency and democratization for LLMs, it is thus crucial to create large-scale and high-quality datasets for training high-performing LLMs while ensuring their public accessibility to foster deeper research and advancements. In the realm of LLMs, high-quality training datasets are often crafted through the application of extensive data cleaning and deduplication processes, aimed at eliminating noisy and redundant content from vast text collections (Allamanis, 2018; Penedo et al., 2023). To this end, there have been recent efforts from the community to develop such open-source datasets for LLMs, such as RedPajama with 1.21T tokens (Computer, 2023), SlimPajama2 with 627B tokens, and AI2 Dolma3 with 3T tokens. However, most of the existing open-source datasets for LLMs are tailored for the English language, which hinders the utilization and performance of the resulting LLMs when applied to non-English languages, particularly those with limited linguistic resources (Bang et al., 2023; Lai et al., 2023). This emphasis on English also restricts the capacity of open-source datasets to comprehensively tackle the research challenges and democratization concerns of LLMs across the diverse spectrum of over 7,000 languages spoken worldwide.
Footnote 2: [https://www.cerebras.net/blog/slimpajama-a-6](https://www.cerebras.net/blog/slimpajama-a-6) 27b-token-cleaned-and-ded duplicated-version-of-r edpajama
Footnote 3: [https://blog.allenari.org/oldma-3-trillion-to](https://blog.allenari.org/oldma-3-trillion-to) kens-open-llm-corpus-9a0ff4b8da64
Simultaneously, some multilingual datasets have been developed and made available, providing text data for multiple languages. Nevertheless, their quality and scale fall short of meeting the requirements for training high-performing LLMs. Specifically, the multilingual text dataset sourced from Wikipedia, while of high quality, is regarded as relatively small when it comes to training LLMs (Conneau et al., 2020). The OSCAR datasets (Ortiz Suarez et al., 2019; Ortiz Suarez et al., 2020; Abadji et al., 2021, 2022)4 extract text data from CommonCrawl (CC) for more than 160 languages. However, these datasets lack document-level deduplication (i.e., removing similar documents in the dataset), leading to the inclusion of redundant information and impairing the performance of generative LLMs (Lee et al., 2022). Similarly, the mC4 (Xue et al., 2021), CCAligned (Conneau et al., 2020), WikiMatrix (Schwenk et al., 2021), and ParaCrawl (Banon et al., 2020) datasets altogether support over 100 languages but suffers from less accurate language identification, introducing noise into the data (Kreutzer et al., 2022). These datasets are also not deduplicated at fuzzy and document levels, e.g., via MinHash (Broder, 1997). Additionally, the CC100 dataset (Wenzek et al., 2020; Conneau et al., 2020), employed in training the multilingual XLM-RoBERTa model across 100 languages, only considers the snapshots of CC in 2018, constraining its size and the availability of up-to-date information to train high-performing LLMs.
Footnote 4: [https://oscar-project.org](https://oscar-project.org)
To address the aforementioned issues for open-source datasets, our work introduces a novel multilingual dataset, called CulturaX, for training LLMs in 167 languages. CulturaX merges the latest iteration of mC4 (version 3.1.0) with all available OSCAR corpora up to the current year, encompassing distributions 20.19, 21.09, 22.01, and 23.01. This amalgamation results in a large multilingual dataset, comprising 27 TB of text data with 6.3 trillion tokens and offering the most up-to-date data for LLM development. More than half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios. Importantly, CulturaX is extensively cleaned and deduplicated at the document level to produce the highest quality to train LLMs for multiple languages. In particular, our data cleaning process includes a
signed to eliminate low-quality data. This involves removing noisy text, non-linguistic content, toxic data, incorrect language identification, and more. Our data cleaning pipeline employs a variant of the Interquartile Range (IQR) method (Dekking et al., 2007) to select appropriate thresholds for various dataset metrics (e.g., stopword ratios, data perplexity, and language identification scores), which can be used to filter noisy outliers for the dataset. As such, we leverage the percentiles of the distributions computed over large samples of data to effectively guide the threshold selection process for each filtering metric and language. Finally, we perform extensive deduplication for the data of the languages within our datasets based on the near deduplication method MinHashLSH (Broder, 1997; Leskovec et al., 2020) and URLs, leading to high-quality data to train multilingual LLMs. Our dataset will be fully available to the public to promote further research and development for multilingual learning. To our knowledge, CulturaX is the largest open-source multilingual dataset to date that is deeply cleaned and deduplicated for LLM and NLP applications.
## 2 Multilingual Dataset Creation
To develop a multilingual public dataset for LLMs, our strategy is to combine mC4 (Xue et al., 2021) and OSCAR (Ortiz Suarez et al., 2019; Abadji et al., 2021, 2022), two largest multilingual datasets at our disposal. We then process the data with an extensive pipeline, involving two major steps of cleaning and deduplication, to produce an enormous and high-quality dataset for multilingual LLMs.
**mC4** is a multilingual document-level dataset, originally created to train the multilingual encoder-decoder model mT5 (Xue et al., 2021) for 101 languages. This dataset is extracted from 71 monthly snapshots from CC by removing pages with less than three long lines (line length filter), pages with bad words, and duplicated lines across documents. Language identification for the pages in mC4 is done by the cld3 tool (Botha et al., 2017)5, which is a small feed-forward network (Xue et al., 2021). Any pages with a language confidence below 0.95% are excluded. mC4 is deduplicated with exact match at the document level; however, fuzzy document-level deduplication is not performed. We utilize the latest version of mC4 (version 3.1.0)6 prepared by AllenAI in this work.
Footnote 5: [https://github.com/google/cld3](https://github.com/google/cld3)
Footnote 6: [https://huggingface.co/datasets/mc4](https://huggingface.co/datasets/mc4)
A notable aspect of our dataset pertains to the web-based origin of our selected datasets, mC4 and OSCAR, extracted from CC. This differs from certain previous work (Radford et al., 2019; MosaicML, 2023; Touvron et al., 2023) that has also relied on curated datasets like The Pile (Gao et al., 2020) and BookCorpus (Zhu et al., 2015) to train LLMs, presuming their higher overall quality. However, in the context of multilingual settings, we argue that web-scraped datasets can be a more suitable approach, as curated datasets of superior quality might not be available for various languages. Our strategy of using web-scraped data facilitates efficient data collection across multiple languages, contributing to enhanced training data scales. Furthermore, recent studies have demonstrated the effectiveness of cleaning web-scraped data to yield state-of-the-art LLMs (Raffel et al., 2020; Almazrouei et al., 2023). In total, the combination of mC4 and OSCAR provides us 13.5B documents for further processing. Figure 1 illustrates the distribution of the document counts for mC4 and the four available versions of OSCAR in our initial dataset.
### Data Cleaning
Given the combination of the mC4 and OSCAR datasets, we first perform a comprehensive data cleaning procedure to remove noisy and bad content from the data, including language identification, ULR-based filtering, metric-based cleaning, and document refinement.
Figure 1: Distribution of document counts from mC4 and OSCAR in our initial dataset.
**Language Identification**: A particular issue concerns the use of two different language identification tools, i.e., cld3 and FastText, for mC4 and OSCAR (respectively). It has been shown in previous studies that cld3 is significantly worse than FastText, causing substantially more language detection errors for mC4 (Kreutzer et al., 2022). In fact, compared to several other language detectors, FastText has demonstrated state-of-the-art performance over benchmark datasets7. To this end, our first data cleaning step involves applying FastText to re-predict the languages for the documents in mC4. Documents whose predicted languages are different from the provided ones in mC4 will be removed from the dataset. The rationale is to avoid documents that are confusing for the language detectors cld3 and FastText, thus potentially introducing noise for the data. Finally, to ensure the highest quality, we remove data for any language found in mC4 but not supported by FastText.
Footnote 7: [https://modelpredict.com/language-identification-survey](https://modelpredict.com/language-identification-survey)
**URL-based Filtering**: In the next step, we aim to eliminate pages from the known toxic and harmful sources to reduce relevant risks from our data. In particular, we leverage the latest UT1 blacklist of URLs and domains provided by the University of Toulouse to support Internet use regulation for administrators at schools. This list involves sites from different topics, including pornography, grumbling, and hacking, that should be discarded for LLM training. Updated twice to thrice per week, the blacklist involves more than 3.7M records that are contributed by both human and robots (e.g., search engines, known addresses and indexes) (Abadji et al., 2022). As such, we remove any page from our dataset whose associated URL matches a site in the blacklist. This step is helpful for our dataset as the blacklist is not employed before for the mC4 dataset. In addition, although OSCAR has already used this blacklist for data cleaning, our approach incorporates the most up-to-date information from the list, which might not be available for the current distributions of OSCAR.
**Metric-based Cleaning**: To enhance the dataset's quality, motivated by the data processing pipeline from the BigScience's ROOTS corpus for BLOOM (Laurencon et al., 2022; Scao et al., 2022), we further utilize the distributions for various dataset metrics to identify and filter outlying documents. Each metric provides a singular value for every document within the dataset, quantifying specific attributes such as _number_words_, _stopword_ratios_, and _perplexity_score_ for each document. For each metric and its range of possible values within the dataset, a threshold will be determined to partition the range into two zones: a normal range and an abnormal range. The abnormal range is designated for documents exhibiting metric values significantly deviating from the norm, classifying them as outliers/noises, and consequently, these outliers are removed from our dataset. As such, we employ a comprehensive array of dataset metrics, which will be collectively employed to refine our dataset, as outlined below:
* Number of words
* Character repetition ratio
* Word repetition ratio
* Special character ratio
* Stop word ratio
* Flagged word ratio
* Language identification confidence
* Perplexity score
* Document length (number of characters)
* Number of lines
* Short line length ratio
* Short line ratio
The last four metrics are suggested by the OSCAR dataset while the others are inherited from the BigScience ROOTS corpus's pipeline to process OSCAR data. For the perplexity score, following the BigScience ROOTS corpus, we train a SentencePiece tokenizer (Kudo, 2018) and 5-gram Kneser-Ney language models as provided in the KenLM library (Heafield, 2011) using the 20230501 dumps of Wikipedia. Documents displaying high perplexity scores based on these KenLM models are considered notably different from Wikipedia articles. This indicates a level of noise that will be excluded from our dataset (Wenzek et al., 2020). The tokenizer will also be used to obtain the number of words/tokens in the documents for our metrics. We publicly release our KenLM models in HuggingFace8 to faciliate future exploration.
Footnote 8: [https://huggingface.co/uonlp/Kenlm](https://huggingface.co/uonlp/Kenlm)
Repeated information (e.g., words, paragraphs) can appear in the web-curated data due to crawling errors and low-quality sources, causing detrimental consequences for training LLMs (Holtzman et al., 2019). The character and word repetition ratios are thus designed to avoid documents with excess
sively repeated information. High frequencies of special characters, stop words, or flagged words can indicate noisy and low-quality documents. We thus utilize the stop word and flagged word lists for different languages to compute their ratios for document removal. In addition to the stop word and flagged word lists provided by BigScience ROOTS for their 13 languages, we further collect dictionaries for these types of words for other languages. We prioritize the lists that have been shared on personal GitHub accounts for various languages, as these are often crafted by native speakers and exhibit higher quality. Moreover, lower language identification confidence might also suggest noisy language structures for the data. For each document in the dataset, we thus obtain a language identification confidence via the probability that FastText assigns to its corresponding language to aid data filtering. Finally, for the short line-based criteria, we implement a threshold of 100 characters to classify lines as short, as used by OSCAR. Documents with excessive occurrence of short lines will not be retained in our dataset.
**Threshold Selection**: Given the set of dataset metrics, an important question concerns the selection of appropriate thresholds for each metric and language to generate high-quality multilingual data. In the BigScience ROOTS project (Laurenco et al., 2022), this selection process is carried out by native speakers of 13 languages. The resulting thresholds are employed for the rest of their 46 languages. The project offers a visualization interface that indexes a sample of a few thousand documents per language, enabling users to monitor data statistics as they adjust thresholds for the metrics. However, this process cannot be easily extended to different languages due to the requirement of experienced native speakers, which incurs significant costs. Furthermore, the limited sample sizes hinder the representativeness of the chosen thresholds for the full datasets. In our analysis, we observe that some selected thresholds for certain languages within BigScience ROOTS almost fall outside the value ranges for the entire dataset, leading to the deactivation of the corresponding metrics.
To address these issues, we leverage a variant of the Interquartile Range (IQR) method (Dekking et al., 2007) to select appropriate thresholds for the filtering metrics for our dataset. For each metric and language, we generate a distribution of its possible values across the entire dataset for the language. There is an exception for languages with substantial amounts of data, such as Spanish and Russian, where only 25% of the data is used to calculate these distributions. Afterward, we compute the \(Q_{1}\)-th and \(Q_{3}\)-th percentiles of the distribution (\(Q_{1}<Q3\)) and use them for the thresholds for our filtering metrics. In particular, the lower \(Q_{1}\)-th percentile will be chosen for the metrics that favor high values (e.g., language identification confidence), while metrics favoring low values (e.g., perplexity scores and document length) will utilize the upper \(Q_{3}\)-th percentile. We investigate different values for \((Q_{1},Q_{3})\), considering \((25,75)\), \((20,80)\), \((15,85)\), \((10,90)\), and \((5,95)\). The selection of \(Q_{1}=10\) and \(Q_{2}=90\) has achieved the best data quality for a sample of languages in our examination.
It is worth emphasizing that the utilization of percentiles for threshold selection enables our approach to efficiently draw upon more extensive data samples for each language compared to those employed in the BigScience ROOTS project. This results in more reliable thresholds for the full datasets over different languages. Specifically, concerning the large languages where only a 25% data sample is employed to compute the value distribution for a metric, we observe that the proportion of discarded data to the entire dataset closely aligns with that of the data sample when applying the same selected filtering threshold. This underscores the representativeness of the thresholds selected through our methodology. Finally, once the thresholds for the metrics in a given language have been determined, we will eliminate any document that surpasses a metric's threshold and enters the unfavorable range of the data.
**Document Refinement**: The previous cleaning steps are done at the dataset level, aiming to remove low-quality documents from the dataset. In this step, we further clean the retained documents to improve the quality. It is important to note that our prior metric-based filtering step plays a vital role in eliminating highly noisy documents, which, in turn, streamlines the process of developing effective document cleaning rules during this step. Notably, since the documents from mC4 and OSCAR are extracted from HTML pages crawled from the Internet, a significant portion of them may carry crawling and extraction errors, including long JavaScript lines and extraneous content. Consequently, filtering out these documents greatly simplifies our task
of designing rules to clean the documents within our dataset.
As such, for each document, we eliminate its noisy or irrelevant portions via a series of operations. First, we remove any short lines located at the end of each document, as these lines typically contain footer details or unhelpful information from the websites. Second, we eliminate the lines that contain words from our list of JavaScript (JS) keywords (e.g., "<script") to avoid irrelevant and non-linguistic information. Here, we exclusively remove JS lines if the document contains just one line with JS keywords, and this particular line must also feature at least two different types of JS keywords. We adopt this approach as documents with more than two JS lines are likely coding tutorials in our data, which should be preserved to improve diversity. In addition, certain JS keywords are used in natural language, e.g., "var". By requiring at least two different types of JS keywords, we reduce the risk of inadvertently omitting helpful content and disrupting the document's structure.
### Data Deduplication
Despite thorough data cleaning, the remaining dataset might still contain a substantial amount of repeated data due to various reasons, including information being reposted on the web, multiple references to the same articles, boilerplate content, and plagiarism. The duplicated data can thus cause memorization and significantly hinder generalization for LLMs (Lee et al., 2022; Hernandez et al., 2022). Although expensive, data deduplication is thus considered as a crucial step to guarantee the highest quality of data for training LLMs. To this end, we undertake a comprehensive deduplication procedure for our dataset, utilizing MinHash (Broder, 1997) and URLs. This deduplication process is carried out independently for each language. Furthermore, we restrict deduplication to languages that retain over 100K documents following our data cleaning procedures (i.e., \(51.5\)% of our languages), aiming to promote smaller languages within our dataset.
**MinHash Deduplication**: For each language's dataset, we first apply the MinHashLSH method (Leskovec et al., 2020) to filter similar documents in the dataset. MinHashLSH is a near deduplication technique based on MinHash (Broder, 1997) with multiple hash functions for \(n\)-grams and the Jaccard similarity. Locality-Sensitive Hashing (LSH) is incorporated to improve efficiency by focusing on document pairs that are most likely similar. We leverage a variant of the Spark implementation of MinHashLSH in the text-dedup repo9, employing \(5\)-grams and a threshold of \(0.8\) to determine similar documents for the Jaccard similarity. Running MinHashLSH for each language's dataset, especially for languages with the largest data volumes like English, Russian, Spanish, and Chinese, represents the most computationally expensive operation in our dataset creation effort.
Footnote 9: [https://github.com/Chenghaofu/text-dedup/tree/main](https://github.com/Chenghaofu/text-dedup/tree/main)
**URL-based Deduplication**: Finally, we eliminate all documents that share identical URLs with other documents in the dataset. This step is necessary to address situations where various versions of the same articles are linked to identical URLs but have been updated or modified during the publication process, effectively bypassing the near deduplication step. Some URLs for the articles in CC might only display their general domains due to crawling errors. To enhance accuracy, we refrain from removing URLs that only include their general domains.
We utilize 600 AWS c5.24xlarge EC2 instances to preprocess and deduplicate our multilingual dataset. Each instance is equipped with 96 CPU cores, 192GB of memory, and 1TB of disk space. The disk space can be used to replace memory when necessary (e.g., for data deduplication).
## 3 Data Analysis and Experiments
After completing all the cleaning and deduplication steps, our ultimate dataset comprises 6.3 trillion tokens spanning 167 languages. Table 1 provides an overview of the number of documents and tokens for the top 42 languages in CulturaX following each processing stage. As can be seen, our data-cleaning pipeline can substantially reduce the number of documents in the original mC4 and OSCAR datasets for each language. The total number of removed documents accounts for 46.48% of our initial documents, suggesting the the effectiveness of our approaches to filter noisy information for multilingual datasets.
## 4 Related Work
Compared to other NLP tasks, language models can be trained with unlabeled data, enabling efficient data collection to produce gigantic scales for
\begin{table}
\begin{tabular}{l l r r r r r r r r} \hline \hline \multirow{3}{*}{**Code**} & \multirow{3}{*}{**Language**} & \multicolumn{6}{c}{**\#Documents (M)**} & \multicolumn{3}{c}{**\#Tokens**} \\ \cline{3-10} & & \multicolumn{1}{c}{**Initial**} & \multicolumn{1}{c}{**URL**} & \multicolumn{1}{c}{**Metric**} & \multicolumn{1}{c}{**MinHash**} & \multicolumn{1}{c}{**URL**} & \multicolumn{1}{c}{**Filtering**} & \multicolumn{1}{c}{**(B)**} & \multicolumn{1}{c}{**(\%)**} \\ \cline{3-10} & & & **Filtering** & \multicolumn{1}{c}{**Filtering**} & \multicolumn{1}{c}{**Dedup**} & \multicolumn{1}{c}{**Dedup**} & \multicolumn{1}{c}{**Rate (\%)**} & & \\ \hline en & English & 5783.24 & 5766.08 & 3586.85 & 3308.30 & 3241.07 & 43.96 & 2846.97 & 45.13 \\ ru & Russian & 1431.35 & 1429.05 & 922.34 & 845.64 & 799.31 & 44.16 & 737.20 & 11.69 \\ es & Spanish & 844.48 & 842.75 & 530.01 & 479.65 & 450.94 & 46.60 & 373.85 & 5.93 \\ de & German & 863.18 & 861.46 & 515.83 & 447.06 & 420.02 & 51.34 & 357.03 & 5.66 \\ fr & French & 711.64 & 709.48 & 439.69 & 387.37 & 363.75 & 48.89 & 319.33 & 5.06 \\ zh & Chinese & 444.37 & 444.03 & 258.35 & 222.37 & 218.62 & 50.80 & 227.06 & 3.60 \\ it & Italian & 406.87 & 406.04 & 254.72 & 226.42 & 211.31 & 48.06 & 165.45 & 2.62 \\ pt & Portuguese & 347.47 & 346.76 & 217.21 & 200.11 & 190.29 & 45.24 & 136.94 & 2.17 \\ pl & Polish & 270.12 & 269.73 & 170.86 & 151.71 & 142.17 & 47.37 & 117.27 & 1.86 \\ ja & Japanese & 247.67 & 247.19 & 137.88 & 114.64 & 111.19 & 55.11 & 107.87 & 1.71 \\ vi & Vietnamese & 182.88 & 182.72 & 118.67 & 108.77 & 102.41 & 44.00 & 98.45 & 1.56 \\ nl & Dutch & 238.92 & 238.56 & 148.19 & 125.51 & 117.39 & 50.87 & 80.03 & 1.27 \\ ar & Arabic & 132.88 & 132.65 & 84.84 & 77.65 & 74.03 & 44.29 & 69.35 & 1.10 \\ tr & Turkish & 183.65 & 183.47 & 109.94 & 99.18 & 94.21 & 48.70 & 64.29 & 1.02 \\ cs & Czech & 136.91 & 136.44 & 80.38 & 69.01 & 65.35 & 52.27 & 56.91 & 0.90 \\ fa & Persian & 118.55 & 118.50 & 70.26 & 62.42 & 59.53 & 49.78 & 45.95 & 0.73 \\ hu & Hungarian & 88.59 & 88.21 & 53.29 & 46.89 & 44.13 & 50.19 & 43.42 & 0.69 \\ el & Greek & 100.77 & 100.68 & 61.43 & 54.33 & 51.43 & 48.96 & 43.15 & 0.68 \\ ro & Romanian & 89.37 & 89.25 & 45.99 & 42.8 & 40.33 & 54.87 & 39.65 & 0.63 \\ sv & Swedish & 103.04 & 102.76 & 58.67 & 52.09 & 49.71 & 51.76 & 38.49 & 0.61 \\ uk & Ukrainian & 81.50 & 81.44 & 50.95 & 47.12 & 44.74 & 45.10 & 38.23 & 0.61 \\ fi & Finnish & 59.85 & 59.80 & 36.69 & 32.15 & 30.47 & 49.09 & 28.93 & 0.46 \\ ko & Korean & 46.09 & 45.85 & 25.19 & 21.17 & 20.56 & 55.39 & 24.77 & 0.39 \\ da & Danish & 53.16 & 52.99 & 28.67 & 26.48 & 25.43 & 52.16 & 22.92 & 0.36 \\ bg & Bulgarian & 47.01 & 46.90 & 28.09 & 25.45 & 24.13 & 48.67 & 22.92 & 0.36 \\ no & Norwegian & 40.07 & 40.01 & 20.69 & 19.49 & 18.91 & 52.81 & 18.43 & 0.29 \\ hi & Hindi & 35.59 & 35.50 & 22.01 & 20.77 & 19.67 & 44.73 & 16.79 & 0.27 \\ sk & Slovak & 40.13 & 39.95 & 22.20 & 19.56 & 18.58 & 53.70 & 16.44 & 0.26 \\ th & Thai & 49.04 & 48.96 & 26.20 & 21.93 & 20.96 & 57.26 & 15.72 & 0.25 \\ lt & Lithuanian & 27.08 & 27.01 & 15.87 & 14.25 & 13.34 & 50.74 & 14.25 & 0.23 \\ ca & Catalan & 31.13 & 31.12 & 18.99 & 16.46 & 15.53 & 50.11 & 12.53 & 0.20 \\ id & Indonesian & 48.08 & 48.05 & 25.79 & 23.74 & 23.25 & 51.64 & 12.06 & 0.19 \\ bn & Bangla & 20.90 & 20.85 & 13.82 & 13.22 & 12.44 & 40.48 & 9.57 & 0.15 \\ et & Estonian & 16.20 & 16.15 & 9.69 & 8.45 & 8.00 & 50.62 & 8.81 & 0.14 \\ sl & Slovenian & 15.46 & 15.39 & 8.00 & 7.60 & 7.34 & 52.52 & 8.01 & 0.13 \\ lv & Latvian & 14.14 & 14.09 & 8.37 & 7.48 & 7.14 & 49.50 & 7.85 & 0.12 \\ he & Hebrew & 10.78 & 10.77 & 5.90 & 4.77 & 4.65 & 56.86 & 4.94 & 0.08 \\ sr & Serbian & 7.80 & 7.75 & 4.80 & 4.25 & 4.05 & 48.08 & 4.62 & 0.07 \\ ta & Tamil & 8.77 & 8.75 & 5.27 & 4.94 & 4.73 & 46.07 & 4.38 & 0.07 \\ sq & Albanian & 9.40 & 9.38 & 5.96 & 5.04 & 5.21 & 44.57 & 3.65 & 0.06 \\ az & Azerbaijan & 9.66 & 9.65 & 5.73 & 5.24 & 5.08 & 47.41 & 3.51 & 0.06 \\ \hline
**Total (42 languages)** & **13397.79** & **13366.17** & **8254.28** & **7471.48** & **7181.40** & **46.40** & **6267.99** & **99.37** \\ \hline
**Total (167 languages)** & **13506.76** & **13474.94** & **8308.74** & **7521.23** & **7228.91** & **46.48** & **6308.42** & **100.00** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data statistics for 42 languages with the percentages of tokens greater than 0.05% in our dataset. Columns grouped with the “#Documents (M)” label indicate the number of
the training data. There are two primary types of data commonly used for training LLMs: curated data and web crawl data. Curated data typically consists of well-written and well-formatted text from targeted sources and domains, e.g., Wikipedia articles, books, newswire articles, and scientific papers, as used for the "The Pile" (Gao et al., 2020) and "BookCorpus" (Zhu et al., 2015) datasets. In contrast, web crawl data encompasses text gathered from a wide array of sources across the internet, varying significantly in terms of format and writing styles, e.g., blogs, social media posts, news articles, and advertisements. CommonCrawl (CC) is a widely-used web crawl repository that has collected petabytes of data over the Internet for 12 years. To this end, curated data is frequently considered to possess higher quality, which has resulted in its preference for training early LLMs, e.g., BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019). However, as the demand for larger models has grown, web crawl data has gained more attention as it contributes a substantial portion to the training data of recent LLMs, e.g., RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), T5 (Rafel et al., 2020), GPT-3 (Rae et al., 2021), LLaMa (Touvron et al., 2023), MPT (MosaicML, 2023), and Falcon (Almazrouei et al., 2023). As such, different extractions of CC has been produced to train such LLMs, including C4 (Raffel et al., 2020), CC-News (Nagel), and STORIES (Trinh and Le, 2018).
Regarding the accessibility of training data, datasets used to train early LLMs are often made available to the public (Devlin et al., 2019; Raffel et al., 2020). However, in the case of the most recent state-of-the-art (SOTA) generative LLMs, their training datasets are not released fully, potentially due to commercial interests. This applies not only to proprietary models like ChatGPT and GPT-4 but also to models that claim to be open-source models such as LLaMa, MPT, Falcon, and BLOOM (Scao et al., 2022). To address the transparency issue with existing LLMs, recent efforts have been made to replicate and release the training datasets for the state-of-the-art LLMs, i.e., RedPajama (Computer, 2023), SlimPajama, and AI2 Dolma. The key distinctions for these datasets concern their large-scale text data that has been meticulously cleaned and document-level deduplicated to ensure high quality for training LLMs. Nonetheless, a common drawback of these open-source datasets is that they remain predominantly focused on English data, offering limited data for other languages.
To obtain a multilingual large-scale dataset for training LLMs, it is more convenient to exploit web-scrape datasets such as CC to enable efficient data collection with up-to-date information in multiple languages. In addition, to ensure high quality for high-performing LLMs, it is necessary to extensively clean and deduplicate the multilingual data to avoid noisy and irrelevant content, e.g., low-quality machine-generated text and adult content (Trinh and Le, 2018; Kreutzer et al., 2022; Raffel et al., 2020). As such, a typical data processing pipeline to generate high-quality datasets can involve multiple steps, as demonstrated by FastText (Joulin et al., 2016), CC-Net (Wenzek et al., 2020), the BigScience ROOTS corpus for the BLOOM models (Laurencon et al., 2022; Scao et al., 2022), the RefinedWeb dataset for the Falcon model (Penedo et al., 2023; Almazrouei et al., 2023), and the dataset to train the LLaMa models (Touvron et al., 2023). The first step necessitates in such pipelines language identification to appropriately assign data to their corresponding languages (Joulin et al., 2016). The next steps features various dataset-specific rules and heuristics to filter undesirable content according to the ratios of special characters, short lines, bad words, among others (Grave et al., 2018; Laurencon et al., 2022). The data can also be filtered via lightweight models, e.g., via the KenLM language models (Heafield, 2011), to avoid noisy documents (Wenzek et al., 2020). Finally, data deduplication should be performed to remove similar or repeated information (Laurencon et al., 2022; Penedo et al., 2023). An important step in this regard involves fuzzy deduplication at document level, e.g., via MinHash (Broder, 1997), to eliminate similar documents, thus mitigating memorization and improving the generalization for resulting LLMs (Lee et al., 2022).
To this end, while there are multilingual open-source datasets with text data in multiple languages, such as mC4 (Xue et al., 2021), OSCAR (Ortiz Suarez et al., 2019), CC100 (Wenzek et al., 2020; Conneau et al., 2020), and the BigScience ROOT corpus (Laurencon et al., 2022), their quality and scale do not meet the requirements for effectively training LLMs, particularly generative models such as GPT. For example, as highlighted in the introduction, both mC4 and OSCAR lack fuzzy dedu
plication for the data at the document level. mC4 also suffers from its poorer language identification due to the use of cld3. BigScience ROOTS only provides a small sample data for 46 languages while CC100 does not have information beyond 2018. Our dataset CulturaX thus comprehensively addresses the issues for the existing datasets, offering a multilingual, open-source, and large-scale dataset with readily usable and high-quality data to train LLMs.
## 5 Conclusion
We present CulturaX, a novel multilingual dataset with text data for 167 languages. Our dataset is cleaned and deduplicated via a comprehensive pipeline, producing 6.3 trillion tokens. CulturaX is thus a large-scale and high-quality dataset, which can be readily used to train high-performing LLMs for multiple languages. Our data is openly accessible to the public to promote further research and applications of multilingual learning.
|
2309.10151 | A System-Level Energy-Efficient Digital Twin Framework for Runtime
Control of Batch Manufacturing Processes | The manufacturing sector has a substantial influence on worldwide energy
consumption. Therefore, improving manufacturing system energy efficiency is
becoming increasingly important as the world strives to move toward a more
resilient and sustainable energy paradigm. Batch processes are a major
contributor to energy consumption in manufacturing systems. In batch
manufacturing, a number of parts are grouped together before starting a batch
process. To improve the scheduling and control of batch manufacturing
processes, we propose a system-level energy-efficient Digital Twin framework
that considers Time-of-Use (TOU) energy pricing for runtime decision-making. As
part of this framework, we develop a model that combines batch manufacturing
process dynamics and TOU-based energy cost. We also provide an
optimization-based decision-making algorithm that makes batch scheduling
decisions during runtime. A simulated case study showcases the benefits of the
proposed framework. | Hongliang Li, Herschel C. Pangborn, Ilya Kovalenko | 2023-09-18T21:02:05Z | http://arxiv.org/abs/2309.10151v1 | # A System-Level Energy-Efficient Digital Twin Framework for
###### Abstract
The manufacturing sector has a substantial influence on worldwide energy consumption. Therefore, improving manufacturing system energy efficiency is becoming increasingly important as the world strives to move toward a more resilient and sustainable energy paradigm. Batch processes are a major contributor to energy consumption in manufacturing systems. In batch manufacturing, a number of parts are grouped together before starting a batch process. To improve the scheduling and control of batch manufacturing processes, we propose a system-level energy-efficient Digital Twin framework that considers Time-of-Use (TOU) energy pricing for runtime decision-making. As part of this framework, we develop a model that combines batch manufacturing process dynamics and TOU-based energy cost. We also provide an optimization-based decision-making algorithm that makes batch scheduling decisions during runtime. A simulated case study showcases the benefits of the proposed framework.
## I Introduction
The manufacturing industry currently accounts for approximately one-third of worldwide energy consumption [1]. To help improve the sustainability of this sector, a number of incentives have been provided for companies to reduce the energy consumption of their manufacturing systems. One important area of improvement for manufacturers is scheduling and control of their batch manufacturing processes [2, 3]. Batch manufacturing processes often require long processing time and are highly energy-intensive, as they involve large and high-power equipment, such as furnaces, reactors, and mixers [4]. In batch manufacturing processes, a specific quantity of products, known as a batch, is produced at one time on a batch-production machine (BPM). The problem of optimally scheduling the batch sequence is considerably complex and has been identified as an NP-hard problem [5]. Because of the energy-intensive nature of batch processes, some models and methods have been proposed that consider energy usage during batch scheduling. For example, several scheduling methods have included energy consumption as an objective in the optimization process [6, 7].
Demand-side energy management offers another promising strategy to improve the energy efficiency of batch manufacturing processes[8, 9]. This strategy focuses on shifting energy-intensive operations to times with a lower energy price to reduce total energy costs. This process is often incentivized by utilities through Demand Response (DR) programs [10]. One method of DR is Time-of-Use (TOU) energy pricing, in which the electricity price varies hourly to reflect consumer demand and the availability of renewable energy on the grid [11]. Previous works have shown that energy-efficient scheduling under TOU energy pricing can be accomplished by using model-based optimization [12, 13, 14]. However, these studies solve the batch schedule problems offline and treat the corresponding schedule as fixed once production has started. The offline scheduling does not leverage runtime information and cannot account for disturbances (e.g., change of TOU energy prices, deterioration of machines, or change of production goals), which may lead to undesired performance. The development and integration of effective runtime control strategies for batch manufacturing processes that incorporate TOU energy pricing is an ongoing research challenge.
An additional challenge is the lack of models that can accurately capture and predict the dynamics of the systems and leverage runtime data for model updates. Recently, Digital Twins (DTs) have been proposed to improve real-time decision-making and control of manufacturing systems, which provides a promising solution to this challenge [15, 16]. A DT is a purpose-driven, virtual representation of components, processes, assets, and systems that enable understanding, prediction, and optimization of performance [17]. One of the highlights of DTs is the integration of modeling, simulation, and other analytical tools, which enables comprehension and prediction of manufacturing systems with greater granularity, as compared to traditional modeling methods. Additionally, the trend of implementing DT is supported by advancements in the Internet of Things (IoT) and edge computing, which provide DTs with the capability to maintain runtime representations of manufacturing systems. Previous studies applied DTs for
Fig. 1: An overview of the proposed framework for runtime control of batch manufacturing processes.
specific use cases in the manufacturing industry, such as predictive maintenance [18] and job shop scheduling [19].
In this work, we propose a novel energy-aware runtime control strategy for batch manufacturing processes through a system-level energy-efficient DT (SLEE-DT) framework. A high-level overview of the SLEE-DT framework is shown in Figure 1. Specifically, the major contributions of this work are (1) a discrete event system-based approach to model the batch manufacturing process and TOU-based energy costs, (2) an optimization-based decision-making model to determine the batch schedule, and (3) a system-level energy-efficient DT framework for run-time energy management and batch schedule control. The proposed framework is showcased in a simulated case study.
The remainder of this paper is as follows. A problem statement for energy-efficient batch schedule of the batch manufacturing process is developed in Section II. Section III provides a detailed description of the SLEE-DT framework. Section IV presents a simulated case study to demonstrate how the proposed SLEE-DT framework can be used to improve the energy efficiency of an example batch manufacturing process. Section V provides conclusions and future research directions.
## II Problem Statement
This section defines the batch scheduling problem for a manufacturing system containing a BPM and a machine inventory. Consider a BPM with one manufacturing process that can process multiple parts, e.g., a coating machine. We define the capacity of the BPM as \(H\), i.e., the machine can process at most \(H\) parts at the same time. Based on this definition, let \(b\) denote the batch size, where \(\{b\in\mathbb{N}:b\leq H\}\). We assume that there is a negligible difference in the processing time of different batch sizes for the machine, but there is a significant difference in the energy consumption rate for each size. A higher energy consumption rate is associated with larger batch sizes. For example, in a coating machine, there may be a set of spray nozzles that can be turned on or off depending on the number of parts inside the machine. Every nozzle takes the same amount of time to complete the coating process, but more working nozzles require more energy usage. We assume deterministic customer demand, i.e., the manufacturer receives a customer order to produce a certain number of products within some deadline. The batch schedule is the sequence of batch sizes to be processed on the machine. Once a schedule is created, the unprocessed parts will be grouped into batches. We consider a runtime batch schedule problem, which requires the manufacturer to determine a batch schedule that satisfies customer requirements with the lowest TOU-based energy costs. Note that the TOU energy prices may change during runtime.
## III SLEE-DT Framework
An overview of the SLEE-DT framework is shown in Figure 2. The main components of the framework are the energy-aware scheduling model, system-level planning model, decision maker, and database.
### _Energy-Aware Scheduling Model_
The energy-aware scheduling model encodes the batch manufacturing process dynamics and TOU-based energy costs using a priced timed automaton (PTA) model [20, 21]. The energy-aware scheduling model, \(\mathcal{A}\), is defined as the tuple \(\mathcal{A}=(Q,\Sigma,q^{0},E,C,I,R,P,Q_{m})\), where:
* \(Q=\{q^{0},q^{1},\cdots,q^{n}\}\): set of states representing the total number of parts produced by the BPM.
* \(\Sigma=\{\sigma^{0},\sigma^{1},\cdots,\sigma^{H}\}\): set of events, representing a batch process of size \(b\).
* \(q^{0}\): initial state with \(0\) parts in process.
* \(E\subseteq Q\times\Sigma\times Q\): a finite set of transitions between states.
Fig. 2: Overview of the SLEE-DT framework.
* \(C\): a clock space that includes a local clock \(c^{l}\) and global clock \(c^{g}\).
* \(I:Q\rightarrow\mathcal{B}(val(C))\) a mapping of states to their time-based constraints as a Boolean function of the clock valuations.
* \(R:E\times C\to C\): a reset operator that resets the local clock \(c^{l}\) after each transition.
* \(P:E\rightarrow[0,\infty)\): a mapping of a transition to its associated energy costs.
* \(Q_{m}\): a set of marked states representing the amount of parts to be produced.
Note that a valuation operator, \(val(\cdot)\), denotes the value of a variable, e.g., a clock or a state.
#### Ii-B1 Discrete Event Dynamics
\(Q\), \(\Sigma\), \(q^{0}\), and \(E\) represent the discrete event dynamics of the batch manufacturing process. A batch schedule, \(s\), is defined as a sequence of events, i.e., a string over \(\mathcal{A}\). We define a transition function as:
\[\delta(q^{c},s)\to q^{f} \tag{1}\]
which maps the current state \(q^{c}\) and string \(s\) to a final state \(q^{f}\) in \(\mathcal{A}\). We define two types of transitions: self-transition, and discrete transition. \(q^{i}\xrightarrow{\sigma^{0}}q^{i}\) is a self-transition, as \(\sigma^{0}\) indicates that a new batch processes 0 part, i.e., the machine is idle, and the total number of parts, \(i\), has remained the same. \(q^{n}\xrightarrow{\sigma^{j}}q^{m}\) is a discrete transition that indicates a change in the number of produced parts - from \(m\) parts produced to \(n\) parts produced.
For example, given \(\mathcal{A}\) and a schedule \(s=\sigma^{2}\sigma^{3}\sigma^{0}\sigma^{2}\), consider the following transitions:
\[q^{0}\xrightarrow{\sigma^{2}}q^{2}\xrightarrow{\sigma^{3}}q^{5}\xrightarrow{ \sigma^{0}}q^{5}\xrightarrow{\sigma^{2}}q^{7}\]
In this example, we have the transition function \(\delta(q^{0},s)\to q^{7}\), which indicates that \(7\) parts have been produced by this batch schedule. Note that in this transition path, \(q^{5}\xrightarrow{\sigma^{0}}q^{5}\) is an example of a self-transition and \(q^{0}\xrightarrow{\sigma^{2}}q^{2}\) is an example of a discrete transition.
#### Ii-B2 Time-Based Constraints
The clock space of \(C\) contains a global clock and a local clock as \(C=c^{g}\times c^{l}\). Both the global clock and the local clock are continuous states of the PTA that grow at a fixed rate [21]. The local clock, \(c^{l}\), represents the time at a state and is set to \(0\) after every transition by the reset operator, \(R\). The time spent in a state from a self-transition is defined as set-up time. The time spent in a state from a discrete transition is defined as processing time. The global clock, \(c^{g}\), continuously increases in the model and is never reset. Global clock valuation of a state \(q\) can be calculated based on the local clock valuations:
\[val(c^{g}_{q})=val(c^{g}_{q^{\prime}})+val(c^{l}_{q}) \tag{2}\]
where \(q^{\prime}\) is the previous state.
Time-based constraints include local clock constraints and global clock constraints. Local clock constraints, \(\mathcal{B}(val(c^{l}))\), limit the time that the system can be in a state (e.g., due to set-up time and processing time). Global clock constraints, \(\mathcal{B}(val(c^{g}))\), encode the time-based demand requirements (e.g., "order should be filled before a deadline"). We use the mapping \(I\) to store the time constraints of each state. For example, the Boolean function \(c^{g}_{q^{\prime}}\leq 10\) evaluates to _true_ if the global clock at the state \(q^{4}\) is less than or equal to \(10\) time units. This is equivalent to a customer requirement that \(4\) parts are needed before 10 time units.
#### Ii-B3 Demand Quantity-Based Constraints
We assume that any products that remain after completing an order are stored in the machine's inventory. If more parts are produced than required by an order, then the demand will still be met. Therefore, any state that has more than the required amount of parts produced is contained in the set of marked states denoted as \(Q_{m}\). Let \(d\in\mathbb{N}\) and \(r\in\mathbb{N}\) denote the required demand quantity and the maximum capacity of the machine inventory, respectively. Let \(v\) denote the allocated capacity level of machine inventory and we have \(\{v\in\mathbb{N}:v\leq r\}\). Then the marked states can be defined as \(Q_{m}\subseteq Q=\{q^{d},q^{d+1},\ldots,q^{d+v}\}\).
Let \(\Sigma^{*}\) denote all possible strings over \(\Sigma\) including the empty string \(\varepsilon\). The language (set of strings) generated by \(\mathcal{A}\) is denoted as:
\[\mathcal{L}(\mathcal{A})=\{s\in\Sigma^{*}:\delta(q^{0},s)\in Q\} \tag{3}\]
\(\mathcal{L}(\mathcal{A})\) represents all possible batch schedules of the machine. The language marked by \(\mathcal{A}\) is:
\[\mathcal{L}_{m}(\mathcal{A})=\{s\in\mathcal{L}(\mathcal{A}):\delta(q^{0},s)\in Q _{m}\} \tag{4}\]
\(\mathcal{L}_{m}(\mathcal{A})\) represents all schedules that meet the demand quantity requirement. If a string representing a batch schedule, \(s_{g}\), is part of the marked language \(s_{g}\in\mathcal{L}_{m}\), then that batch schedule produces the number of parts required by the order.
#### Ii-B4 TOU-Based Energy Cost
The energy cost of a batch schedule (string) is determined by adding up the energy cost of each individual batch (event), taking into account both the discrete event dynamics and the time series data. The clock space of the PTA ensures the connection between these two aspects. To calculate the TOU-based energy cost during runtime, the energy-aware scheduling model, \(\mathcal{A}\), needs to be synchronized to the time of the power grid.
We use \(T\) to denote the time of the power grid. We denote \(|s|\) as the length of string \(s\), which is the number of events in \(s\). Let \(s_{j}\in\Sigma\) denote the \(j\)th event in \(s\). For a given \(\mathcal{A}\) and a string \(s_{[0,j]}\), a transition path is determined by \(s_{[0,j]}\) based on \(\mathcal{A}\), resulting in the final state of \(q^{f}=\delta(q^{0},s_{[0,j]})\). Let the starting time of a batch schedule \(s_{[0,j]}\) be the power grid time \(T_{0}\). We derive the starting time \(T_{s}\) and end time \(T_{e}\) of the transition with \(i\)th event \(s_{i}\) in string \(s_{[0,j]}\). We call \(s_{[0,i]}\) the prefix string of \(s_{[0,j]}\). Similarly, we call \(s_{[i,j]}\) the suffix string of \(s_{[0,j]}\). Prefix string \(s_{[0,i]}\) leads to the transitions of \(\mathcal{A}\) and ends at state \(q^{e}\), which can be calculated using transition function as:
\[q^{e}=\delta(q^{0},s_{[0,i]}) \tag{5}\]
Similarly, prefix string \(s_{[0,i-1]}\) leads to the state just before the state \(q^{e}\) and can be calculated as:
\[q^{e-1}=\delta(q^{0},s_{[0,i-1]}) \tag{6}\]
The \(i\)th event in \(s_{[0,j]}\), i.e., \(s_{i}\) enables the transition:
\[q^{e-1}\xrightarrow{s_{i}}q^{e}\]
\(T_{e}\), the end time of this transition, can be calculated as:
\[T_{e}=T_{0}+val(c_{q^{e}}^{g}) \tag{7}\]
\(T_{s}\), the starting time of this transition, can be calculated as:
\[T_{s}=\left\{\begin{aligned} & T_{0},&\text{if }e=0\\ & T_{s}=T_{0}+val(c_{q^{e-1}}^{g}),&\text{if }e \geq 1\end{aligned}\right. \tag{8}\]
Let \(pr(s_{i})\in\mathbb{R}_{\geq 0}\) denote the power of event \(s_{i}\), i.e., the energy consumption rate of the batch size. TOU energy price for the transition period can be obtained from the power grid, which is a function of the power grid time \(T\) as \(f(T)\). Then the energy cost \(P\) for the transition \(E(s_{i}):q^{e-1}\xrightarrow{s_{i}}q^{e}\) can be calculated as:
\[P_{E(s_{i})}=\int_{T_{s}}^{T_{e}}pr(s_{i})f(T)dT \tag{9}\]
The total energy cost of \(s_{[0,j]}\) is:
\[TP_{s_{[0,j]}}=\sum_{k=0}^{j}P_{E(s_{k})} \tag{10}\]
### _System-Level Planning Model_
The system-level planning model is a data-driven model responsible for initializing and updating the energy-aware scheduling model by analyzing both historical data and runtime data. During the operation of the batch manufacturing process based on the initial batch schedule, the system-level planning model continuously monitors the performance of the system. If a change in the system occurs, such as a variation in the TOU energy price, the system-level planning model promptly updates the energy-aware scheduling model. This ensures that the batch manufacturing process is accurately represented during runtime.
### _Decision Maker_
The decision maker solves an optimization problem to determine the optimal batch schedule during runtime. We leverage a limited look-ahead policy (LLP) based online control strategy to realize the runtime decision-making. The main idea of the LLP is illustrated in Figure 3 with a \(3\)-step look-ahead window. Instead of optimizing the path that minimizes the global cost, the LLP evaluates the local cost within the limited look-ahead window. The path with the minimum cost is identified by solving an optimization problem. Only the first event of the path is applied as the control action. Then the window slides to the next state after the control action occurs. Such a process is repeated until exhausting the set of marked states. The LLP-based control strategy is a Receding Horizon Control or Model Predictive Control with a limited control horizon. The details of the LLP can be found in [22]. We first define the open-loop optimal control problem for the energy-aware scheduling model. Then we define the receding horizon optimization problem for the LLP at each limited look-ahead window.
#### Iii-B1 Optimal Control Problem for Energy-Aware Scheduling Model
One of the control objectives for the batch manufacturing process modeled by PTA is to find a batch sequence with the minimum TOU-based energy cost. We employ the average energy cost as a component of the cost function, which tends to push the BPM to run at a larger batch size. This is beneficial to maintain high machine utilization. In Section III-A, we allow the batch schedule to produce a number of parts that is greater than the demand. The remaining parts after satisfying the customer demand are stored in the machine inventory. We penalize the used capacity level of the inventory in the cost function. Hence, the cost function of a batch schedule \(J(s)\) is defined as a two-part cost:
\[J(s)=\underbrace{TP_{s}/d}_{\text{average energy cost}}+\underbrace{val( \delta(q^{0},s))-d}_{\text{memory cost}} \tag{11}\]
where \(TP_{s}\) can be calculated based on Equation (10), \(d\) is the demand quantity.
The time-based and demand quantity-based constraints are described in Section III-A. The optimal control problem is formed as:
\[\operatorname*{argmin}_{s} J(s) \tag{12a}\] \[\operatorname*{s.t.} s\in\mathcal{L}_{m}(\mathcal{A})\] (12b) \[val(C)\models I \tag{12c}\]
where \(\mathcal{L}_{m}(\mathcal{A})\) is the set of accepted paths on \(\mathcal{A}\). Equation (12b) imposes demand quantity-based constraints while Equation (12c) imposes time-based constraints, where \(\models\) indicates that the condition on the left side satisfies the condition on the right side.
#### Iii-B2 Limited Look-Ahead Control Strategy
The first step of formulating the LLP is constructing a limited-step exploration of \(\mathcal{A}\). Let \(W\) denote the length of the look-ahead window. Let the \(\Sigma^{\leq W}\) denote all the strings with a length less than or equal to \(W\) as:
\[\Sigma^{\leq W}=\{s\in\Sigma^{*}:|s|\leq W\} \tag{13}\]
The strings defined in \(\mathcal{A}\) and started from the current state \(q^{c}\in Q\) is a set or a sublanguage of \(\mathcal{A}\):
\[\mathcal{L}_{sub}(\mathcal{A},q^{c})=\{s\in\Sigma^{*}:\delta(q^{c},s)\in Q\} \tag{14}\]
Fig. 3: LLP with a 3-step look-ahead window.
The set of strings starting from the \(q^{c}\) with length less than or equal to \(W\) is defined as a \(W\)-step look-ahead tree \(Tree(q^{c},W)\):
\[Tree(q^{c},W)=\mathcal{L}_{sub}(\mathcal{A},q^{c})\cap\Sigma^{\leq W} \tag{15}\]
\(Tree(q^{c},W)\) includes all candidate strings (batch schedules) from the current state \(q^{c}\) with \(W\)-step look-ahead. The LLP performs optimizations within the limited look-ahead window to find the path with minimum cost. When the terminal state of a string in \(Tree(q^{c},W)\) reaches \(Q_{m}\), the inventory cost is included. The cost function \(J(s)^{\prime}\) within a \(W\)-step look-ahead window is formed as:
\[J(s)^{\prime}=\left(\frac{TP_{s}}{(1-\xi)val(\delta(q^{c},s))+\xi d}\right)+ \xi(val(\delta(q^{c},s))-d) \tag{16}\]
where \(\xi\) is the indicator function:
\[\xi=\left\{\begin{aligned} & 1,&\text{if }\delta(q^{c},s)\in Q_{m}\\ & 0,&\text{else}\end{aligned}\right. \tag{17}\]
Then the optimization problem for \(Tree(q^{c},W)\) is:
\[\operatorname*{argmin}_{s} J(s)^{\prime} \tag{18a}\] \[\operatorname*{s.t.} s\in Tree(q^{c},W)\] (18b) \[val(C)\models I \tag{18c}\]
Look-ahead exploration continues until exhaustively exploring \(Q_{m}\). The LLP-based online scheduling strategy may fail to generate a valid schedule, which is defined as a rescheduling failure. When a rescheduling failure happens, the decision maker will negotiate with the manufacturer to update the production requirements or generate the schedule based on expert knowledge.
### _Database_
The database functions as a storage facility for both historical data and runtime data. In terms of historical data, it stores previous customer order information, manufacturing process data associated with those orders, and a record of historical TOU energy prices. The database also incorporates runtime data, which includes runtime information about the batch manufacturing process. For example, data about processes, materials, and machines can be acquired from the Manufacturing Execution System (MES). By housing this data, the database offers essential support to the system-level planning model enabling the initiation and continuous updating of the energy-aware scheduling model.
## IV Case Study
### _Case Study Setup_
Consider a manufacturing system with a BPM and a machine inventory. The capacity of the BPM and the inventory are \(H=2\) parts and \(r=3\) parts, respectively. A customer order is received that requires \(2\) parts to be produced in the next \(1\) hour and \(5\) more parts to be produced in the following \(4\) hours, i.e., a total of \(7\) parts in \(5\) hours. The batch processing time is \(1\) hour and the set-up time is \(0.2\) hours. The power consumption of the machine is dependent on the batch size. For batch sizes 0, 1, and 2, these are \(0.5\) megawatts (MW), \(0.8\) MW, and \(1\) MW, respectively. TOU energy prices are acquired from the U.S. national grid website [23]. Two batch scheduling strategies are compared: the proposed method and a benchmark batch schedule based on maximizing BPM utilization without energy considerations.
### _Application of the SLEE-DT Framework_
Once the SLEE-DT receives the order, the system-level planning model starts to analyze the system based on customer requirements and historical data. Then, the system-level planning model initiates the energy-aware scheduling model, \(\mathcal{A}_{case}\), with \(\Sigma=\{\sigma^{0},\sigma^{1},\sigma^{2}\}\) and states \(Q=\{q^{0},q^{1},...,q^{8}\}\). The system is allowed to only use the \(v=1\) part of the machine inventory, i.e., \(Q_{m}=\{q^{7},q^{8}\}\). The production starts at \(8\) am and the limited look-ahead window is \(2\). The time-based constraints are also generated by analyzing the order requirements. \(\mathcal{A}_{case}\) is then created as shown in Figure 4 (a) and (b).
The decision maker generates the batch schedule during runtime. As shown in Figure 4(c), the green node is the starting state of each look-ahead window. \(J(s)^{\prime}\) is the cost of each path. The cost of a path with two successive \(0\) batch sizes is undefined, so the path is invalid. The optimization problem shown in Equation (18) is solved at each look-ahead window by exhaustive search. The first event of the path with minimum cost, the red event as shown in Figure 4(c), is set as the next batch size. Note that state \(q^{2}\) is encoded with
Fig. 4: SLEE-DT framework for runtime control of the batch manufacturing process: (a) Energy-aware scheduling model graph used for the case study; (b) Time-based constraints; (c) LLP-based decision maker.
the global clock constraint of \(val(c_{q^{2}}^{g})\leq 1\). Therefore the transition \(q^{0}\to q^{2}\) is the only valid transition. The SLEE-DT-based schedule can be represented as \(\sigma^{2}\rightarrow\sigma^{2}\rightarrow\sigma^{1}\rightarrow\sigma^{2}\), which is shown in Figure 5. A batch schedule based on the benchmark strategy is also shown in Figure 5. The energy cost of the SLEE-DT runtime schedule is 2.55% lower than the benchmark strategy, as this shifts the energy-intensive operation of a batch size of \(2\) to time when the TOU energy price is lower.
## V Conclusion
In this work, we propose a system-level energy-efficient Digital Twin (SLEE-DT) framework that can be used to improve the runtime control of batch manufacturing processes. As part of the framework, a ptc model is used to capture both the batch process dynamics and the energy costs of the manufacturing system. An optimization-based decision maker is then proposed to enable runtime scheduling and control of batch manufacturing processes with consideration for time-of-use (TOU) energy prices. The SLEE-DT framework reduces energy costs and improves sustainability by reallocating energy-intensive batch production to times with lower energy prices. Future work will focus on extending the proposed framework to a larger system with multiple machines and inventories. We will also look to implement the proposed framework in a physical manufacturing system.
|
2309.06900 | Fast Exact Algorithm for Neutrino Oscillation in Constant Matter Density | A recently published method for solving the neutrino evolution equation with
constant matter density is further refined and used to lay out an exact
algorithm for computing oscillation probabilities, which is moderately faster
than previous methods when looping through neutrinos of different energies. In
particular, the three examples of $\overset{\scriptscriptstyle{(-)}}{\nu}_e$
survival, $\overset{\scriptscriptstyle{(-)}}{\nu}_\mu$ survival and
$\overset{\scriptscriptstyle{(-)}}{\nu}_e$ appearance probabilities are written
in terms of mixing angles, mass differences and matter electron density. A
program based on this new method is found to be roughly twice as fast as, and
in agreement with, the leading GLoBES package. Furthermore, the behaviour of
all relevant effective parameters is sketched out in terms of a range of
neutrino energies, or matter electron densities. For instance, the
$\overset{\scriptscriptstyle{(-)}}{\nu}_e$ survival probability in constant
matter density is found to have no dependence on the mixing angle $\theta_{23}$
or the CP-violating phase $\delta_{13}$. | James Page | 2023-09-13T11:58:22Z | http://arxiv.org/abs/2309.06900v5 | # Fast Exact Algorithm for Neutrino Oscillation in Constant Matter Density
###### Abstract
A recently published method [1; 2] for solving the neutrino evolution equation with constant matter density is further refined and used to lay out an exact algorithm for computing oscillation probabilities, which is moderately faster than previous methods when looping through neutrinos of different energies. In particular, the three examples of \(\overbrace{\nu}_{e}\) survival, \(\overbrace{\nu}_{\mu}\) survival and \(\overbrace{\nu}_{e}\) appearance probabilities are written in terms of mixing angles, mass differences and matter electron density. A program based on this new method is found to be roughly twice as fast as, and in agreement with, the leading GLoBES package. Furthermore, the behaviour of all relevant effective parameters is sketched out in terms of a range of neutrino energies, or matter electron densities. For instance, the \(\overbrace{\nu}_{e}\) survival probability in constant matter density is found to have no dependence on the mixing angle \(\theta_{23}\) or the CP-violating phase \(\delta_{13}\).
## I Introduction
### The problem
Neutrinos are produced in charged current (CC) interactions in pure flavour states \(|\nu_{\alpha}\rangle,\ \alpha\in\{e,\mu,\tau\}\), which are composed of a superposition of the mass states \(|\nu_{k}\rangle,\ k\in\{1,2,3\}\)
\[|\nu_{\alpha}\rangle=\sum_{k}U_{\alpha k}^{*}|\nu_{k}\rangle, \tag{1}\]
where it is assumed the mass differences have a negligible impact on kinematics - not favouring some mass states' formation over others'. \(U_{\alpha k}\) is the unitary PMNS matrix, and the normalisation conditions are \(\langle\nu_{k}|\nu_{j}\rangle=\delta_{kj}\), so that \(\langle\nu_{\alpha}|\nu_{\beta}\rangle=\delta_{\alpha\beta}\). In a vacuum, it is the mass states that are eigenstates of the free Hamiltonian (\(\hat{H}_{0}\)), and so whose evolution can be computed
\[\partial^{\mu}|\nu_{k}(x)\rangle=\hat{P}_{0}^{\mu}|\nu_{k}(x)\rangle=p_{k}^{ \mu}|\nu_{k}(x)\rangle, \tag{2}\]
where \(\hat{P}_{0}^{\mu}\) is the (free) spacetime translation operator (\(\hat{P}_{0}^{0}=\hat{H}_{0}\)). Assuming plane-wave solutions, this is solved with
\[|\nu_{k}(x)\rangle=e^{-ip_{k}\cdot x}|\nu_{k}\rangle, \tag{3}\]
where \(|\nu_{k}\rangle\equiv|\nu_{k}(0)\rangle\). Using the ultra-relativistic approximation \(p_{k}\cdot x\approx\frac{m_{k}^{2}}{2E}L\), one can thus arrive at the familiar vacuum transition/survival probability
\[P_{\nu_{\alpha}\rightarrow\nu_{\beta}}(L,E)=\sum_{k,j}U_{\alpha k}^{*}U_{ \beta k}U_{\alpha j}U_{\beta j}^{*}\text{exp}\left(-i\frac{\Delta m_{kj}^{2}L }{2E}\right). \tag{4}\]
When taking into account matter effects, the process is not so straightforward. Recall that previously, only the free Hamiltonian was used for spacetime translations \(\hat{P}_{0}^{\mu}\) (eqn 2). Strictly speaking, one should use the full Hamiltonian, which contains the CC interaction terms
\[\begin{split}\partial^{\mu}|\nu_{\alpha}(x)\rangle& =\hat{P}^{\mu}|\nu_{\alpha}(x)\rangle\\ &=\left(\hat{P}_{0}^{\mu}+\hat{P}_{I}^{\mu}\right)|\nu_{\alpha}( x)\rangle\\ &\approx\left(\hat{P}_{0}^{\mu}+\left\langle\hat{H}_{I}^{\text{ eff}}\right\rangle\right)|\nu_{\alpha}(x)\rangle,\end{split} \tag{5}\]
where \(\left\langle\hat{H}_{I}^{\text{eff}}\right\rangle\) is the average effective interaction Hamiltonian, caused by coherent forward elastic scattering in matter. Incoherent scattering in matter is exceedingly unlikely (\(\sigma\sim G_{F}^{2}s\)), so one need only consider coherent elastic scattering [3]. These are described by the effective Fermi theory terms
\[\begin{split}\hat{H}_{CC}^{\text{eff}}& =\frac{G_{F}}{\sqrt{2}}\left[\overline{\nu}_{e}\gamma^{\mu}\left(1- \gamma^{5}\right)e\right]\left[\overline{e}\gamma_{\mu}\left(1-\gamma^{5} \right)\nu_{e}\right],\\ \hat{H}_{NC}^{\text{eff}}&=\frac{G_{F}}{\sqrt{2}} \sum_{\alpha=e,\mu,\tau}\left[\overline{\nu}_{\alpha}\gamma^{\mu}\left(1- \gamma^{5}\right)\nu_{\alpha}\right]\\ &\qquad\times\sum_{\psi=e,p,n}\left[\overline{\psi}\gamma_{\mu} \left(g_{V}^{\psi}-g_{A}^{\psi}\gamma^{5}\right)\psi\right],\end{split} \tag{6}\]
where \(G_{F}\) is Fermi's constant, \(g_{V}^{\psi}\) and \(g_{A}^{\psi}\) are the vector and axial components of the coupling constants for the associated fermion \(\psi\), and all the other symbols have their usual meaning. Averaging over the particles in the matter medium, one can show that [3]
\[\begin{split}&\left\langle\hat{H}_{I}^{\text{eff}}\right|\nu_{\alpha} \rangle=V_{\alpha}|\nu_{\alpha}\rangle,\\ & V_{\alpha}=\sqrt{2}G_{F}\left(\pm N_{e}\delta_{ae}-\frac{1}{2}N _{n}\right),\end{split} \tag{7}\]
where \(N_{e}\) and \(N_{n}\) are the electron and neutron densities in matter respectively. The \(\pm\) is \(+\) for neutrinos and \(-\) for antineutrinos [3]. Now, notice that the interaction Hamiltonian acts on the flavour eigenstates, while the free spacetime translation operator acts on the mass eigenstates, creating a complicated differential equation. To deal with this, the transition amplitude \(\psi_{\alpha\beta}(x)=\langle\nu_{\beta}|\nu_{\alpha}(x)\rangle\) is used, so that each part of
the translation operator may act on its corresponding eigenstates. The neutron density term can be factored out since it affects all flavours equally, along with other common factors, and one arrives at the ODE [3]
\[i\frac{d}{dx}\mathbf{\Psi}_{\alpha}=H_{F}\mathbf{\Psi}_{\alpha}, \tag{8}\]
where \(x=t\) is the space coordinate in the direction of propagation, and
\[\mathbf{\Psi}_{\alpha}=\begin{pmatrix}\psi_{\alpha e}(x)\\ \psi_{\alpha\mu}(x)\\ \psi_{\alpha\tau}(x)\end{pmatrix},\ \ H_{F}=\frac{1}{2E}\left(U\mathbb{M}^{2}U^{ \dagger}+\mathbb{A}\right), \tag{9}\]
\[\mathbb{M}^{2}=\begin{pmatrix}0&0&0\\ 0&\Delta m_{21}^{2}&0\\ 0&0&\Delta m_{31}^{2}\end{pmatrix},\ \ \mathbb{A}=\begin{pmatrix}A_{CC}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}, \tag{10}\]
\[A_{CC}=\pm 2\sqrt{2}EG_{F}N_{e}. \tag{11}\]
Notice again that it is the mass differences that come into play, and the difference with the vacuum ODE (eqn 2) is isolated to the \(\mathbb{A}\) matrix, which vanishes for \(N_{e}=0\).
### Extant Solutions and Approximations
There are many ways to approach this problem, which have already been written about extensively. A sample is shown here for context. First, the mass hierarchy can be exploited to separate out the contributions of each mass difference, depending on one's setup. This can "freeze out" one of the three transition amplitudes, and if one assumes a constant matter density, can be reduced to effective two-neutrino mixing again. These expressions are very helpful to gain a qualitative understanding of the processes, such as resonances for specific values of \(A_{CC}\) that lead to a strong MSW effect. See [3] or [4] for more details.
However, if the background matter density varies, or one mass difference does not totally dominate, one must keep track of all the transition amplitudes. [4] and [5] use diagonalisation to compare with the vacuum case and determine the effective mass difference and mixing angles in matter. This is the standard approach, including numerical techniques: numerically diagonalise at each iteration of the evolution equation [6; 7]. Meanwhile, [8] uses Lagrange's formula to determine the evolution operator
\[U(L)=\sum_{n}\left[\prod_{m\neq n}\frac{2EH_{F}-\mathcal{E}_{m}\mathbf{1}}{ \mathcal{E}_{n}-\mathcal{E}_{m}}\right]\text{exp}\left(-i\frac{\mathcal{E}_{n }^{2}}{2E}\right), \tag{12}\]
where \(\mathcal{E}_{n}\) are the three eigenvalues of \(H_{F}\), with rather involved expressions provided for constant matter density. The expression is written here explicitly due its similarity with what will be shown later on. For its part, [9] uses the Cayley-Hamilton theorem to decompose the evolution operator into a linear combination of second order polynomials of mixing matrices, with analytic expressions for these matrices as well as their coefficients. Recently, [10] used the eigenvector-eigenvalue identity to derive relatively simple formulae for the effective mixing angles and CP violating phase, along with perturbative approximations of these and mass differences, which can be used in the vacuum expressions as normal. A summary of many of these exact and approximate techniques can be found in [11], along with very useful accuracy and speed comparisons in the context of long baseline \(\overset{\cdot}{\nu}_{e}\) appearance experiments. [12] provides an elegant generalisation to (\(3+N\)) neutrino flavours, and a generic matter potential that can include non-standard interactions (NSI). The effects of the sterile neutrinos and NSI are studied both together and independently. These last two papers also make their codes available via public GitHub repositories, referenced therein.
Lastly, two recent papers [1; 2] compute a general form for the evolution operator (assuming constant matter density) in terms of a Gell-Mann basis and structure constants, using methods from [9]. The first paper then derives perturbative expansions for particular electron density profiles in the Earth, while the second formulates a general method to compute the oscillation probability of any general time independent Hamiltonian for two or three active neutrino flavours. The initial approach of these will be followed in the next section, with some small tweaks, before being taken in another more specific direction than the original papers. These deviations will be highlighted throughout the derivation, and some extra details added for clarity. An efficient algorithm will then be constructed, which splits the computation up into two parts, so that applications which compute oscillation probabilities for a great many neutrinos need only perform the first part once as an initialisation, saving time. Some specific examples are the provided and compared to existing numerical computations by the GLoBES package and others. The calculation and behaviour of effective parameters will then briefly be covered.
## II Solving the differential equation
### The Evolution Operator
This subsection largely follows Bushra Shafaq and Faisal Akram's method in [1]. First, the traceless effective Hamiltonian \(H\) is defined, since the trace acts on all flavours equally and so does not contribute to mixing
\[\begin{split}& H\equiv H_{F}-\frac{1}{3}\text{tr}\left[H_{F} \right]\mathbf{1},\\ &\text{tr}\left[H_{F}\right]=\left(\Delta m_{21}^{2}+\Delta m_{31 }^{2}+A_{CC}\right).\end{split} \tag{13}\]
The evolution equation is thus
\[i\frac{d}{dx}\mathbf{\Psi}_{\alpha}=\frac{1}{2E}H\mathbf{\Psi}_{\alpha}, \tag{14}\]
where contrary to Shafaq and Akram's paper, \(E\) is kept separate from \(H\). Assuming constant matter density, this is solved
with
\[\boldsymbol{\Psi}_{\alpha}(x)=U(x)\boldsymbol{\Psi}_{\alpha}(0),\ \ U(x)=\text{exp} \left(-iH\frac{x}{2E}\right), \tag{15}\]
where \(U(x)\) is the evolution operator. Now, these can be decomposed using the property that the Gell-Mann matrices (\(\lambda^{i},\ i\in\{1,...,8\}\)) and the identity matrix form a complete orthogonal basis for \(3\times 3\) complex matrices. \(H\) is traceless, so it does not need the identity matrix
\[\begin{split}& H=h^{i}\lambda^{i},\ h^{i}=\frac{1}{2}\text{tr} \left[H\lambda^{i}\right],\\ & U(x)=u_{0}\boldsymbol{1}+iu_{i}\lambda^{i},\ u_{0}=\frac{1}{3} \text{tr}\left[U(x)\right],\ u_{i}=\frac{1}{2i}\text{tr}\left[U(t)\lambda^{i} \right],\end{split} \tag{16}\]
where from now on repeated dummy indices imply summation. These equations are derived from the Gell-Mann matrix identities \(\text{tr}\left[\lambda^{i}\lambda^{j}\right]=2\delta_{ij}\) and \(\text{tr}\left[\lambda^{i}\right]=0\).
Now, some useful general results in linear algebra will be used: for a general matrix \(A\), with eigenvalues \(\mathcal{E}[A]_{n}\),
\[\begin{split}&\text{det}\left(A\right)=\prod_{n}\mathcal{E}[A]_{n},\ \text{tr}\left[A\right]=\sum_{n}\mathcal{E}[A]_{n},\\ &\text{and if}\ B=f(A)\ \text{and}\ f\ \text{is a holomorphic function},\\ &\mathcal{E}[B]_{n}=f(\mathcal{E}[A]_{n}).\end{split} \tag{17}\]
Therefore, recalling \(U(x)=\text{exp}\left(-iHx\right)\) and defining \(\mathcal{E}[H]_{n}=\mathcal{E}_{n}\),
\[u_{0}=\frac{1}{3}\sum_{n=0}^{2}\text{exp}\left(-i\frac{\mathcal{E}_{n}x}{2E} \right). \tag{18}\]
For \(u_{i}\), first note that from \(H=h^{i}\lambda^{i}\),
\[\frac{\partial U(t)}{\partial h^{i}}=-\frac{it}{2E}\lambda^{i}U(x), \tag{19}\]
and so using the previous identities one can easily show
\[u_{i}=\frac{-i}{2}\sum_{n=0}^{2}\frac{\partial\mathcal{E}_{n}}{\partial h^{i} }\text{exp}\left(-i\frac{\mathcal{E}_{n}x}{2E}\right). \tag{20}\]
All that is needed now are expressions for the eigenvalues \(\mathcal{E}_{n}\) of \(H\). The parametric equation of a \(3\times 3\) matrix A with eigenvalues \(\lambda\) is
\[\begin{split}\text{det}\left(A-\lambda\boldsymbol{1}\right)=& -\lambda^{3}+\text{tr}(A)\lambda^{2}-\frac{1}{2}\left(\text{tr}(A)^{2}- \text{tr}\left(A^{2}\right)\right)\lambda\\ &+\text{det}(A)\\ =& 0,\end{split} \tag{21}\]
so that for the traceless \(H\),
\[\begin{split}&\mathcal{E}_{n}^{3}-3a_{1}\mathcal{E}_{n}-2a_{0}=0, \\ & a_{1}=\frac{1}{6}\text{tr}[H^{2}]=\frac{1}{3}h^{i}h^{i},\\ & a_{0}=\frac{1}{2}\text{det}\left(H\right)=\frac{1}{3}d^{ijk}h^{ i}h^{j}h^{k},\end{split} \tag{22}\]
where \(d^{ijk}\) are the symmetric structure constants of the Gell-Mann matrices
\[\begin{split}&\{\lambda^{i},\lambda^{j}\}=\frac{4}{3}\delta_{ij} \boldsymbol{1}+2d^{ijk}\lambda^{k},\\ & d^{ijk}=\frac{1}{4}\text{tr}\left(\lambda^{i}\{\lambda^{j}, \lambda^{k}\}\right).\end{split} \tag{23}\]
The last relation between the determinant and structure constants in (eqn 22) can be derived by first multiplying the structure constant definition (eqn 23) (second equation) by \(h^{i}h^{j}h^{k}\) (and summing over these indices as normal), to find
\[\text{tr}\left(H^{3}\right)=2d^{ijk}h^{i}h^{j}h^{k}. \tag{24}\]
Then from the definition of the determinant of a \(3\times 3\) matrix
\[\text{det}(H)=\frac{1}{3!}h^{i}h^{j}h^{k}\epsilon_{a_{1}a_{2}a_{3}}\epsilon_{b _{1}b_{2}b_{3}}\lambda^{i}_{a_{1}b_{1}}\lambda^{j}_{a_{2}b_{2}}\lambda^{k}_{a_ {3}b_{3}}, \tag{25}\]
the Levi-Civita identity \(\epsilon^{a_{1}a_{2}a_{3}}\epsilon_{b_{1}b_{2}b_{3}}=3!\delta^{[a_{1}}_{b_{1}} \delta^{a_{2}}_{b_{2}}\delta^{a_{3}]}_{b_{3}}\) (the index position is irrelevant here), and recalling that the Gell-Mann matrices are traceless (\(\lambda^{i}_{aa}=0\)), one can find
\[\text{det}(H)=\frac{1}{3}\text{tr}\left(H^{3}\right), \tag{26}\]
and thus
\[\text{det}(H)=\frac{2}{3}d^{ijk}h^{i}h^{j}h^{k}. \tag{27}\]
Meanwhile, taking the derivative of the parametric equation (eqn 22) w.r.t \(h^{i}\) gives the needed expression
\[\frac{\partial\mathcal{E}_{n}}{\partial h^{i}}=\frac{2}{3}\frac{h^{i} \mathcal{E}_{n}+d^{ijk}h^{j}h^{k}}{\mathcal{E}_{n}^{2}-a_{1}}, \tag{28}\]
while different solutions to the parametric equation are used here compared to the original paper
\[\mathcal{E}_{n}=2\sqrt{a_{1}}\text{cos}\left[\frac{1}{3}\text{cos}^{-1}\left( \frac{a_{0}}{a_{1}^{3/2}}\right)-\frac{2\pi n}{3}\right],\ \ n\in\{0,1,2\}. \tag{29}\]
These are the solutions to a depressed cubic equation for real solutions, which must be real since \(H\) is Hermitian (one can also check that \(a_{0},a_{1}\in\mathbb{R}\), and \(\frac{a_{0}^{2}}{4}+\frac{a_{1}^{3}}{27}<0\), which imply the solutions are real). The evolution operator is then
\[\mathcal{U}(x)=\frac{1}{3}\sum_{n=0}^{2}\left(1+\frac{\mathcal{E}_{n}H+Y}{ \mathcal{E}_{n}^{2}-a_{1}}\right)\text{exp}\left(-i\frac{\mathcal{E}_{n}x}{2E }\right), \tag{30}\]
with
\[Y\equiv d^{ijk}h^{i}h^{j}\lambda^{k}=H^{2}-2a_{1}\boldsymbol{1}, \tag{31}\]
which can be shown from the first equation of (eqn 23) and multiplied by \(h^{i}h^{j}\) (summing over indices). This last relation (eqn 31) was not in Bushra Shafaq and Faisal Akram's paper [1].
Finally, assuming a (anti)neutrino is produced in a pure flavour state \(\psi_{\alpha\beta}(0)=\delta_{\alpha\beta}\) and recalling \(P_{\nu_{\alpha}\rightarrow\nu_{\beta}}(x)=\left|\psi_{\alpha\beta}(x)\right|^{2}\), one therefore has the transition amplitude
\[\begin{split} P_{\nu_{\alpha}\rightarrow\nu_{\beta}}(L,E)=& \sum_{n,m}\left(X_{n}\right)_{\beta\alpha}\left(X_{m}\right)^{*}_{\beta \alpha}\text{exp}\left[-i\frac{(\mathcal{E}_{n}-\mathcal{E}_{m})\,L}{2E} \right],\\ X_{n}=&\frac{1}{3}\left(\mathbf{1}+\frac{\mathcal{ E}_{n}H+Y}{\mathcal{E}_{n}^{2}-a_{1}}\right),\end{split} \tag{32}\]
where \(x=L\) is the propagation length, as usual. This equation is of course of the same form as the vacuum case, but writing out the effective mass differences and mixing angles is saved for a later section.
### Details and Simplifications in Vacuum
The rest of this paper departs from [1], and is entirely original work. Notice that variable quantities such as \(L\) and \(E\) only appear in the last expression (eqn 32), except for where \(E\) and \(N_{e}\) enter into \(A_{CC}\) at the beginning. Because of the structure of \(H\) in terms of \(A_{CC}\), it will turn out that most calculations can be performed with vacuum settings (\(A_{CC}=0\)), and small modifications added later to take into account matter effects (see the next section). Therefore, here we take a look at the details assuming a vacuum first, where all the associated quantities will be marked with a tilde for clarity \(H_{F}=\tilde{H}_{F}+\mathbb{A}\).
Here \(\tilde{H}_{F}\) is simply \(\tilde{H}_{F}=U\mathbb{M}U^{\dagger}\), and from the cyclic nature of the trace \(\text{tr}[\tilde{H}_{F}]=\text{tr}[\mathbb{M}]\), so that
\[\begin{split}&\tilde{H}=U\mathbb{M}U^{\dagger}-\frac{1}{3}\text{ tr}[\mathbb{M}]\mathbf{1},\\ &\text{tr}[\mathbb{M}]=\Delta m_{21}^{2}+\Delta m_{31}^{2},\end{split} \tag{33}\]
and thus the components are explicitly given by
\[\tilde{H}_{\alpha\beta}=\sum_{f=2,3}\Delta m_{f1}^{2}\left(U_{\alpha f}U^{*}_{ \beta f}-\frac{1}{3}\delta_{\alpha\beta}\right). \tag{34}\]
Now, \(\tilde{a}_{1}\) and \(\tilde{a}_{0}\) can be computed from \(\tilde{h}^{i}\) and \(d^{ijk}\), but for the vacuum case it is easier to use the definitions \(\tilde{a}_{1}=\frac{1}{6}\text{tr}\left[\tilde{H}^{2}\right]\) and \(\tilde{a}_{0}=\frac{1}{2}\text{det}\left(\tilde{H}\right)\). For \(\tilde{a}_{1}\) it is straightforward to show, using (eqn 33)
\[\tilde{a}_{1}=\frac{1}{6}\left[(\Delta m_{21}^{2})^{2}+(\Delta m_{31}^{2})^{2} -\Delta m_{21}^{2}\Delta m_{31}^{2}\right], \tag{35}\]
while for \(\tilde{a}_{0}\), the formula (eqn 21) for \(\text{det}\left(A-\lambda\mathbf{1}\right)\) can be reused, with \(A=U\mathbb{M}U^{\dagger}\) and \(\lambda=\frac{1}{3}\text{tr}[\mathbb{M}]\), so that
\[\begin{split}\tilde{a}_{0}=&\frac{1}{27}\left[( \Delta m_{21}^{2})^{3}+(\Delta m_{31}^{2})^{3}\right]\\ &-\frac{1}{18}\left[(\Delta m_{21}^{2})^{2}\Delta m_{31}^{2}+ \Delta m_{21}^{2}(\Delta m_{31}^{2})^{2}\right],\end{split} \tag{36}\]
where use was made of \(\text{det}\left(U\mathbb{M}U^{\dagger}\right)=\text{det}(U)\text{det}(\mathbb{ M})\text{det}(U^{\dagger})\) and \(\text{det}(\mathbb{M})=0\). Lastly, one can show that
\[\tilde{Y}_{\alpha\beta}=\frac{1}{3}\sum_{f=1}^{3}\left(\Delta m_{f1}^{2} \right)^{2}\left(U_{\alpha f}U^{*}_{\beta f}-\frac{1}{3}\delta_{\alpha\beta} \right), \tag{37}\]
where \(\left(\Delta m_{11}^{2}\right)^{2}\equiv 2\Delta m_{21}^{2}\Delta m_{31}^{2}\) is defined for compactness. So for a vacuum, these quantities can all be substituted in to compute \(\mathcal{E}_{n}\) (eqn 29), \(X_{n}\) and \(P_{\nu_{\alpha}\rightarrow\nu_{\beta}}(L)\) (eqn 32) directly. Notice also that the eigenvalues \(\mathcal{E}_{n}\) only depend on the mass differences here, as one would expect.
### Adding Matter Effects
The values calculated above must be corrected for matter effects. From \(H_{F}=\tilde{H}_{F}+\mathbb{A}\), the traceless matrix \(H\) can be related to the vacuum one \(\tilde{H}\) simply according to \(A_{CC}\)
\[H=\tilde{H}+\frac{1}{3}A_{CC}D,\quad D=\begin{pmatrix}2&0&0\\ 0&-1&0\\ 0&0&-1\end{pmatrix}. \tag{38}\]
Corrections to \(Y\) are also easier to see in matrix notation
\[Y=\tilde{Y}+\frac{1}{3}A_{CC}T+\frac{1}{9}A_{CC}^{2}D, \tag{39}\]
\[T=\begin{pmatrix}2\tilde{H}_{ee}&\tilde{H}_{e\mu}&\tilde{H}_{e\tau}\\ \tilde{H}_{e\mu}^{*}&2\tilde{H}_{\tau\tau}&-2\tilde{H}_{\mu\tau}\\ \tilde{H}_{e\tau}^{*}&-2\tilde{H}_{\mu\tau}^{*}&2\tilde{H}_{\mu\mu}\end{pmatrix}. \tag{40}\]
Since only the diagonal components of \(\tilde{H}\) change, from \(a_{1}=-\frac{1}{2}\text{tr}(H^{2})\) one can find
\[a_{1}=\tilde{a}_{1}+\frac{1}{3}\tilde{H}_{ee}A_{CC}+\frac{1}{9}A_{CC}^{2}, \tag{41}\]
and using the determinant definition of \(a_{0}\), it is modified by
\[\begin{split} a_{0}=&\tilde{a}_{0}+\frac{1}{6}A_{CC}\left( \tilde{H}_{ee}^{2}+2\tilde{H}_{\mu\mu}\tilde{H}_{\tau\tau}-2|\tilde{H}_{\mu \tau}|^{2}\right.\\ &\left.+|\tilde{H}_{e\mu}|^{2}+|\tilde{H}_{e\tau}|^{2}\right)+ \frac{1}{6}A_{CC}^{2}\tilde{H}_{ee}+\frac{1}{27}A_{CC}^{3},\end{split} \tag{42}\]
which one can find is simply
\[a_{0}=\tilde{a}_{0}+\frac{1}{2}\tilde{Y}_{ee}A_{CC}+\frac{1}{6}\tilde{H}_{ee}A_{ CC}^{2}+\frac{1}{27}A_{CC}^{3}. \tag{43}\]
Notice that since the diagonal components of \(\tilde{H}\) are real, so are those of \(\tilde{Y}\), and therefore \(a_{0}\) and \(a_{1}\) and, by extension \(\mathcal{E}_{n}\), are always real. \(X_{n}\) consequently always has real diagonal components, as expected. Finally, also note that \(Y\) and \(H\) (eqn 39, 40) have higher order matter corrections in their diagonal components. This means that survival probabilities appear to be more greatly affected by constant matter density than transition probabilities are.
## III Example algorithms
The following section contains a few specific examples of algorithms one can implement from the method derived above. This is to gather all the relevant information in one convenient place for any reader simply wishing to apply this method.
### Electron (Anti)Neutrino Survival Probability
What turns out to be the simplest example of how this can all be used in an algorithm is shown here. It is composed of two steps: the first performed once to compute some constant values, and the second using these values for each particular (anti)neutrino energy and/or electron density.
First one should compute the following four constant quantities, written here in terms of the mass differences and mixing angles of the standard PMNS matrix parametrisation:
\[\tilde{H}_{ee}=\Delta m_{21}^{2}\left(s_{12}^{2}c_{13}^{2}-\frac{1}{3}\right)+ \Delta m_{31}^{2}\left(s_{13}^{2}-\frac{1}{3}\right), \tag{44}\]
\[\tilde{Y}_{ee}=\frac{1}{3}\bigg{[} \left(\Delta m_{21}^{2}\right)^{2}\left(s_{12}^{2}c_{13}^{2}- \frac{1}{3}\right) \tag{45}\] \[+\left(\Delta m_{31}^{2}\right)^{2}\left(s_{13}^{2}-\frac{1}{3}\right)\] \[+2\Delta m_{21}^{2}\Delta m_{31}^{2}\left(c_{12}^{2}c_{13}^{2}- \frac{1}{3}\right)\bigg{]},\]
\[\tilde{a}_{0}= \frac{1}{27}\left[(\Delta m_{21}^{2})^{3}+(\Delta m_{31}^{2})^{3 }\right] \tag{46}\] \[-\frac{1}{18}\left[(\Delta m_{21}^{2})^{2}\Delta m_{31}^{2}+ \Delta m_{21}^{2}(\Delta m_{31}^{2})^{2}\right],\]
and
\[\tilde{a}_{1}=\frac{1}{6}\left[(\Delta m_{21}^{2})^{2}+(\Delta m_{31}^{2})^{2 }-\Delta m_{21}^{2}\Delta m_{31}^{2}\right]. \tag{47}\]
Then the next part is performed for a given electron (anti)neutrino energy \(E\) and matter electron density \(N_{e}\). First \(A_{CC}\) is computed:
\[A_{CC}=\pm 2\sqrt{2}G_{F}EN_{e}, \tag{48}\]
then the constants corrected for this:
\[H_{ee}=\tilde{H}_{ee}+\frac{2}{3}A_{CC}, \tag{49}\]
\[a_{0}=\tilde{a}_{0}+\frac{1}{2}\tilde{Y}_{ee}A_{CC}+\frac{1}{6}\tilde{H}_{ee} A_{CC}^{2}+\frac{1}{27}A_{CC}^{3}, \tag{50}\]
\[a_{1}=\tilde{a}_{1}+\frac{1}{3}\tilde{H}_{ee}A_{CC}+\frac{1}{9}A_{CC}^{2}, \tag{51}\]
\[Y_{ee}=\tilde{Y}_{ee}+\frac{2}{3}\tilde{H}_{ee}A_{CC}+\frac{2}{9}A_{CC}^{2}. \tag{52}\]
These can then be substituted into
\[\mathcal{E}_{n}=2\sqrt{a_{1}}\text{cos}\left[\frac{1}{3}\text{cos}^{-1}\left( \frac{a_{0}}{a_{1}^{3/2}}\right)-\frac{2\pi n}{3}\right],\ \ n\in\{0,1,2\}. \tag{53}\]
\[\left(X_{n}\right)_{ee}=\frac{1}{3}\left(1+\frac{\mathcal{E}_{n}H_{ee}+Y_{ee }}{\mathcal{E}_{n}^{2}-a_{1}}\right), \tag{54}\]
so that finally
\[P_{\tilde{\nu_{e}}\rightarrow\tilde{\nu_{e}}}=1-4\sum_{n>m}(X_{n})_{ee}(X_{m })_{ee}\text{sin}^{2}\left(\left(\mathcal{E}_{n}-\mathcal{E}_{m}\right)\frac{ L}{4E}\right). \tag{55}\]
The fact that the components \((X_{n})_{ee}\) are real was used to derive the more compact formula (eqn 55). Notice that neither \(\theta_{23}\), nor \(\delta_{13}\) enter this calculation, no matter the electron density. The (anti)electron neutrino survival probability is thus independent of these for constant densities.
### Muon (Anti)Neutrino Survival Probability
One can follow the exact same method for muon neutrinos, with a couple small tweaks. In the first step, the two following real numbers must also be computed
\[\tilde{H}_{\mu\mu}=\Delta m_{21}^{2}\bigg{(} c_{12}^{2}c_{23}^{2}+s_{12}^{2}s_{13}^{2}s_{23}^{2}-2s_{12}s_{13}s_{23}c_{12}c_{23} \text{cos}\delta \tag{56}\] \[-\frac{1}{3}\bigg{)}+\Delta m_{31}^{2}\bigg{(}s_{23}^{2}c_{13}^{2 }-\frac{1}{3}\bigg{)},\]
\[\tilde{Y}_{\mu\mu}=\frac{1}{3}\bigg{[} \left(\Delta m_{21}^{2}\right)^{2}\bigg{(}c_{12}^{2}c_{23}^{2}+s_{1 2}^{2}s_{13}^{2}s_{23}^{2} \tag{57}\] \[-2s_{12}s_{13}s_{23}c_{12}c_{23}\text{cos}\delta-\frac{1}{3} \bigg{)}\] \[+\left(\Delta m_{31}^{2}\right)^{2}\bigg{(}s_{23}^{2}c_{13}^{2}- \frac{1}{3}\bigg{)}\] \[+2\Delta m_{21}^{2}\Delta m_{31}^{2}\bigg{(}s_{12}^{2}c_{23}^{2}+ s_{13}^{2}s_{23}^{2}c_{12}^{2}\] \[+2s_{12}s_{13}s_{23}c_{12}c_{23}\text{cos}\delta-\frac{1}{3} \bigg{)}\bigg{]},\]
on top of the four constants \(\tilde{a}_{1}\), \(\tilde{a}_{0}\), \(\tilde{H}_{ee}\) and \(\tilde{Y}_{ee}\) which are needed for all flavour oscillations. Now, only \(a_{1}\) and \(a_{0}\) of these must be corrected with \(A_{CC}\) (eqn 50, 51), along with
\[H_{\mu\mu}=\tilde{H}_{\mu\mu}-\frac{1}{3}A_{CC}, \tag{58}\]
\[Y_{\mu\mu}=\tilde{Y}_{\mu\mu}-\frac{2}{3}\left(\tilde{H}_{ee}+\tilde{H}_{\mu\mu} \right)A_{CC}-\frac{1}{9}A_{CC}^{2}, \tag{59}\]
where \(\tilde{H}_{\tau\tau}=-\left(\tilde{H}_{ee}+\tilde{H}_{\mu\mu}\right)\) was used to arrive at (eqn 59). Then the eigenvalues \(\mathcal{E}_{n}\) are computed exactly as before (eqn 53), and all substituted into the formulae
\[\left(X_{n}\right)_{\mu\mu}=\frac{1}{3}\left(1+\frac{\mathcal{E}_{n}H_{\mu\mu} +Y_{\mu\mu}}{\mathcal{E}_{n}^{2}-a_{1}}\right), \tag{60}\]
\[P_{\tilde{\nu_{\mu}}\rightarrow\tilde{\nu_{\mu}}}=1-4\sum_{n>m}(X_{n})_{\mu} (X_{m})_{\mu}\text{sin}^{2}\left(\left(\mathcal{E}_{n}-\mathcal{E}_{m}\right) \frac{L}{4E}\right). \tag{61}\]
### Electron (Anti)Neutrino Appearance
Another more complex example is muon (anti)neutrino to electron (anti)neutrino transition probability. Just as before, first compute the vacuum values \(\tilde{a}_{1}\), \(\tilde{a}_{0}\), \(\tilde{H}_{ee}\) and \(\tilde{Y}_{ee}\). However, in this case since we are dealing with transition probabilities, one must additionally compute the two complex numbers
\[\tilde{H}_{e\mu}= \Delta m_{21}^{2}s_{12}c_{13}\left(c_{12}c_{23}-s_{12}s_{23}s_{1 3}e^{-i\delta}\right) \tag{62}\] \[+\Delta m_{31}^{2}s_{13}s_{23}c_{13}e^{-i\delta},\]
and
\[\tilde{Y}_{e\mu}=\frac{1}{3}\bigg{[} \left(\Delta m_{21}^{2}\right)^{2}s_{12}c_{13}\left(c_{12}c_{23}- s_{12}s_{13}s_{23}e^{-i\delta}\right) \tag{63}\] \[+\left(\Delta m_{31}^{2}\right)^{2}s_{13}s_{23}c_{13}e^{-i\delta}\] \[-2\Delta m_{21}^{2}\Delta m_{31}^{2}c_{12}c_{13}\left(s_{12}c_{23 }+s_{13}s_{23}c_{12}e^{-i\delta}\right)\bigg{]}.\]
There are thus overall eight real quantities \(\tilde{H}_{ee}\), \(\tilde{a}_{0}\), \(\tilde{a}_{1}\), \(\tilde{Y}_{ee}\), \(\Re[\tilde{Y}_{e\mu}]\), \(\Im[\tilde{Y}_{e\mu}]\), \(\Re[\tilde{H}_{e\mu}]\) and \(\Im[\tilde{H}_{e\mu}]\) that are pre-computed and passed on to be corrected at run time for matter effects.
With a given (anti)neutrino energy and electron density, \(A_{CC}\) is computed as in (eqn 48), then \(a_{0}\), \(a_{1}\) and \(\mathcal{E}_{n}\) are corrected and computed respectively just as previously (eqn 50, 51, 53). These are then substituted into
\[\left(X_{n}\right)_{e\mu}=\frac{1}{3}\frac{\left(\mathcal{E}_{n}+\frac{1}{3}A _{CC}\right)\tilde{H}_{e\mu}+\tilde{Y}_{e\mu}}{\mathcal{E}_{n}^{2}-a_{1}}, \tag{64}\]
so that finally
\[P_{\tilde{\nu_{\mu}}\rightarrow\tilde{\nu_{e}}}(L)= -4\sum_{n>m}\left(R_{n}R_{m}+I_{n}I_{m}\right)\text{sin}^{2} \left(\left(\mathcal{E}_{n}-\mathcal{E}_{m}\right)\frac{L}{4E}\right) \tag{65}\] \[\pm 2\sum_{n>m}\left(I_{n}R_{m}-R_{n}I_{m}\right)\text{sin}\left( \left(\mathcal{E}_{n}-\mathcal{E}_{m}\right)\frac{L}{2E}\right),\]
where the \(\pm\) is positive for neutrinos and negative for antineutrinos, while
\[R_{n}\equiv\Re\left[\left(X_{n}\right)_{e\mu}\right], \tag{66}\] \[I_{n}\equiv\Im\left[\left(X_{n}\right)_{e\mu}\right].\]
## IV Results
### Speed Comparison for Example Algorithm
In order to get a sense of the speed of this algorithm, calculations of various neutrino oscillation probabilities in constant matter density were performed by a the above algorithms, written in C++. The same was then done with the widely used GLoBES package [6] - also written in C++ - and the results and processing times of the two were compared. Use was made of the _glbConstantDensityProbability()_ function, which computes the transition or survival probability between any two neutrino flavours for constant matter density, neutrino energy and baseline. It does this by diagonalising the Hamiltonian with various numerical or analytic methods, depending on which is fastest [7; 9]. It is a fast and reliable method, but all matrix elements must be recomputed for each change in neutrino energy \(E\), baseline \(L\) and matter density \(\rho\), while this paper's method must only re-perform part of the calculations.
#### iv.1.1 Method
First, an initialisation step is performed, where the pre-computed constants above are calculated and GLoBES is initialised. This step was not timed since it need only be performed once and so will not scale with the number of calculations. However, note that the GLoBES initialisation takes longer since the package includes many more functionalities than just constant matter neutrino oscillations.
Second, for a given flavour transition, and a range of 100 neutrino energies, 100 baselines and 100 matter densities, the oscillation probabilities were computed using three different functions: the GLoBES function, a general flavour version of this algorithm (Section II), and then the version of this algorithm tailored to the specific flavour transition (Section III). The results and total computation times (CPU time as measured by the C++ standard library _std::clock()_ function) of these three were recorded. This process was repeated 50 times to obtain a measure of statistical uncertainty.
The ranges of neutrino energies \(E\), baselines \(L\) and matter densities \(\rho\) were evenly spaced values in some range
\[E_{\text{min}}\leq E\leq E_{\text{max}}, \tag{67}\] \[L_{\text{min}}\leq L\leq L_{\text{max}},\] \[\rho_{\text{min}}\leq\rho\leq\rho_{\text{max}},\]
where the minima were always fixed (\(E_{\text{min}}=0.5\) MeV, \(L_{\text{min}}=0.01\) km, \(\rho_{\text{min}}=0\) g/cm\({}^{3}\)). Therefore, for a given set of maxima (\(E_{\text{max}}\), \(L_{\text{max}}\), \(\rho_{\text{max}}\)), each function was called 50 million times (\(100\times 100\times 100\times 50\)).
#### iv.1.2 Results
As alluded to, the whole process was performed for various maximum values, to discern any \(E\), \(L\) or \(\rho\) dependence on the
results. Ten different values for each were used, according to
\[E_{\text{max}}\in\{x\ :\ E_{\text{min}}\leq x\leq 1000\text{ MeV}\}, \tag{68}\] \[L_{\text{max}}\in\{x\ :\ L_{\text{min}}\leq x\leq 1000\text{ km}\},\] \[\rho_{\text{max}}\in\{x\ :\ \rho_{\text{min}}\leq x\leq 100\text{ g/cm}^{3}\},\]
so that the whole method above was carried out one thousand times.
The computed probabilities were always exactly the same (having copied any unit conversion factors from the GLoBES code), so are not shown here. However, the computation times are presented in figure 1 for three example oscillations. Dependence on the three \(E_{\text{max}}\), \(L_{\text{max}}\) and \(\rho_{\text{max}}\) parameters is shown separately. The plot showing \(E_{\text{max}}\) dependence averages over all \(L_{\text{max}}\) and \(\rho_{\text{max}}\) dependence, and likewise for the other two cases. Statistical error was propagated throughout, and added quadratically to a systematic error or \(\sigma_{\text{sys}}=0.01\) s, being the resolution limit of the CPU time measuring function.
For zero matter density, the GLoBES calculation appears slightly faster, though is still within one standard deviation from this paper's algorithms. Every other configuration shows these to be almost or around twice as fast as GLoBES. To be specific, if \(T_{\text{GLoBES}}\) is the CPU time taken by the GLoBES function averaged over all the data above, and likewise \(T_{\text{general}}\) and \(T_{\text{specific}}\) are the CPU times taken by the general and specific flavour algorithms from this paper respectively,
\[\frac{T_{\text{GLoBES}}}{T_{\text{general}}}=1.82\pm 0.34,\ \ \ \ \frac{T_{\text{GLoBES}}}{T_{\text{ specific}}}=1.98\pm 0.36. \tag{69}\]
Most of the uncertainty comes from the variability in GLoBES' computation time.
Additionally, following discussion with Peter Denton, the \(\overset{(-)}{\nu}_{e}\) appearance algorithm was compared to other calculations described in [11]. Use was made of Peter Denton's code on github, where a fork was made ([https://github.com/Jamicus96/Nu-Pert-Compare](https://github.com/Jamicus96/Nu-Pert-Compare)) to add this paper's algorithm in two separate cases:
* The first step, or initialisation of quantities in vacuum, is performed separately, before any speed comparison (branch "compare_JP_precomp").
* The initialisation is included in the speed comparison (branch "compare_JP").
It was found that the former was marginally faster than the fastest exact calculation in Peter Denton's code package ("ZS") - on the order of 6% faster - while the latter slightly slower - around 17% slower. It was estimated that accounting for the initialisation time, this paper's algorithm becomes faster than the "ZS" algorithm after 3 loops (3 probability calculations for different neutrino energies). The differences here are small, and one must bear in mind that some approximate solutions described in [11], and included in the code, are significantly faster.
### Effective Parameters
From the correspondence between the PMNS matrix and \(X_{n}\) matrices found earlier (eqn 32), as well as that between the eigenvalue differences and the mass differences, one can find the effective parameters (including matter effects)
\[\widehat{\Delta m^{2}_{kj}}=\mathcal{E}_{m}-\mathcal{E}_{n}, \tag{70}\] \[\widehat{U}_{\alpha k}\widehat{U}^{*}_{\beta k}=\left(X_{n} \right)_{\alpha\beta},\]
for some relationship between \((k,j)\) and \((m,n)\) indices. To deduce this relationship, note that (70) must hold for the vac
Figure 1: Comparison of the CPU time taken by new neutrino oscillation algorithms against the same GLoBES calculation.
uum case, so the relationship need only be shown for that simplified case. Now, the eigenvalues of the traceless matrix \(\widetilde{H}\) (13) in the vacuum case are clearly
\[\begin{split}\lambda_{1}&=-\frac{1}{3}\left(\Delta m _{21}^{2}+\Delta m_{31}^{2}\right),\\ \lambda_{2}&=\frac{2}{3}\Delta m_{21}^{2}-\frac{1}{ 3}\Delta m_{31}^{2},\\ \lambda_{3}&=-\frac{1}{3}\Delta m_{21}^{2}+\frac{2}{ 3}\Delta m_{31}^{2},\end{split} \tag{71}\]
so the vacuum eigenvalues \(\tilde{\mathcal{E}}_{n}\) must be assigned to these in some order. Next, looking at the definition of \(\mathcal{E}_{n}\) (29), the \(\frac{1}{3}\text{cos}^{-1}(...)\) term is always between \(0\) and \(\frac{\pi}{3}\). This means that
\[\mathcal{E}_{0}\;>\;\mathcal{E}_{1}\;>\;\mathcal{E}_{2}, \tag{72}\]
and \(\mathcal{E}_{0}>0\) always hold. Therefore, the ordering of \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) determines their relationship. This ordering depends on the mass ordering itself: for Normal Ordering (NO) \(\lambda_{3}>0>\lambda_{2}>\lambda_{1}\), and for Inverted Ordering (IO) \(\lambda_{2}>\lambda_{1}>0>\lambda_{3}\). Therefore, the relationship between the indices in (70) is shown in table 1.
For example in normal ordering, \(\widehat{\Delta m_{21}^{2}}=\mathcal{E}_{1}-\mathcal{E}_{2}\) and \(|\widehat{U}_{\text{e3}}|^{2}=\left(X_{0}\right)_{ee}\). Thus, the effective mixing angles can be evaluated in terms of these too, such as in normal ordering for example:
\[\begin{split}\widehat{s_{13}}^{2}&=\left(X_{0} \right)_{ee},\\ \widehat{s_{12}}^{2}&=\frac{\left(X_{1}\right)_{ee}}{ \widehat{c_{13}}^{2}},\\ \widehat{s_{23}}^{2}&=\frac{\left(X_{0}\right)_{\mu \mu}}{\widehat{c_{13}}^{2}},\\ \text{cos}\widehat{\delta_{13}}&=\frac{\widehat{c_{12 }}^{2}\widehat{c_{23}}^{2}+\widehat{s_{12}}^{2}\widehat{s_{23}}^{2}\widehat{ s_{13}}^{2}-\left(X_{1}\right)_{\mu\mu}}{2\widehat{c_{12}}\widehat{c_{23}} \widehat{s_{12}}\widehat{s_{23}}\widehat{s_{13}}}.\end{split} \tag{73}\]
Notice that any dependence on energy or electron density in these parameters comes only from factors of \(A_{CC}\). Their values can thus be plotted on a simple graph against \(A_{CC}\propto EN_{e}\), without having to vary \(E\) and \(N_{e}\) independently. See figure (a)a for the fractional scaling of these parameters in the MeV neutrino energy scale (for lithospheric electron density). To see large scale absolute changes, such as mass differences changing ordering, one must go at least above the TeV scale, as shown in figure (b)b. As one might expect, "regime changes" (sudden changes in evolution of effective parameters) appear when \(A_{CC}\) reaches the same scale as the mass differences. Notice also that \(\widehat{\Delta m_{21}^{2}}\) and \(\widehat{s_{12}}^{-2}\) are the most strongly affected by matter in the MeV scale.
Furthermore, one can use these effective oscillation parameters in short and long baseline approximate formulae, just as one would with the vacuum constants. For example the long baseline approximation
\[P=1-\frac{1}{2}\text{sin}^{2}\left(2\theta_{13}\right)-\text{sin}^{2}\left(2 \theta_{12}\right)c_{13}^{2}\text{sin}^{2}\left(\frac{L\Delta m_{21}^{2}}{4E} \right), \tag{74}\]
becomes, for Normal Ordering,
\[P=1-4\left(X_{1}\right)_{ee}\left(X_{2}\right)_{ee}-2\left(X_{0}\right)_{ee} \text{sin}^{2}\left(\frac{L\left(\mathcal{E}_{1}-\mathcal{E}_{2}\right)}{4E} \right), \tag{75}\]
which follows the full matter effect oscillation formula more closely.
\begin{table}
\begin{tabular}{|c|c|c|} \hline (k, j) & NO (m, n) & IO (m, n) \\ \hline
3 & 0 & 2 \\
2 & 1 & 0 \\
1 & 2 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Index correspondence for effective parameters from equation 70, for Normal Ordering (NO) and Inverted Ordering (IO).
Conclusions
Clearly, the derived algorithm for constant matter density transition and survival probabilities is relatively simple and efficient compared with both numerical and previous analytic solutions [4; 5; 6; 7; 8; 9].
First the fact that this is an exact solution allows one to draw useful information such as anti electron neutrino survival probability being independent of the CP-phase for constant matter densities, and the effective parameters being only dependent on \(A_{CC}(E,N_{e})\) - not independently on \(E\) and \(N_{e}\).
Second, the performance of the example algorithms is noteworthy, computing roughly two times faster than the GLoBES package for non-zero matter density. Meanwhile, GLoBES is roughly as fast for zero matter density, being less than one standard deviation away.
At the very least, this presents a relatively easy to implement and fast tool to compute oscillation probabilities for approximately constant matter density profiles, which scales up well for large numbers of calculations. It does not require the use and implementation of an entire package such as GLoBES, when only a simple oscillation probability is desired.
|
2308.00080 | Volume of Tubes and Concentration of Measure in Riemannian Geometry | We investigate the notion of concentration locus introduced in
\cite{CacUrs22}, in the case of Riemann manifolds sequences and its
relationship with the volume of tubes. After providing a general formula for
the volume of a tube around a Riemannian submanifold of a Riemannian manifold,
we specialize it to the case of totally geodesic submanifolds of compact
symmetric spaces. In the case of codimension one, we prove explicitly
concentration. Then, we investigate for possible characterizations of
concentration loci in terms of Wasserstein and Box distances. | S. L. Cacciatori, P. Ursino | 2023-07-31T18:53:32Z | http://arxiv.org/abs/2308.00080v1 | # Volume of tubes and concentration of measure in Riemannian geometry
###### Abstract.
We investigate the notion of concentration locus introduced in [CacUrs22], in the case of Riemann manifolds sequences and its relationship with the volume of tubes. After providing a general formula for the volume of a tube around a Riemannian submanifold of a Riemannian manifold, we specialize it to the case of totally geodesic submanifolds of compact symmetric spaces. In the case of codimension one, we prove explicitly concentration. Then, we investigate for possible characterizations of concentration loci in terms of Wasserstein and Box distances.
## Introduction
A concentration locus is roughly speaking a sequence of sub-manifolds \((M_{n},\sigma_{n},g_{n})\) (where \(g_{n}\) is the geodetic metric and \(\mu_{n}\) the volume measure) which approximates the concentration behaviour of the manifolds \((N_{n},\mu_{n},g_{n})\) where they are embedded. In a sense, the concentration character of the "big" sequence is fully determined by the "thin" one. This phenomenon is particularly significant, from the point of view of applications, whenever it is possible to single out, inside a sequence of manifolds of unknown concentration behaviour, a sequence of much simpler sub-manifolds which is a concentration locus and therefore, provided the concentration behaviour of the sub-manifolds is known, it determines the concentration behaviour of the big one.
Both sequences can be regarded as sequences of metric-measurable spaces (mm-spaces). In the space \(\mathfrak{M}\) of all mm-spaces Gromov, in his celebrated green book [Gro99], defines the notion of observable distance (\(d_{conc}\) in [Shi]) which fully generalizes the classical phenomenon of concentration of measure to a point and Levy family (see for example [GM]). Practically, we say that a sequence of mm-spaces concentrates to an mm-space, whenever the former \(d_{conc}\) converges to the latter.
Our primary goal consists in finding a way to detect if a sequence of sub-manifolds is or is not a concentration locus by investigating the sequences of volumes of tubes built around the sub-manifolds, with decreasing rays.
The problem of calculating the volume of a tube in a Riemann manifold is very interesting in itself and it is treated in Section 1. We start from the well-known article of Weyl [We] and the beautiful book of Gray [Gray]. We succeed, using the approach of Gray, in finding a general formula for the volume of tubes which involves the codimension, the ray of the tube, the curvature of the ambient manifold and the killing curvatures of the sub-manifold. Unfortunately, it is widely useless in computing concentration locus unless you have an estimate of the curvature derivatives of every order. The case of symmetric spaces is much more practicable, instead. Indeed, we find a concrete formula and we are able to extend the system of coordinates used for calculating the volume of the tubes to the entire ambient manifold and therefore calculate the asymptotic conditions to detect a concentration locus. We believe that our results can be easily extended to the case of homogeneous spaces.
Observable distance is a very difficult tool to deal with, fortunately, there are more practical distances that are related to \(d_{conc}\). For example, Wasserstein distance \(d_{W}\), which is related to optimal transport, and \(d_{box}\), which makes \(\mathfrak{M}\) a complete metric space. The following holds \(d_{W}\Rightarrow d_{box}\Rightarrow d_{conc}\)
## 1. Introduction
Let \(X\) be a Banach space and let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space and let \(\mathbb{C}\) be a Banach space. Let \(\mathbb{C}\) be a Banach space.
\(\epsilon>0\) we set
\[B_{d}(x,\epsilon):=\{y\in X\mid d(x,y)<\epsilon\}\ \ B_{d}(A,\epsilon):=\{y\in X \mid\exists a\in A;d(x,y)<\epsilon\}.\]
Then the Hausdorff distance between any two subsets \(A,B\subseteq X\) is given by
\[d_{H}(A,B):=\inf\{\epsilon>0\mid B\subseteq B_{d}(A,\epsilon),\ A\subseteq B_{ d}(B,\epsilon)\}.\]
For \(l,r\geq 0\), we denote by \(Lip_{l}(X,d)\) the set of all \(l\)-Lipschitz real-valued functions on \((X,d)\), and we define
\[Lip_{l}^{\infty}(X,d):=Lip_{l}(X,d)\cap l^{\infty}(X),\ Lip_{l}^{r}(X,d):=\{f \in Lip_{l}(X,d)\mid\|f\|_{\infty}\leq r\}.\]
Moreover, we set \(Lip(X,d):=\bigcup\{Lip_{l}(X,d)\mid l\geq 0\}\) and \(Lip(X,d)^{\infty}:=Lip(X,d)\cap l^{\infty}(X)\).
Whenever \((X,d)\) is a separable metric space, the Wasserstein distance \(W_{1}(\mu,\nu)\)1 is a compatible metric for a weak topology on \(P(X)\) defined by
Footnote 1: Different names appearing in the literature include Monge-Kontorovich distance, bounded Lipschitz distance, mass transportation distance, and Fortet-Mourier distance [20]
\[W_{1}(\mu,\nu):=\sup_{f\in Lip_{1}^{1}(X,d)}\left|\int fd\mu-\int fd\nu\right| \ \ (\mu.\nu\in P(X)).\]
**Definition 1**.: _(Gromov-Milman [1]) A space with a metric \(g\) and a measure \(\mu\), or an mm-space, is a triple \((X,\mu,g)\), consisting of a set \(X\), a metric \(g\) on \(X\) and a probability Borel measure such that \((X,g)\) is a separable complete metric space._
Moreover, an mm-space \((X,\mu,d)\) is called compact if \((X,d)\) is compact, and fully supported if \(spt\mu=X\). Henceforth, we will denote by \(\lambda\) the Lebesgue measure on \([0,1)\).
A parametrization of an mm-space \((X,\mu,d)\) is a Borel measurable map \(\phi:[0,1)\to X\) such that \(\sharp\phi\lambda=\mu\). It is well known that any mm-space admits a parametrization (see, e.g. [12]). In the set of isomorphism classes of mm-spaces, \(\mathfrak{M}\), we can define the box distance, \(d_{box}\), that we are going to define ( [12]). For two pseudo-metrics \(\rho_{1}\) and \(\rho_{1}\) on the unit interval \(I\), we define \(d_{box}(\rho_{1},\rho_{2})\) to be the infimum of \(\epsilon>0\) satisfying that there exists a Borel subset \(I_{0}\subseteq I\) such that
1. \(|\rho_{1}(s,t),\rho_{2}(s,t)|\leq\epsilon\) for any \(s,t\in I_{0}\),
2. \(\mathfrak{L}^{1}(I_{0})\geq 1-\epsilon\) where \(\mathfrak{L}^{1}\) denotes the one-dimensional Lebesgue measure.
**Definition 2**.: _let \(X\) be a topological space with a Borel probability measure \(\mu_{X}\). A map \(\varphi:I\to X\) is called a parameter of \(X\) if \(\varphi\) is a Borel measurable map such that_
\(\sharp\varphi\mathfrak{L}^{1}=\mu_{X}\)__
We define box distance \(d_{box}\) between two isomorphism classes of mm-spaces \(X,Y\) to be the infimum of \(d_{box}(\sharp\varphi d_{X},\sharp\psi d_{Y})\) where \(\varphi:I\to X\) and \(\psi:I\to Y\) run over all parameters of \(X\) and \(Y\), respectively, and where \(\sharp\varphi d_{X}(s,t):=d_{X}(\varphi(s),\varphi(t))\).
**Definition 3**.: _(Gromov-Milman [1][1]) In the set of isomorphism classes of mm-spaces we can define the following distance:_
\[d_{conc}(X,Y):=\inf\{(me_{\lambda})_{H}(Lip_{1}(X)\circ\phi,Lip_{1}(Y)\circ \psi\mid\phi\text{ param. of }X,\psi\text{ param. of }Y\}.\]
Two mm-spaces \(X\) and \(Y\) are isomorphic if there exists an isomorphism between mm-spaces \((X,\mu,d),(Y,\nu,d^{\prime})\) i.e an isometry
\[f:(spt\mu,d\mid X)\rightarrow(spt\nu,d^{\prime}\mid Y)\]
such that \(\sharp f(\mu\mid spt\mu)=\nu\mid spt\nu\). A sequence of mm-spaces \((X_{n},\mu_{n},g_{n})\) is said to concentrate to an mm-space \((X,\mu,g)\) if
\[\lim_{n}d_{\text{conc}}(X_{n},X)=0.\]
In this case, we denote \((X,\mu,g)\) as a concentration set for the sequence of mm-spaces \((X_{n},\mu_{n},g_{n})\). Finally, let us recall the definition of Concentration Locus as defined in [10]
**Definition 4**.: _Let \(\{X_{n},\mu_{n}\}_{n\in\mathbb{N}}\) be a family of metric spaces with metrics \(d_{n}\), and Borel's measures \(\mu_{n}\) w.r.t. which nonempty open set have non-vanishing measure. Assume the measures to be normalized, \(\mu_{n}(X_{n})=1\). Let \(\{S_{n}\}_{n\in\mathbb{N}}\) be a family of proper closed subsets, \(S_{n}\subset X_{n}\). Fix a sequence \(\{\varepsilon_{n}\}_{n\in\mathbb{N}}\) such that \(\varepsilon_{n}>0\), \(\lim_{n\to\infty}\varepsilon_{n}=0\), and let \(\{U_{n}^{\varepsilon_{n}}\}_{n\in\mathbb{N}}\) be the sequence of tubular neighbourhoods of \(S_{n}\) of radius \(\varepsilon_{n}\). We say that the family \(\{S_{n}\}\) is a Concentration Locus if_
\[\lim_{n\to\infty}\mu_{n}(X_{n}-U_{n}^{\varepsilon_{n}})=0. \tag{0.1}\]
_Moreover, if such a sequence \(\varepsilon_{n}\) converges to 0 at rate \(k\) (so that \(\lim_{n\to\infty}n^{k}\varepsilon_{n}=c\) for some constant \(c\)), we say that the family \(\{S_{n}\}\) is a Concentration Locus at least at rate \(k\)._
## 1. Volume of tubes in compact manifolds.
Let \(M\) be a compact Riemannian submanifold \(M\subseteq N\) of codimension \(q\) of a manifold \(N\). We call \(M_{\varepsilon}\) the tube generated by all geodetic segments of length \(\epsilon\), outgoing perpendicularly from \(M\). It is a well-known result established by Weyl that the volume of the tube can be expressed in terms of the Lipschitz-Killing curvatures \(K_{2j}\) of \(M\) and the ambient space, and the codimension \(q\). More precisely:
**Theorem 1** (Weyl, [11]).: _Let \(M\) be a compact Riemannian submanifold of \(\mathbb{R}^{N}\) of codimension \(q=N-n\). Let \(M_{\varepsilon}\) a tubular neighbourhood of \(M\) of radius \(\varepsilon\). Then, for all \(r>0\) sufficiently small, it holds_
\[\operatorname{Vol}_{\mathbb{R}^{N}}(M_{\varepsilon})=\frac{\pi^{\frac{q}{2}} \varepsilon^{q}}{\Gamma(\frac{q}{2}+1)}\left(K_{0}(M)+\sum_{j=1}^{\lfloor n/ 2\rfloor}\frac{K_{2j}(M)\varepsilon^{2j}}{(q+2)(q+4)\cdots(q+2j)}\right), \tag{1.1}\]
_where_
\[K_{2j}(M)=\int_{M}k_{2j}(\Omega), \tag{1.2}\]
_are the integrated Lipschitz-Killing curvatures, and \(\Omega\) is the curvature 2-form of \(M\)._
Remember that if \(e^{a}\) is the dual basis to an orthonormal frame \(V_{a}\), \(a=1,\ldots,n\), then
\[k_{2j}(\Omega)=\frac{1}{2^{j}j!(n-2j)!}\sum_{\sigma\in S_{n}}\epsilon_{\sigma }\Omega_{\sigma(1)\sigma(2)}\wedge\cdots\wedge\Omega_{\sigma(2j-1)\sigma(2j )}e^{\sigma(2j+1)}\wedge\cdots\wedge e^{\sigma(n)}, \tag{1.3}\]
where \(S_{n}\) is the set of permutations of \(n\) elements and \(\epsilon_{\sigma}\) the sign of the permutation. In particular,
\[k_{0}(\Omega)=d\operatorname{Vol}_{M}, \tag{1.4}\]
is the volume form on \(M\),
\[k_{2}(\Omega)= \frac{1}{2}R\ d\operatorname{Vol}_{M}, \tag{1.5}\] \[k_{n}(\Omega)= \operatorname{Pf}(\Omega), \tag{1.6}\]
where \(R\) is the scalar curvature of \(M\) and \(\operatorname{Pf}\) the Pfaffian. Finally, notice that
\[\frac{\pi^{\frac{q}{2}}\varepsilon^{q}}{\Gamma(\frac{q}{2}+1)}=\operatorname{ Vol}_{\mathbb{R}^{q}}(D_{\varepsilon}) \tag{1.7}\]
is the volume of the \(q\) dimensional disc of radius \(\varepsilon\) in \(\mathbb{R}^{q}\). So, if we define the _mean Lipschitz-Killing curvatures_\(\kappa_{2j}\) as
\[\kappa_{2j}=\frac{K_{2j}(M)}{\operatorname{Vol}_{M}(M)}, \tag{1.8}\]
then we can rewrite Weyl's formula as
\[\operatorname{Vol}_{\mathbb{R}^{N}}(M_{\varepsilon})=\operatorname{Vol}_{M}(M) \operatorname{Vol}_{\mathbb{R}^{q}}(D_{\varepsilon})\left(1+\sum_{j=1}^{\lfloor n /2\rfloor}\frac{\kappa_{2j}(M)\varepsilon^{2j}}{(q+2)(q+4)\cdots(q+2j)} \right). \tag{1.9}\]
We are interested in understanding the volumes of tubular neighborhoods of submanifolds of compact manifolds (and, in general, on manifolds with positive curvature). However, it is interesting to do some general considerations on this formula before discussing the more general case.
Since there is no curvature in the directions of \(\mathbb{R}^{N}\) orthogonal to \(M\), the deformations of the volume are only due to the bending of \(M\). If \(M\) is flat, then the volume of the tube is just the product of the volumes of the submanifold and the disc. The terms in the parenthesis then give the contributions of the deformations of the tube to the volume, when we bend the tube along a curved \(M\). For example, if we bend the tube neighborhood of a segment to the one of a circle, the tube will be compressed along the most internal circle and stretched along the most external one. However, the volume doesn't change (if we don't change the length of the segment), indeed the scalar curvature of the circle is \(R_{S^{1}}=0\).
Now, since \(\kappa_{2j}\) are mean curvatures, we can get some hints about the dependence on curvatures by considering \(M\) to be a manifold of constant sectional curvature \(1/r^{2}\). In this case, one has
\[\varepsilon^{2j}\kappa_{2j}(\Omega)=\frac{n!}{2^{j}j!(n-2j)!}\left(\frac{ \varepsilon}{r}\right)^{2j}. \tag{1.10}\]
We want to see under which conditions the curvature terms become relevant in the formula (1.9) when \(n\) grows. In general, this also could imply that generically also \(q\) grows. Keeping \(j\) fixed, we see that Stirling's formula implies
\[\varepsilon^{2j}\kappa_{2j}(\Omega)\approx\frac{1}{2^{j}j!}\left(\frac{n \varepsilon}{r}\right)^{2j}. \tag{1.11}\]
If \(\varepsilon/r\) is small but constant, the curvature terms become dominant for large \(n\), at least if \(q\) is constant. Despite we are considering spheres embedded in \(\mathbb{R}^{n+q}\), let us for a moment imagine assuming \(q=1\) and that \(\mathbb{R}^{n+q}\) is replaced by \(S_{r}^{n+q}\). In this case, it is well known that the measure of the whole sphere concentrates in a tube of radius \(\varepsilon\sim rn^{-\frac{1}{2}}\) around the equator \(M\). In this situation, we see that
\[\varepsilon^{2j}\kappa_{2j}(\Omega)\approx\frac{1}{2^{j}j!}n^{j}, \tag{1.12}\]
so the curvature terms are dominant w.r.t. the \(1\). If also the codimension \(q\equiv q_{n}\) increases unboundedly, then, including the denominators, we see that the contributions are of the order
\[\frac{1}{2^{j}j!}(n/q_{n})^{j}, \tag{1.13}\]
so that the dominance of the curvatures persists if \(q_{n}\) grows slower than \(n^{1-a}\) for any fixed arbitrarily small but positive \(a\). This is also another well-known condition for concentration. Why should we consider these considerations acceptable if we replace the flat ambient space with spheres? The reason is that the spheres have curvature \(1/r^{2}\) much smaller than the inverse square radius of the tube, \(\sim n/r^{2}\). This suggests that in general, we may obtain the same results if we have an estimation of the
bound of the curvature of the ambient manifold \(N\) and a control on the error we make in using the flat formulas as a function of the radius of the tube.
Since we are interested in compact manifolds, we recall that Weyl deduced the exact formula for the case when the embedding space is a sphere. From this result we can deduce:
**Proposition 1**.: _Let \(M\subset S^{n+q}_{R}\) a \(n\) dimensional smooth compact submanifold of codimension \(q\) of a sphere of radius \(R\), and \(M_{\varepsilon}\) the tube of radius \(\varepsilon\) around \(M\). Then, we can write_
\[\operatorname{Vol}_{S^{n+q}_{R}}(M_{\varepsilon})=\operatorname{Vol}^{flat}(M_ {\varepsilon})\left(1+o(\varepsilon/R)\right), \tag{1.14}\]
_where \(\operatorname{Vol}^{flat}_{N}(M_{\varepsilon})\equiv\operatorname{Vol}_{ \mathbb{R}^{n+q}}(M_{\varepsilon})\) is given by Weyl's formula for the embedding in the flat \(\mathbb{R}^{n+q}\)._
**Remark:**_Before giving the proof, let us notice that this proposition has an immediate consequence: if \(\varepsilon\ll R\) then we can use the result of the previous section to deduce the properties of concentration of the measure around \(M\), up to a relative error controlled by \(\varepsilon/R\)._
Proof.: Let us set \(N=n+q\). In [We], Weyl deduced the following formula for the volume of a tube of "radius \(a\)" in a sphere of radius \(R\):
\[\operatorname{Vol}_{S^{N}}(M_{\varepsilon})=\frac{2\pi^{\frac{q}{2}}}{\Gamma (\frac{q}{2})}\sum_{j=0}^{\lfloor n/2\rfloor}\frac{K_{2j}(M)R^{2j+q}}{q(q+2) \cdots(q+2j-2)}\int_{0}^{\frac{\varepsilon}{R}}(\sin\rho)^{q+2j-1}(\cos\rho)^ {n-2j}d\rho. \tag{1.15}\]
Here, the radius \(a\) is not the geodesic radius along the sphere but the Euclidean radius in the tangent space. It is related to the geodesic radius \(\varepsilon\) through the relation \(a=R\tan\frac{\varepsilon}{R}\). If we use the change of variable \(x=\sin^{2}\rho/\sin^{2}(\varepsilon/R)\) we get
\[\int_{0}^{\frac{\varepsilon}{R}}(\sin\rho)^{q+2j-1}(\cos\rho)^{n- 2j}d\rho= \frac{1}{2}(\sin(\varepsilon/R))^{q+2j}\int_{0}^{1}dx\ \frac{x^{\frac{q}{2}+j-1}}{\left(1-\sin^{2}( \varepsilon/R)x\right)^{j-\frac{n-1}{2}}}\] \[= \frac{(\sin(\varepsilon/R))^{q+2j}}{q+2j}{}_{2}F_{1}\left(j+ \frac{q}{2},j-\frac{n-1}{2};j+\frac{q}{2};\sin^{2}\frac{\varepsilon}{R}\right). \tag{1.16}\]
Thus,
\[\operatorname{Vol}_{S^{N}}(M_{\varepsilon})=\frac{2\pi^{\frac{q}{2}}}{\Gamma (\frac{q}{2})}\sum_{j=0}^{\lfloor n/2\rfloor}\frac{K_{2j}(M)R^{2j+q}(\sin( \varepsilon/R))^{q+2j}}{q(q+2)\cdots(q+2j)}{}_{2}F_{1}\left(j+\frac{q}{2},j- \frac{n-1}{2};j+\frac{q}{2};\sin^{2}\frac{\varepsilon}{R}\right). \tag{1.17}\]
Using that \(\sin x=x+o(x)\) and that \({}_{2}F_{1}(a,b;c;x^{2})=1+o(x)\) for \(x\to 0\), and that the volume of the discs on a sphere are the same as in the tangent space up to correction of order \(\varepsilon/R\), we get the proof of our assert.
This corroborates our previous discussion, showing that we can use formula (1.9) when the radius of the tube is small compared with the radius of curvature of the sphere. Notice that the factor containing the mean curvatures seems to suggest that one should look for submanifolds having large curvatures in order to look for concentration phenomena. However, it is well-known that in the case of spheres, the concentration is on equators, which are totally geodesic submanifolds with the property that the extrinsic curvature vanishes. Therefore, in this case, the Lipschitz-Killing curvatures are all zero, contradicting our intuition. Before explaining why this happens, let us see how things work in the more general case.
### The general case
Let \(M\) be a Riemannian submanifold of codimension \(q\) in a compact Riemannian manifold \(N\) of dimension \(q+n\). Let \(M_{\varepsilon}\) a tube of radius \(\varepsilon\) around \(M\). We can coordinatize the tube as follows. Fix a point \(p\) on \(M\) with local coordinates \(\bar{x}\). Consider the normal bundle \(\mathcal{N}M\) of \(M\) in \(\Sigma\) and let \(\hat{n}_{j}(\bar{x})\), \(j=1,\ldots,q\), an orthonormal basis of \(\mathcal{N}_{p}M\). We can introduce polar coordinates \(\theta_{a}\), \(a=1,\ldots,q-1\), and director cosines \(\omega^{j}(\bar{\theta})\) to parametrise the arbitrary direction orthogonal to \(M\) as \(n(\bar{\theta};\bar{x})=\sum_{j}\omega^{j}(\bar{\theta})\hat{n}_{j}(\bar{x})\). Let us consider the geodesic \(\gamma_{\bar{x},\bar{\theta}}(t)\) in \(N\) starting at \(t=0\) from \(p\) in the direction \(n(\bar{\theta};\bar{x})\), where \(t\) is the geodesic length parameter. The tube of radius \(\varepsilon\) is defined by all such geodesics for \(t\leq\varepsilon\). For \(\varepsilon\) small enough it is well defined. We use the coordinates \((\bar{x},\bar{\theta},t)\) in the tube. In order to compute the volume of the tube in the measure of \(N\), we need to compute the measure in the given local coordinates. We do it assuming that the coordinate \(\underline{x}\) cover the whole \(M\) up to a subset of vanishing measure. This is not a restriction since this hypothesis can be replaced by the introduction of a partition of unity. In \(p\equiv\bar{x}\), let us choose an orthonormal frame in \(T_{p}M\), say \(e_{a}\), \(a=1,\ldots,n\), to be used as a frame of Fermi along the normal geodesic \(\gamma_{\bar{x},\bar{\theta}}(t)\). The Jacobian we are interested in is
\[J=(\partial_{\bar{x}}\gamma_{\bar{x},\bar{\theta}};\partial_{\bar{\theta}} \gamma_{\bar{x},\bar{\theta}};\partial_{t}\gamma_{\bar{x},\bar{\theta}}). \tag{1.18}\]
It is clear that by construction \(\partial_{\bar{\theta}}\gamma_{\bar{x},\bar{\theta}};\partial_{t}\gamma_{\bar {x},\bar{\theta}}\) just provides the measure of the volume form of the disc generated by the normal geodesics from \(p\), say \(dVol_{D_{\bar{x}}}(\bar{\theta},t)\). Notice that for generic \(M\) it is expected to depend on \(\bar{x}\). The remaining contribution can be computed by employing the Jacobi equation w.r.t. the Fermi frame. It is (the tilde just means we are restricting to the directions of the Fermi frame)
\[\ddot{\bar{J}}_{ab}+\sum_{c}R(\dot{\gamma},e_{a},\dot{\gamma},e_{c})\tilde{J} _{cb}=0, \tag{1.19}\]
where the dot indicates derivative w.r.t. to \(t\). Here \(\gamma\equiv\gamma_{\bar{x},\bar{\theta}}\) and \(e_{a}\) are to be intended as Fermi transported along the geodesic. Because of this, we have
\[\frac{d}{dt}R(\dot{\gamma},e_{a},\dot{\gamma},e_{c})=(\nabla_{\dot{\gamma}}R)( \dot{\gamma},e_{a},\dot{\gamma},e_{c}). \tag{1.20}\]
Notice that for \(N\) compact the matrix \(R(\dot{\gamma},e_{a},\dot{\gamma},e_{c})\) is symmetric and positive definite. In general, the solution of equation (1.19) is completely determined by the Cauchy data \(\tilde{J}_{cb}|_{t=0}\), \(\dot{\tilde{J}}_{cb}|_{t=0}\). For example, deriving (1.19) in \(t=0\), we get
\[\dddot{\tilde{J}}_{ab}+\sum_{c}R(\dot{\gamma},e_{a},\dot{\gamma},e_{c})\dot{ \tilde{J}}_{cb}+\sum_{c}(\nabla_{\dot{\gamma}}R)(\dot{\gamma},e_{a},\dot{ \gamma},e_{c})\tilde{J}_{cb}=0, \tag{1.21}\]
and iterating this operation, one gets \(\left.\frac{d^{n}\tilde{J}_{ab}}{dt^{n}}\right|_{t=0}\) as a function of \(\tilde{J}_{ab}\), \(\dot{\tilde{J}}_{ab}\) and \(R(\dot{\gamma},e_{a},\dot{\gamma},e_{c})\) and all its covariant derivatives up to order \(n-2\) along the direction \(\dot{\gamma}\), in the point \(p\). More in general, we can write formally the solution in the form
\[\tilde{J}(\bar{x},\bar{\theta},t)=J_{0}(\bar{x},\bar{\theta})+\sum_{j=1}^{ \infty}(A_{j}J_{0}+B_{j}\dot{J}_{0})(\bar{x},\bar{\theta})\frac{t^{j}}{j!}, \tag{1.22}\]
where \(J_{0}\), \(\dot{J}_{0}\), \(A_{j}\), \(B_{j}\) are \(n\times n\) matrix-valued functions of \((\bar{x},\bar{\theta},t)\) defined by
\[J_{0}= \tilde{J}(\bar{x},\bar{\theta},0), \tag{1.23}\] \[\dot{J}_{0}= \frac{d\tilde{J}}{dt}(\bar{x},\bar{\theta},0),\] (1.24) \[(A_{1})_{ab}= 0,\qquad(B_{1})_{ab}=\delta_{ab},\] (1.25) \[A_{j+1}= \nabla_{\dot{\gamma}}(A_{j})_{ab}-\sum_{c}(B_{j})_{ac}R(\dot{ \gamma},e_{c},\dot{\gamma},e_{b}),\] (1.26) \[B_{j+1}= \nabla_{\dot{\gamma}}(B_{j})_{ab}+(A_{j})_{ab}. \tag{1.27}\]
Let us assume the analyticity condition that for any given \((\bar{x},\bar{\theta})\) the series (1.22) has a strictly positive convergence radius. Since \(N\) is compact and \(M\) is closed, then also \(M\) is compact and there is a minimum positive radius \(\tau\), such that (1.22) converges uniformly in any region \(t\leq\varepsilon<\tau\). We can fix such an \(\varepsilon\) to define the tube. Moreover, notice that \(\tilde{J}(\bar{x},\bar{\theta},0)\) determines the change of variables along \(M\), and its determinant does not depend on \(\bar{\theta}\), while
\[\dot{\tilde{J}}(\bar{x},\bar{\theta},0)_{ab}=\sum_{c=1}^{n}\sum_{s=1}^{q} \omega^{s}(\bar{\theta})K^{s}_{ac}(\bar{x})\tilde{J}(\bar{x},\bar{\theta},0) _{cb}, \tag{1.28}\]
where \(K^{j}_{ab}\) is the second fundamental form of the embedding of \(M\) along the direction \(n^{j}\) and \(\omega^{j}\) are the director cosines defined above. Therefore, we have proven the following proposition:
**Proposition 2**.: _Let \(M\) be a Riemannian submanifold of codimension \(q\) in a compact Riemannian manifold \(N\) of dimension \(q+n\). Let \(M_{\varepsilon}\) be a tube of radius \(\varepsilon\) around \(M\), coordinatized as above. Then, the volume element \(dV\) in the tube is_
\[dV=dVol_{D_{2}}(\bar{\theta},t)dVol_{M}(\bar{x})\det\left(I_{n}+\sum_{j=1}^{ \infty}\left(A_{j}(\bar{x},\bar{\theta})+B_{j}(\bar{x},\bar{\theta})\sum_{s=1 }^{q}\omega^{s}(\bar{\theta})K^{s}(\bar{x})\right)\frac{t^{j}}{j!},\right), \tag{1.29}\]
_where \(K^{s}\) is the symmetric matrix with components \(K^{s}_{ab}\), that is the second fundamental form along the normal direction \(s\), \(I_{n}\) is the \(n\times n\) identity matrix._
This very general formula is clearly of poor practical usage since for applications one needs to have control of the Riemann tensor and all its covariant derivatives. In any case, we can see that if we look for the concentration of the measure around \(M\), the main ingredients entering into the game are the extrinsic curvatures \(K^{s}(\bar{x})\) and the volume of \(M\). Of course, large values of the curvatures may amplify the last factor. However, if the sign of the curvatures is constant because we are looking for convex regions, then, large values of the curvatures may correspond to a small value of the volume of \(M\). For example, it is well known that the measure of the spheres concentrates on equators which are totally geodesic subvarieties, thus having zero extrinsic curvatures. This maximizes the volume of \(M\). This suggests that the best candidates for the concentration of the measure are totally geodesic subvarieties. The contribution of the curvatures should then be to maximize the dependence on \(t\) in \(t=0\) through the coefficients \(A_{j}\). However, it is quite hard to say more in the general case, both because it is not guaranteed the existence of totally geodesic subvarieties and because it is quite hard to have a uniform control on the coefficients \(A_{j}\) and \(B_{j}\). For these reasons, we now move to specific examples.
### Compact Symmetric spaces
We want to apply our general formula to the case of compact symmetric spaces \(\Sigma=G/H\), where \(G\) is a compact Lie group and \(H\) is a symmetrically embedded subgroup. The reason is that they are simple enough to allow for a very explicit calculation of the coefficients \(A_{j}\) and \(B_{j}\), and, at the same time, they contain several totally geodesic submanifolds. In
a sense, they are the simplest generalizations of \(S^{N}=SO(N+1)/SO(N)\). Here we consider the case where \(G\). is a simple group, but our construction can be extended to semisimple groups in an obvious way. We assume that the dimension of \(\Sigma\) is \(N=n+q\) while \(M\) is an \(n\) dimensional submanifold. \(\Sigma\) is endowed by a metric \(g_{ij}\) that is invariant under both the left and the right translations generated by \(G\). Since \(G\) is compact, the metric is induced by the Killing form of \(Lie(G)\), up to a (negative) constant. The corresponding Riemann tensor is covariantly constant. In particular, \(\Sigma\) is an Einstein manifold with Ricci tensor \(R_{ij}=\frac{S}{N}g_{ij}\), where the scalar curvature \(S\) is a constant. We have the following orthogonal decomposition
\[Lie(G)=Lie(H)\oplus Lie(H)^{\perp}. \tag{1.30}\]
The elements of \(Lie(H)\) act as infinitesimal isometries leaving fixed the points of \(\Sigma\), while the elements of \(Lie(H)^{\perp}\) generate translations. At each point \(p\) of \(M\) we can take of vectors \(\{\vec{n}_{1},\ldots,\vec{n}_{q}\}\) forming a basis of the normal space of \(T_{p}M\) in \(T_{p}\Sigma\). We can take \(\vec{n}_{j}\) as elements of \(Lie(H)^{\perp}\). The disc of radius \(a\) in \(T_{p}M^{\perp}\) defined by
\[D_{a}(p)=\{x_{1}\vec{n}_{1}+\cdots+x_{q}\vec{n}_{q}|x_{1}^{2}+\cdots+x_{q}^{2} \leq a^{2}\} \tag{1.31}\]
is mapped to a geodesic disc by the exponential map. The orthogonal geodesics from \(p\) are thus of the form
\[\gamma(t)=e^{t\sum_{j}v^{j}\vec{n}_{j}}\cdot p \tag{1.32}\]
where \(\cdot\) indicates the action of the elements of \(G\) on \(\Sigma\), \(\sum_{j}(v^{j})^{2}=1\), and the exponential is in the sense of groups. Then, \(\varepsilon\) is the geodesic distance of \(\gamma(a)\) from \(p\).
The main point now is that, since the Riemann tensor is covariantly constant, we have
\[(\nabla_{\dot{\gamma}}R)(\dot{\gamma},e_{a},\dot{\gamma},e_{c})=0. \tag{1.33}\]
Therefore, the matrix
\[R(\dot{\gamma},e_{a},\dot{\gamma},e_{c})=R(n(\bar{x};\bar{\theta}),e_{a},n( \bar{x};\bar{\theta}),e_{c})_{\bar{x}}=:A(\bar{x},\bar{\theta})_{ac}\]
is constant in \(t\) and we need just to evaluate it in the point \(p\). Moreover, \(\Sigma\) is compact so that the matrix \(A\) is symmetric and positive definite. Hence, it exists an orthogonal matrix \(\Omega(\bar{x},\bar{\theta})\) such that
\[A(\bar{x},\bar{\theta})=\Omega(\bar{x},\bar{\theta})D^{2}(\bar{x},\bar{\theta })\Omega(\bar{x},\bar{\theta})^{T}, \tag{1.34}\]
where \(D^{2}\) is a diagonal matrix with positive eigenvalues \(d_{\bar{j}}^{2}(\bar{x},\bar{\theta})\). If, for any given real function \(f\), we define \(f(Dt)\) as the diagonal matrix having \(f(d_{j}t)\) as diagonal elements, and \(f(\sqrt{A}t)=\Omega f(Dt)\Omega^{T}\), then we get for the volume element \(dV\) in the tube is given by the following
**Proposition 3**.: _Let \(M\) be a Riemannian submanifold of codimension \(q\) in a compact Riemannian symmetric manifold \(\Sigma\) of dimension \(q+n\). Let \(M_{\varepsilon}\) be a tube of radius \(\varepsilon\) around \(M\), coordinatized as above. Then, the volume element \(dV\) in the tube is_
\[dV=dVol_{D_{\bar{x}}}(\bar{\theta},t)dVol_{M}(\bar{x})\det\left(\cos\biggl{(} \sqrt{A(\bar{x},\bar{\theta})}t\biggr{)}+\frac{\sin\biggl{(}\sqrt{A(\bar{x}, \theta)}t\biggr{)}}{\sqrt{A(\bar{x},\theta)}}\sum_{s=1}^{q}\omega^{s}(\bar{ \theta})K^{s}(\bar{x})\right). \tag{1.35}\]
This is what we get by a direct application of (1.29) with a constant matrix \(R\). Notice that in general, the volume element of the disc depends on its center \(\bar{x}\), as well as the matrix \(A(\bar{x},\bar{\theta})\).
We now restrict further ourselves to the case when \(M\) is a totally geodesic submanifold of \(\Sigma\). In this case \(K^{s}(\bar{x})=0\). Since the sub-manifold \(\Sigma\) is totally geodetic, any two points in \(M\) are connected by a geodetic of \(\Sigma\) which is also a geodesic for \(M\). Moreover, \(\Sigma\) is symmetric, so it has covariantly constant
Riemann tensor (and metric, obviously). Then \(dVol_{D_{\bar{x}}}(\bar{\theta},t)\) and \(A(\bar{x},\bar{\theta})\) are independent on \(\bar{x}\) and the formula further reduces to
\[dV=dVol_{D}(\bar{\theta},t)dVol_{M}(\bar{x})\prod_{a=1}^{n}\cos \bigl{(}d_{a}(\bar{\theta})t\bigr{)}. \tag{1.36}\]
Notice that for a compact symmetric space, analyticity is guaranteed, and \(t\) can thus be extended so that the above parametrization covers the whole manifold up to a measure null set (the set of focal points). Therefore, we can prove the following proposition. We can now notice that the range is such that the cosine factors remain non-negative. In this case, we can notice that for \(x\in[0,\pi]\) one has
\[\cos x\leq e^{-\frac{x^{2}}{2}}, \tag{1.37}\]
from which we get that everywhere
\[0\leq\prod_{a=1}^{n}\cos\bigl{(}d_{a}(\bar{\theta})t\bigr{)}\leq e ^{-\frac{t^{2}}{2}\sum_{a=1}^{n}d_{a}^{2}(\bar{\theta})} \tag{1.38}\]
**Proposition 4**.: _Let \((M_{n},\Sigma_{n})\) be a family, labeled by \(n\), of \(n\)-dimensional totally geodesic submanifolds \(M_{n}\) of symmetric spaces \(\Sigma_{n}\) of dimension \(n+1\) and constant diameter. Let \(M_{n}^{\varepsilon_{n}}\) be the tube of geodesic radius \(\varepsilon_{n}\) centered in \(M_{n}\). Finally, let \(m_{n}\) the Riemannian measure over \(\Sigma_{n}\) normalized so that \(\mu_{n}(\Sigma_{n})=1\). If \(\lim_{n\to\infty}\sqrt{n}\varepsilon_{n}=\infty\) then_
\[\lim_{n\to\infty}\mu(\Sigma-M_{n}^{\varepsilon_{n}})=0. \tag{1.39}\]
Proof.: Suppose we consider the measure (1.36) normalised to \(1\). We also assume that \(\Sigma_{n}\) has diameter \(L\). We need to consider
\[\lim_{n\to\infty}\int_{\Sigma_{n}}dVol_{D}(\bar{\theta},t)dVol_{M} (\bar{x})f_{n} \tag{1.40}\]
where
\[f_{n}=\prod_{a=1}^{n}\cos\bigl{(}d_{a}(\bar{\theta})t\bigr{)} \chi_{(\Sigma-M_{n}^{\varepsilon_{n}})}, \tag{1.41}\]
and \(\chi_{E}\) is the characteristic function of the set \(E\). Since the codimension of \(\Sigma\) is \(1\), we have that
\[\sum_{a=1}^{n}d_{a}^{2}(\bar{\theta})=\sum_{a=1}^{n}R(n(\bar{x}; \bar{\theta}),e_{a},n(\bar{x};\bar{\theta}),e_{c})_{\bar{x}}=Ric(n(\bar{x}; \bar{\theta}),n(\bar{x};\bar{\theta})),\]
where \(Ric\) is the Ricci tensor of \(M\). It is known that for a symmetric manifold of constant diameter and dimension \(s\), the Ricci tensor has the form \(R=(as+b)g\), \(g\) being the invariant metric and \(a>0\) and \(b\) constants independent on \(s\). For example, this follows easily from the calculations in [10] and [11]. Applied to our case and using (1.38) this shows that
\[f_{n}\leq e^{-\frac{t^{2}}{2}(an+b)}\chi_{(\Sigma-M_{n}^{\varepsilon_{n}})} \tag{1.42}\]
for some constants \(a>0\) and \(b\). Since the integral is extended in \(t\geq\varepsilon_{n}\), we get that the integrand goes to zero uniformly at worst as \(e^{-\frac{a}{2}\varepsilon_{n}^{2}n}\) when \(n\) diverges. Since \(\lim_{n\to\infty}\varepsilon_{n}^{2}n=0\), the assert is proved.
**Remark:** The assumption for the codimension to be \(1\) has been made to keep the proof technically simple. We believe that the same proposition is true for constant codimension \(q>1\) and we expect it to hold also for codimension \(q_{n}\), if it doesn't grow too fast with \(n\). We leave the investigation of this point for future work.
## 2. Characterization through wasserstein and box distance
Let \((M,\mu,g)\) be a Riemann manifold and \(C\subseteq M\) a submanifold. The projection map \(proj_{C}:M\to C\) is defined as a map such that \(d(x,proj_{C}(x))=d(x,C)\), where \(d\) is the geodetic distance associated with \(g\). For the existence and, more in general, a theory of distance functions in \(\mathbb{R}^{n}\) context see e.g. [CanSin]; for an extension of this theory to a Riemannian context see e.g. [ManteMen], [Fath].
**Proposition 5**.: _Let \(((N_{n},\mu_{n},g_{n}),M_{n})\) a sequence of Riemannian manifolds, with \(M_{n}\subsetneq N_{n}\) Riemannian submanifold with endowed measure \(\#proj_{N_{n},M_{n}}\mu_{n}\). Let \(d_{W_{2}}\) be the Wasserstein distance of order 2. If \(d_{W_{2}}(\mu_{n},\#proj_{N_{n},M_{n}}\mu_{n})\to 0\) then \(M_{n}\) is a concentration locus for \(N_{n}\)_
Proof.: If \(\pi_{n}\) is the geodesic projection on \(M_{n}\) and \(d_{n}\) the geodesic distance generated by the invariant metric \(g_{n}\), then, the cost to transport the mass \(m_{n}=\mu_{n}(N_{n}\setminus M_{n}^{\varepsilon_{n}})\) in terms of \(d_{W_{2}}^{2}\) is at least
\[\int_{N_{n}\setminus M_{n}^{\varepsilon_{n}}}d_{n}^{2}(x,\pi_{n}(x))d\mu_{n}( x)>\varepsilon_{n}^{2}m_{n},\]
for \(M_{n}^{\varepsilon_{n}}\) a tubular neighbourhood of radius \(\varepsilon_{n}\) of \(M_{n}\). In particular, for any fixed choice \(\varepsilon_{n}=\varepsilon>0\), since \(d_{W_{2}}(\mu_{n},\#proj_{N_{n},M_{n}}\mu_{n})\to 0\), we get that \(m_{n}\to 0\).
Proposition 5 cannot be reversed. Indeed, in case of a concentration locus \(M_{n}\subseteq N_{n}\), if the diameter of \(N_{n}\) is unbounded \(d_{W}\) could not converge to 0.
Nevertheless the following holds.
**Proposition 6**.: _Let \((N_{n},M_{n})\) a sequence as in the previous Proposition such that volumes and diameters of \(N_{n}\) are bounded by \(h>0\). If \(M_{n}\) is a concentration locus for \(N_{n}\) then for all \(n\in\mathbb{N}\)\(d_{W_{1}}(\mu_{n},\#proj\mu_{n})\to 0\)_
Proof.: By contradiction, suppose that for infinitely many \(n\)\(d_{W_{1}}(\mu_{n},\#proj\mu_{n})\geq k>0\). Since the distance function determines the optimal transport through the project function, \(\int_{N_{n}}d_{n}(x,\pi_{n}(x))d\mu_{n}(x)\geq k\), where \(\pi_{n}\) is the geodetic projection over \(M_{n}\) and \(d_{n}\) the geodetic distance generated by the invariant metric \(g_{n}\) on \(N_{n}\). Since \(M_{n}\) is a concentration locus for \(N_{n}\) we can set \(n^{\prime}\) in such a way for all \(n>n^{\prime}\)\(\epsilon_{n},\varepsilon_{n}<\frac{k}{4h}\) and, \(\mu_{n}(N_{n}\setminus M_{n}^{\varepsilon_{n}})<\epsilon_{n}\). Therefore,
\[k \leq\int_{N_{n}\setminus M_{n}^{\epsilon_{n}}}d_{n}(x,\pi_{n}(x) )d\mu_{n}(x)+\int_{M_{n}^{\epsilon_{n}}}d_{n}(x,\pi_{n}(x))d\mu_{n}(x)\] \[\leq\int_{N_{n}\setminus M_{n}^{\epsilon_{n}}}hd\mu_{n}(x)+\int _{M_{n}^{\epsilon_{n}}}\frac{k}{4h}d\mu_{n}(x)\leq 2h\frac{k}{4h}=\frac{k}{2},\]
which is a contradiction.
**Remark 1**.: _Observe that convergence in Wasserstein distance (Strassen's Theorem [Shi] implies Prohorov distance convergence) implies box distance convergence (Proposition 4.12 [Shi]), which in turn implies \(d_{conc}\) convergence (Proposition 5.5 (2) [Shi])._
_Therefore, let \(M\) be a complete separable metric space and (\(\mu_{i}\)) a sequence of Borel probability measures on \(M\). Consider the following three conditions: (1) \(\mu_{i}\) converges weakly to \(\mu\). (2) \((M,\mu_{i})\) box-converges to \((M,\mu)\). (3) \((M,\mu_{i})\)\(d_{conc}\)-converges (or concentrates) to \((M,\mu)\)._
_Then, the following implications hold: (1) \(\Rightarrow\) (2) \(\Rightarrow\) (3)._
_A counterexample of (2) \(\Rightarrow\) (1) is easy. Just take a sequence \(x_{i}\) in \(M\) and let \(\mu_{i}\) be the Dirac measure at \(x_{i}\). Then, all \((M,\mu_{i})\) are mm-isomorphic to each other, so that (2) holds. However (1) does not hold if \(x_{i}\) does not converge in \(M\)._
_A counterexample of (3) \(\Rightarrow\) (2) is the sequence of unit spheres \(S^{n}(1)\) with dimension \(n\) going to infinity. \(S^{n}(1)\)\(d_{conc}\)-converges to one-point space, but it is divergent for box-distance (see Cor. 5.20 [Shi])._
**Theorem 2**.: _Let \((N_{n},\mu_{n},g_{n})\) a sequence of Riemann manifolds with haar measure \(\mu_{n}\), geodesic distance \(d_{n}\) generated by the invariant metric \(g_{n}\) (they are in particular mm-spaces), and \(M_{n}\subseteq N_{n}\) sub manifolds with measures \(\sigma_{n}=\#proj\mu_{n}\) and metrics \(g^{\prime}_{n}=g_{n}|_{M_{n}}\). Provided that \(d_{box}((N_{n},\mu_{n}),(M_{n},\sigma_{n}))\to 0\) and assuming \((M_{n},\sigma_{n},g^{\prime}_{n})\rightarrow_{d_{conc}}(M,\sigma,g^{\prime})\) then \((N_{n},\mu_{n},g_{n})\rightarrow_{d_{conc}}(M,\sigma,g^{\prime})\)._
Proof.: By Proposition 5.5 [Shi], \(d_{conc}((N_{n},\mu_{n},g_{n}),(M_{n},\sigma_{n},g^{\prime}_{n}))\ \leq\ d_{box}((N_{n},\mu_{n},g_{n}),(M_{n},\sigma_{n},g^{\prime}_{n}))\). Hence, \(d_{conc}((N_{n},\mu_{n},g_{n}),(M_{n},\sigma_{n},g^{\prime}_{n}))\to 0\).
The following chain of inequalities
\[d_{conc}((N_{n},\mu_{n},g_{n}),(M,\sigma,g))\leq d_{conc}((N_{n},\mu_{n},g_{n} ),(M_{n},\sigma_{n},g^{\prime}_{n}))\ +\ d_{conc}((M_{n},\mu_{n},g_{n}),(M,\sigma,g))\]
plainly drives to the thesis.
**Remark 2**.: _The above Theorem holds also with the hypothesis \(d_{W_{1}}(\mu_{n},\sigma_{n})\to 0\). Indeed, we can consider \(\sigma_{n}\) as a measure \(\sigma^{\prime}_{n}\) on \(N_{n}\) having support \(spt(\sigma^{\prime}_{n})=M_{n}\) where it is equal to \(\sigma_{n}\). Without loss in generality, we can consider \((N_{n},\sigma^{\prime}_{n},g_{n})\) equal, as mm-spaces, to \((M_{n},\sigma_{n},g^{\prime}_{n})\), since they are mm-isomorphic. Indeed, two mm-spaces are mm-isomorphic if there exists an isometry between their supports of their respective measures (see [1] p.117)._
Now the following Corollary follows
**Corollary 1**.: _Let \((N_{n},M_{n})\) a sequence of Riemannian manifolds \(M_{n}\subsetneq N_{n}\) such that volumes and diameters of \(N_{n}\) are bounded by \(h\in\mathbb{R}\). If \(M_{n}\) is a concentration locus for \(N_{n}\) and \((M_{n},\sigma_{n},g^{\prime}_{n})\rightarrow_{d_{conc}}(M,\sigma,g^{\prime})\) then \((N_{n},\mu_{n},g_{n})\rightarrow_{d_{conc}}(M,\sigma,g^{\prime})\)._
Now we move towards a \(d_{box}\)-characterization.
**Proposition 7**.: _Let \(M_{n}\) be a concentration locus for \(N_{n}\), \(M_{n}\) totally geodesic submanifolds, with the properties that the geodesic distance on \(M_{n}\) is the same as in \(N_{n}\). Then, \(d_{box}(N_{n},M_{n})\) converges to 0._
Proof.: Let us first observe that if \(M_{n}^{\epsilon_{n}}\) is the tubular neighbourhood of radius \(\epsilon_{n}\), then, for \(s,t\in M_{n}^{\epsilon_{n}}\) we have
\[|d_{n}(s,t)-d^{\prime}_{n}(proj_{M_{n}}(s),proj_{M_{n}}(t))|\leq O(\epsilon_{ n}),\]
where \(d^{\prime}_{n}\) is the geodesic distance in \(M_{n}\). Indeed, inside the tube by elementary distance inequalities \(|d_{n}(s,t)-d_{n}(proj_{M_{n}}(s),proj_{M_{n}}(t))|\leq 2\epsilon\), and since by hypothesis \(d_{n}(proj_{M_{n}}(s),proj_{M_{n}}(t))=d^{\prime}_{n}(proj_{M_{n}}(s),proj_{M_ {n}}(t))\).
Now define two parameters, \(\phi,\psi\) for \(N_{n}\) and \(M_{n}\), respectively, \(\phi:[0,\epsilon_{n})\to N_{n}\setminus M_{n}^{\epsilon_{n}}\) and \(\phi:[\epsilon_{n},1)\rightarrow M_{n}^{\epsilon_{n}}\), where \(\epsilon_{n}\) is chosen so that \(\mu_{n}(N_{n}\setminus M_{n}^{\epsilon_{n}})\leq\epsilon_{n}\). This is always possible, since if we have two sequences \(a_{n}\) and \(b_{n}\) positive, converging to zero and such that \(\mu_{n}(N_{n}\setminus M_{n}^{a_{n}})\leq b_{n}\), then we can choose \(\epsilon_{n}=\max\{a_{n},b_{n}\}\). Let \(\psi:=proj_{M_{n}}\circ\phi\).
From the above inequality, it follows:
\[|d_{n}(\phi(s),\phi(t))-d^{\prime}_{n}(\psi(s),\psi(t))|\leq O(\epsilon_{n}).\]
This implies the thesis.
Combining Proposition 7 and Theorem 2 we get the following Corollary
**Corollary 2**.: _Let \((N_{n},\mu_{n},g_{n})\) be a sequence of mm-spaces, with \(M_{n}\) totally geodesic and with the same geodesic distance as in \(N_{n}\). Suppose that \(M_{n}\) define a concentration locus for \(N_{n}\) with a sequence of radii \(\epsilon_{n}\to 0\), and \((M_{n},\sigma_{n},g^{\prime}_{n})\rightarrow_{d_{conc}}(M,\sigma,g^{\prime})\). Then, \((N_{n},\mu_{n},g_{n})\rightarrow_{d_{conc}}(M,\sigma,g^{\prime})\)._
In [10], by using the Macdonald formula [11], it is shown that \(SU(n),Spin(n),Usp(n)\) have all totally geodesic concentration loci.
Observe that even under hypothesis \(d_{box}(N_{n},H_{n})\to 0\), for any \(N_{n}\in SO(n),SU(n),Spin(n)\), by Corollary 5.20 [22], all of them don't determine \(d_{box}\) divergent sequences.
Observe that Corollary 2 applies even if \(N_{n}\) is not a Levy family. In this case, the sequence concentrates anyway to \(M\) which is not necessarily a point. This unveils that the concentration phenomenon is far from being exhausted by the Levy families. In particular, if \(N_{n}\) are topological groups, by a Schneider result [12], \(M\) should be not only the concentration set but also an \(N\)-invariant subspace of \(S(N)\), where \(N\) is a second-countable topological group completion of \(\bigcup N_{n}\), and \(S(N)\) is the Samuel compactification of \(N\). If \(M\) is minimal, then it is the universal minimal flow of \(N\). Since concretely describable universal minimal flows are rather rare, this could be a way to construct them. These constructions can find interesting applications to the sequences of \(U(N)\) with different rescaled geometries.
## Acknowledgments
We are extremely grateful to Alessio Figalli for his insightful suggestions and precise remarks. We also thank Carlo Mantegazza and Takeshi Shioya for some helpful discussions.
|
2309.03878 | On generalized corners and matrix multiplication | Suppose that $S \subseteq [n]^2$ contains no three points of the form $(x,y),
(x,y+\delta), (x+\delta,y')$, where $\delta \neq 0$. How big can $S$ be?
Trivially, $n \le |S| \le n^2$. Slight improvements on these bounds are
obtained from Shkredov's upper bound for the corners problem [Shk06], which
shows that $|S| \le O(n^2/(\log \log n)^c)$ for some small $c > 0$, and a
construction due to Petrov [Pet23], which shows that $|S| \ge \Omega(n \log
n/\sqrt{\log \log n})$.
Could it be that for all $\varepsilon > 0$, $|S| \le O(n^{1+\varepsilon})$?
We show that if so, this would rule out obtaining $\omega = 2$ using a large
family of abelian groups in the group-theoretic framework of Cohn, Kleinberg,
Szegedy and Umans [CU03,CKSU05] (which is known to capture the best bounds on
$\omega$ to date), for which no barriers are currently known. Furthermore, an
upper bound of $O(n^{4/3 - \varepsilon})$ for any fixed $\varepsilon > 0$ would
rule out a conjectured approach to obtain $\omega = 2$ of [CKSU05]. Along the
way, we encounter several problems that have much stronger constraints and that
would already have these implications. | Kevin Pratt | 2023-09-07T17:41:56Z | http://arxiv.org/abs/2309.03878v1 | # On generalized corners and matrix multiplication
###### Abstract
Suppose that \(S\subseteq[n]^{2}\) contains no three points of the form \((x,y),(x,y+\delta),(x+\delta,y^{\prime})\), where \(\delta\neq 0\). How big can \(S\) be? Trivially, \(n\leq|S|\leq n^{2}\). Slight improvements on these bounds are obtained from Shkredov's upper bound for the corners problem [10], which shows that \(|S|\leq O(n^{2}/(\log\log n)^{c})\) for some small \(c>0\), and a construction due to Petrov [14], which shows that \(|S|\geq\Omega(n\log n/\sqrt{\log\log n})\).
Could it be that for all \(\varepsilon>0\), \(|S|\leq O(n^{1+\varepsilon})\)? We show that if so, this would rule out obtaining \(\omega=2\) using a large family of abelian groups in the group-theoretic framework of [13, 1] (which is known to capture the best bounds on \(\omega\) to date), for which no barriers are currently known. Furthermore, an upper bound of \(O(n^{4/3-\varepsilon})\) for any fixed \(\varepsilon>0\) would rule out a conjectured approach to obtain \(\omega=2\) of [1]. Along the way, we encounter several problems that have much stronger constraints and that would already have these implications.
## 1 Introduction
The exponent of matrix multiplication \(\omega\) is the smallest number such that for any \(\varepsilon>0\), there exists an algorithm for multiplying \(n\times n\) matrices using \(O(n^{\omega+\varepsilon})\) arithmetic operations. Since Strassen's initial discovery that \(\omega<3\)[12], there has been much work on understanding this fundamental constant, with the end goal being the determination of whether or not \(\omega=2\). It is currently known that \(2\leq\omega<2.3716\)[15].
The best upper bounds on \(\omega\) obtained since 1987 [16] can be understood as solutions to the following hypergraph packing problem. Let \(M_{n}\) be the _matrix multiplication hypergraph_, the tripartite 3-uniform hypergraph with parts \(X_{1}=X_{2}=X_{3}=[n]^{2}\), and where \(((i,j),(k,l),(m,n))\in X_{1}\times X_{2}\times X_{3}\) is a hyperedge if and only if \(j=k,l=m,n=i\). Given an abelian group \(G\), let \(X_{G}\) be its "addition hypergraph" with vertex sets \(G\sqcup G\sqcup G\), and where \((a_{1},a_{2},a_{3})\in G\times G\times G\) is a hyperedge exactly when \(a_{1}+a_{2}+a_{3}=0\). Suppose that \(X_{G}\) contains \(k\) disjoint induced copies of \(M_{n}\). Then
\[\omega<\log_{n}(|G|/k). \tag{1}\]
Phrased in terms of the group-theoretic approach proposed by Cohn and Umans [13] and further developed by Cohn, Kleinberg, Szegedy, and Umans [1], this is equivalent to proving upper bounds on \(\omega\) via _simultaneous triple product property_ (STPP) constructions in abelian groups. The above inequality was established in [1, Theorem 5.5]. It can be also be deduced via the _asymptotic sum inequality_ of [12].
From this perspective, the best bounds on \(\omega\) to date are obtained by taking \(G\) to be a large power of a cyclic group -- specifically, \(\mathbb{Z}_{7}^{\ell}\) with \(\ell\to\infty\). However, in [1]
ideas related to the resolution of the cap-set problem in additive combinatorics [1] were used to show that one cannot obtain \(\omega=2\) using groups of _bounded exponent_ -- such as \(\mathbb{Z}_{7}^{\ell}\) -- via this approach. This obstruction is due to the fact that when \(G\) has bounded exponent, there is power-savings over the trivial upper bound on the size of the largest induced matching in \(X_{G}\) (also called a _3-matching_[15], or a _tricolored sum-free set_[1]). For example, when \(G=\mathbb{Z}_{7}^{\ell}\) the largest induced matching has size at most \(O(6.16^{\ell})\). On the other hand, \(M_{n}\) contains an induced matching of size \(n^{2-o(1)}\): if we identify vertices in \(M_{n}\) with edges in the complete tripartite graph \(K_{n,n,n}\), an induced matching in \(M_{n}\) corresponds to a tripartite graph on at most \(3n\) vertices where every edge is contained in a unique triangle, and the number of vertices in the induced matching equals the number of edges in this graph. A well-known construction in extremal combinatorics yields such a graph with \(n^{2-o(1)}\) edges (see [11, Corollary 2.5.2])1 and hence \(M_{n}\) contains an induced matching of size \(n^{2-o(1)}\). Modulo minor details, the claimed barrier then follows, as an efficient packing of copies of \(M_{n}\) into \(X_{G}\) would imply the existence of a large induced matching in \(X_{G}\), a contradiction.2
Footnote 1: This is the Rusza-Szemeredi problem. The equivalence between induced matchings in \(M_{n}\) and this problem was independently noted in [1].
Footnote 2: The techniques involved in the resolution of the cap–set problem (in particular, slice rank) actually give stronger “tensor analogues” of this barrier; see [12, 13].
This is the only obstruction to obtaining \(\omega=2\) via the use of Equation (1) that we are aware of. Unfortunately,3 this barrier says nothing about the viability of general abelian groups, as their addition hypergraphs may contain large induced matchings. For example, if \(A\) is a 3-term arithmetic progression free (hereon abbreviated to 3AP-free) subset of \(G\), then the subsets \(A,A,-2A\) of the vertex sets of \(X_{G}\) induce a matching of size \(|A|\). Hence this barrier cannot apply to any group containing a 3AP-free subset of size \(|G|^{1-o(1)}\), such as \(\mathbb{Z}_{n}\)[1]. Could one achieve \(\omega=2\) using cyclic groups, or perhaps products of cyclic groups of growing orders?
Footnote 3: Or fortunately, for the optimist.
In this paper we identify problems in additive combinatorics whose answer we conjecture would rule out obtaining \(\omega=2\) using a large family of abelian groups for which the induced matching barrier is irrelevant. This family includes abelian groups with a bounded number of direct factors -- the "opposite" condition of that of having bounded exponent. These problems have not been studied before as far as we are aware. Aside from their connections to fast matrix multiplication, we find them intrinsically interesting. We now discuss the simplest-to-state such problem.
### A skew corners problem
The _corners problem_ in additive combinatorics asks for the size of the largest subset of \([n]^{2}\) containing no three points of the form
\[(x,y),(x,y+\delta),(x+\delta,y)\]
where \(\delta\neq 0\). Ajtai and Szemeredi [1] settled this problem up to factors of \(n^{o(1)}\) by proving an upper bound of \(o(n^{2})\) and a lower bound of \(n^{2-o(1)}\). This problem is significant as it was the first multidimensional case of Szemeredi's theorem to be established, and for its application to the number-on-forehead model in communication complexity [11].
Here is a subtle strengthening of the condition of the corners problem for which we know essentially nothing:
**Question 1.1**.: What is the size of the largest \(S\subseteq[n]^{2}\) which does not contain three points of the form
\[(x,y),(x,y+\delta),(x+\delta,y^{\prime})\]
with \(\delta\neq 0\)?
That is, not only must \(S\) avoid all corners, but given any two points in \(S\) lying on the same vertical line, the _entire vertical line_ passing through the third point that would form a corner with these two points must be absent from \(S\)! Naturally, we call such a set of points _skew corner-free_. See Figure 1 for an example of such a set.
Note that there is a trivial lower bound of \(n\), obtained by taking \(S\) to be all points lying on a vertical or horizontal line. We conjecture that this is almost optimal:
**Conjecture 1.2**.: _Fix any \(\varepsilon>0\). If \(S\) is skew corner-free, then \(|S|\leq O(n^{1+\varepsilon})\)._
A construction due to Petrov [10] (Proposition 4.16) shows that one can have \(|S|\geq\Omega(n\log n/\sqrt{\log\log n})\). On the other hand, the best upper bound we know is \(O(n^{2}/(\log\log n)^{0.0137\cdots})\), which follows immediately from Shkredov's upper bound on the corners problem [11].
Two of the main results of this paper are the following.
**Theorem 1.3**.: _If Conjecture 1.2 is true, then one cannot obtain \(\omega=2\) via STPP constructions in the family of groups \(\mathbb{Z}_{q}^{\ell}\), where \(q\) is a prime power._
Furthermore, a weakening of Conjecture 1.2 would rule out obtaining \(\omega=2\) using a specific type of STPP construction in arbitrary abelian groups. In [13], it was conjectured that this type of construction can be used to obtain \(\omega=2\).
**Theorem 1.4**.: _If the largest skew corner-free subset of \([n]^{2}\) has size \(O(n^{4/3-\varepsilon})\) for some \(\varepsilon>0\), then [13, Conjecture 4.7] is false._
In fact, seemingly much weaker conjectures than Conjecture 1.2 would already have these implications. The weakest conjecture we make is the following. Let \(\Delta_{n}\) be a triangular array of \(n(n+1)/2\) points. Suppose that we delete from \(\Delta_{n}\) sets of points lying on lines parallel to the sides of this array, such that the remaining set of points does not contain any equilateral trapezoid with sides parallel to the sides of the array (see Figure 4). For example, we might delete all lines in one direction but one. Then, what
Figure 1: The orange points form a skew corner-free subset of \([10]\times[10]\) of size \(24\). This is largest possible.
is the maximum number of points that can remain? By our example, one can achieve at least \(n\). We conjecture that this is essentially optimal (Conjecture 4.1). Another condition we introduce, which is intermediate between this and being skew-corner free, is that of a skew corner-free subset of a triangular grid (see Figure 2).
### Paper overview
In Section 2 we review the group-theoretic approach of [13, 14]. In Section 2.1 we record a very weak lower bound for this approach, which follows easily from the removal lemma in groups of [15]. This lower bound becomes much stronger in \(\mathbb{Z}_{q}^{\ell}\) (Corollary 2.10), thanks to the improved bounds on the removal lemma of [10], and we make later use of this fact.
In Section 3 we note that the matrix multiplication hypergraph \(M_{n}\) is an extremal solution to a certain forbidden hypergraph problem. This was our motivating observation. We define the "value" of a group, \(\operatorname{val}(G)\), which captures this forbidden hypergraph problem in a group-theoretic context. This quantity equals the maximum number of triangles in an induced subhypergraph of \(X_{G}\) that does not contain the trifore hypergraph or a cycle of \(4\) triangles (see Figure 3). This can also be expressed in terms of the group operation slightly awkwardly (Definition 3.2). The trivial bounds are that \(|G|\leq\operatorname{val}(G)\leq|G|^{3/2}\); using the removal lemma of [15], the upper bound can be improved to \(o(|G|^{3/2})\) (Proposition 3.7). STPP constructions yield lower bounds on the quantity \(\operatorname{val}(G)\) (Proposition 3.3), so ultimately it is upper bounds on \(\operatorname{val}(G)\) that we are interested in as a means towards barriers. The quantity \(\operatorname{val}(G)\) is super-multiplicative under direct product (Proposition 3.5), which is one reason why power-improvements over the trivial bound seem to be easier to obtain in direct products of groups.
We then focus on the case of abelian groups in Section 4. We show that a bound of
Figure 2: The 90 orange points form a skew corner-free subset of the triangular grid \(\Delta_{45}\) (Definition 4.12): for any two orange points on the same line parallel to one of the sides of the grid, the line parallel to this side and passing through a third point that would form an equilateral triangle with these two points contains no orange points. This is largest-possible among subsets of \(\Delta_{45}\) that are symmetric under the \(S_{3}\) action on \(\Delta_{n}\).
\(\omega=2\) using the family of groups \(\mathbb{Z}_{q}^{\ell}\) would imply that \(\operatorname{val}(\mathbb{Z}_{n})\geq\Omega(n^{1+c})\) for some \(c>0\) Theorem 4.4. We also show that a proof of \(\omega=2\) via _simultaneous double product property_ constructions [10] in any family of abelian groups would imply that \(\operatorname{val}(\mathbb{Z}_{n})\geq\Omega(n^{4/3-\varepsilon})\) for any given \(\varepsilon>0\) (Theorem 4.7). We thank Chris Umans for mentioning a related fact to us, which motivated this result. We then relate \(\operatorname{val}(\mathbb{Z}_{n})\) to various questions about sets of points in the plane, including Question 1.1 (Definitions 4.10, 4.12 and 4.14). This gives Theorems 1.3 and 1.4. We also give an example which shows that one cannot hope to prove strong upper bounds on \(\operatorname{val}(\mathbb{Z}_{n})\) via a certain "asymmetric" averaging argument (Proposition 4.8).
The take-away of this paper is that STPP constructions yield subsets of \(G\times G\) which satisfy dramatically stronger properties than that of being corner-free. While subsets satisfying these stronger properties do not imply STPP constructions in any obvious way, we believe that understanding them will be a stepping stone to understanding the power of the group-theoretic approach, and possibly towards improved upper bounds on \(\omega\).
## 2 Background
Bounds on \(\omega\) from the group-theoretic approach are obtained by designing subsets of groups satisfying the following condition.
**Definition 2.1**.: A collection of triples of subsets \(S_{i},T_{i},U_{i}\) of a group \(G\) satisfy the simultaneous triple product property (or STPP for short) if
1. For each \(i\), the sets \(S_{i},T_{i},U_{i}\) satisfy the _triple product property_: if \(ss^{\prime-1}tt^{\prime-1}uu^{\prime-1}=I\) with \(s,s^{\prime}\in S_{i},t,t^{\prime}\in T_{i},u,u^{\prime}\in U_{i}\), then \(s=s^{\prime},t=t^{\prime},u=u^{\prime}\).
2. Setting \(S_{i}=A_{i}B_{i}^{-1},T_{j}=B_{j}C_{j}^{-1},U_{k}=C_{k}A_{k}^{-1}\), \[s_{i}t_{j}u_{k}=I\iff i=j=k\] for all \(s_{i}\in S_{i},t_{j}\in T_{j},u_{k}\in U_{k}\).
The crucial fact is the following:
**Theorem 2.2**.: _[_10_, Theorem 5.5]_ _If \(S_{i},T_{i},U_{i}\subseteq G\) satisfy the STPP, then_
\[\sum_{i}(|S_{i}||T_{i}||U_{i}|)^{\omega/3}\leq\sum d_{i}^{\omega}\]
_where \(d_{i}\)'s are the dimensions of the irreducible representations of \(G\)._
The conditions of the STPP imply that the sets involved satisfy a simple "packing bound" (see the discussion preceding [1, Definition 2.3]).
**Proposition 2.3**.: _If \(S_{i},T_{i},U_{i}\) satisfy the STPP in a group \(G\), then \(\sum_{i}|S_{i}||T_{i}|\leq|G|\), \(\sum_{i}|T_{i}||U_{i}|\leq|G|\), and \(\sum_{i}|U_{i}||S_{i}|\leq|G|\)._
A particular type of STPP construction can be obtained from pairs of sets satisfying a condition termed the _simultaneous double product property_ in [10].
**Definition 2.4**.: We say that sets \((A_{i},B_{i})_{i=1}^{n}\) satisfy the simultaneous double product property (or SDPP for short) if
1. For all \(i\), \(aa^{\prime-1}=bb^{\prime-1}\) only has the solution \(a=a^{\prime},b=b^{\prime}\) for \(a,a^{\prime}\in A_{i},b,b^{\prime}\in B_{i}\),
2. \(a_{i}(a^{\prime}_{j})^{-1}b_{j}(b^{\prime}_{k})^{-1}=1\) implies \(i=k\), where \(a_{i}\in A_{i},a^{\prime}_{j}\in A_{j},b_{j}\in B_{j},b^{\prime}_{k}\in B_{k}\).
In [10] it was conjectured that one can achieve \(\omega=2\) using SDPP constructions in abelian groups. This amounts to the following.
**Conjecture 2.5**.: _[_10_, Conjecture 4.7]_ _For arbitrarily large \(n\), there exists an abelian group \(G\) of order \(n^{2-o(1)}\) and \(n\) pairs of sets \(A_{i},B_{i}\) where \(|A_{i}||B_{i}|>n^{2-o(1)}\) satisfy the SDPP._
If \(G\) is a finite group, we let \(X_{G}\) denote the tripartite \(3\)-uniform hypergraph with vertex parts \(X_{1}=X_{2}=X_{3}=G\), and where \((g_{1},g_{2},g_{3})\) is a hyperedge (a _triangle_) whenever \(g_{1}g_{2}g_{3}=I\). In the event that \(G\) is nonabelian, it is important that we fix some ordering on the parts of \(X_{G}\) here. Recall that a \(3\)-uniform hypergraph is said to be _linear_ if any two vertices are contained in at most one hyperedge. For example, \(X_{G}\) is linear. The matrix multiplication hypergraph \(M_{p,q,r}\) is defined to be the supporting hypergraph of the matrix multiplication tensor; i.e. it is the hypergraph with parts \([p]\times[q],[q]\times[r],[r]\times[p]\), and where \(((i,j),(k,l),(m,n))\) is a hyperedge if and only if \(j=k,l=m,n=i\). If \(X\) is a hypergraph, we sometimes write \(E(X)\) for the set of hyperedges of \(X\).
It is convenient to view STPP constructions from a hypergraph perspective.
**Proposition 2.6**.: _There exist sets \(S_{i},T_{i},U_{i}\subseteq G\), satisfying the STPP if and only if \(X_{G}\) contains as an induced subhypergraph the disjoint union of \(M_{|S_{i}|,|T_{i}|,|U_{i}|}\)._
Proof.: It follows from the first condition of the STPP that for all \(i\), the subhypergraph induced by \(A_{i}:=S_{i}T_{i}^{-1},B_{i}:=T_{i}U_{i}^{-1},C_{i}:=U_{i}S_{i}^{-1}\) equals \(M_{|S_{i}|,|T_{i}|,|U_{i}|}\). The second condition implies that \(A_{i}\) and \(A_{j}\) are disjoint when \(i\neq j\), and similarly for the subsets of the other parts. The second condition also implies that the only hyperedges in the subhypergraph induced by \(\sqcup_{i}A_{i},\sqcup_{i}B_{i},\sqcup_{i}C_{i}\) are between sets of the form \(A_{i},B_{i},C_{i}\), so the claim follows.
Conversely, suppose that \(\sqcup_{i}A_{i},\sqcup_{i}B_{i},\sqcup_{i}C_{i}\) induce disjoint hypergraphs \(M_{p_{i},q_{i},r_{i}}\). Fix some \(i\), and for shorthand write \(A:=A_{i},B:=B_{i},C:=C_{i}\) and let \(p:=p_{i},q:=q_{i},r:=r_{i}\). Since \(A,B,C\) induce \(M_{p,q,r}\), we can by definition write \(A=\{a_{ij}\}_{i\in[p],j\in[q]},B=\{b_{ij}\}_{i\in[q],j\in[r]},C=\{c_{ij}\}_{i \in[r],j\in[p]}\in G\) where
\[a_{ij}b_{kl}c_{mn}=I\iff j=k,l=m,n=i. \tag{2}\]
We claim that there exist \(X=\{x_{i}\}_{i\in[p]},Y=\{y_{j}\}_{j\in[q]},Z=\{z_{k}\}_{k\in[r]}\) such that \(a_{ij}=x_{i}y_{j}^{-1}\), \(b_{jk}=y_{j}z_{k}^{-1}\), \(c_{ki}=z_{k}x_{i}^{-1}\) for all \(i\in[p],j\in[q],k\in[r]\). This can be accomplished by taking \(x_{0}=1,x_{i}=a_{i0}a_{00}^{-1}\) for \(i>0\), \(y_{i}=a_{0i}^{-1},z_{i}=c_{i0}\). Furthermore, Equation (2) implies that \(X,Y,Z\) will satisfy the TPP. This shows that for each \(i\) there are \(X_{i},Y_{i},Z_{i}\) such that \(A_{1,i}=X_{i}Y_{i}^{-1},A_{2,i}=Y_{i}Z_{i}^{-1},A_{3,i}=Z_{i}X_{i}^{-1}\), and \(X_{i},Y_{i},Z_{i}\) satisfy the TPP. The fact that they induce a disjoint union of hypergraphs implies that if \(a\in A_{i},b\in B_{j},c\in C_{k}\), then \(abc=I\) implies \(i=j=k\), which implies the second condition in the definition of the STPP.
**Remark 2.7**.: The second direction of this proposition is essentially the fact that a complete \(2\)-dimensional simplicial complex has trivial \(1\)-cohomology with coefficients in any group.
### Triangle Removal and the Group-Theoretic approach
In [10], a nonabelian generalization of Green's arithmetic removal lemma [11] was shown to follow from the directed graph removal lemma of Alon and Shapira [1]. Specifically, they showed the following:
**Theorem 2.8**.: _Let \(G\) be a finite group of order \(N\). Let \(A_{1},\ldots,A_{m},m\geq 2\), be sets of elements of \(G\) and let \(g\) be an arbitrary element of \(G\). If the equation \(x_{1}x_{2}\cdots x_{m}=g\) has \(o(N^{m-1})\) solutions with \(x_{i}\in A_{i}\), then there are subsets \(A_{i}^{\prime}\subseteq A_{i}\) with \(|A_{i}\setminus A_{i}^{\prime}|=o(N)\) such that there is no solution of the equation \(x_{1}x_{2}\cdots x_{m}=g\) with \(x_{i}\in A_{i}^{\prime}\)._
The best quantitative bounds for this theorem are due to Fox [10], and imply that if there are at most \(\delta N^{m-1}\) solutions to \(x_{1}\cdots x_{m}=g\), one can remove subsets of \(A_{i}\) of size \(\varepsilon N\) and eliminate all solutions, when \(\delta^{-1}\) is a tower of twos of height \(O(\log\varepsilon^{-1})\).
Theorem 2.8 implies the following.
**Corollary 2.9**.: _If \(X_{i},Y_{i},Z_{i}\) satisfy the STPP in a group \(G\) of order \(n\), then at least one of \(\sum|X_{i}||Y_{i}|,\sum|X_{i}||Z_{i}|,\sum|Y_{i}||Z_{i}|\) is at most \(o(n)\)._
Proof.: Let \(A_{1}=\sqcup_{i}X_{i}Y_{i}^{-1},A_{2}=\sqcup_{i}Y_{i}Z_{i}^{-1},A_{3}=\sqcup_ {i}Z_{i}X_{i}^{-1}\). By definition of the STPP, the equation \(x_{1}x_{2}x_{3}=I\) with \(x_{i}\in A_{i}\) has \(\sum_{i}|X_{i}||Y_{i}||Z_{i}|\) solutions. By the packing bound Proposition 2.3, \(\sum_{i}|X_{i}||Y_{i}|\leq n,\sum_{i}|Y_{i}||Z_{i}|\leq n,\sum_{i}|Z_{i}||X_{i}|\leq n\), so by Cauchy-Schwarz there are at most \(n^{3/2}=o(n^{2})\) solutions to \(a_{1}a_{2}a_{3}=I\).
Now suppose that \(B_{j}\subseteq A_{j}\) satisfy \(|B_{j}|/|A_{j}|>0.9999\); we will show that there is a solution to \(b_{1}b_{2}b_{3}=I\). For more than a \(0.99\) fraction of the values of \(i\) we must have \(|B_{1}\cap X_{i}Y_{i}^{-1}|/|X_{i}Y_{i}^{-1}|>0.99\) (because \(0.99\cdot 1+0.01\cdot 0.99=0.9999\)) and similarly for the other sets. Hence by the pigeonhole principle there is some \(i\) for which \(|B_{1}\cap X_{i}Y_{i}^{-1}|/|X_{i}Y_{i}^{-1}|>0.99,|B_{2}\cap Y_{i}Z_{i}^{-1}| /|Y_{i}Z_{i}^{-1}|>0.99,|B_{3}\cap Z_{i}X_{i}^{-1}|/|Y_{i}Z_{i}^{-1}|>0.99.\) Now consider the tripartite graph with parts \(X_{i},Y_{i},Z_{i}\), where \((x,y)\) is an edge between \(X_{i}\) and \(Y_{i}\) if \(xy^{-1}\in B_{1}\cap X_{i}Y_{i}^{-1}\), \((y,z)\) is an edge between \(Y_{I},Z_{i}\) when \(yz^{-1}\in B_{2}\cap Y_{i}Z_{i}^{-1}\), and \((z,x)\) is an edge when \(zx^{-1}\in B_{3}\cap Z_{i}X_{i}^{-1}\). Note that the existence of a triangle in this graph implies that there is a solution to \(b_{1}b_{2}b_{3}=I\). First, note that at least \(0.9|X_{i}|\) vertices in \(X_{i}\) have at least \(0.9|Y_{i}|\) neighbors in \(Y_{i}\). (If this were not the case, there would be at most \(0.9|X_{i}||Y_{i}|+0.1\cdot 0.9\cdot|X_{i}||Y_{i}|\leq 0.99|X_{i}||Y_{i}|\) edges between \(X_{i}\) and \(Y_{i}\), and hence \(|B_{1}\cap X_{i}Y_{i}^{-1}|/|X_{i}Y_{i}^{-1}|\leq 0.99\), a contradiction.) Similarly, at least \(0.9|X_{i}|\) vertices in \(X_{i}\) have at least \(0.9|Z_{i}|\) neighbors in \(Z_{i}\). Hence at least \(0.8|X_{i}|\) vertices in \(X_{i}\) have \(0.9|Y_{i}|\) neighbors in \(Y_{i}\) and \(0.9|Z_{i}|\) neighbors in \(Z_{i}\). Pick any such vertex \(x_{0}\in X_{i}\). There must be an edge between a neighbor of \(x_{0}\) in \(Y_{i}\) and a neighbor of \(x_{0}\) in \(Z_{i}\), since if not, there would be at most \(|Y_{i}||Z_{i}|-0.9^{2}|Y_{i}||Z_{i}|=0.19|Y_{i}||Z_{i}|\) edges between \(Y_{i}\) and \(Z_{i}\). Thus we have found our triangle.
By Theorem 2.8, we can delete subsets of \(A_{i}\) of size \(o(n)\) to eliminate all solutions to \(x_{1}x_{2}x_{3}=I\). On the other hand, any three subsets of the \(A_{i}\)'s of density \(0.9999\) contain some such solution. Hence we must have \(|A_{i}|=o(n)\) for some \(i\).
As a corollary of this proof, we have the following.
**Corollary 2.10**.: _There exists an absolute constant \(C>1\) such that if \(X_{i},Y_{i},Z_{i}\) satisfy the STPP in \(\mathbb{Z}_{q}^{\ell}\), then at least one of \(\sum|X_{i}||Y_{i}|,\sum|X_{i}||Z_{i}|,\sum|Y_{i}||Z_{i}|\) is at most \((q/C)^{\ell}\)._
Proof.: The proof of Corollary 2.9 shows that \(A_{1}=\sqcup_{i}X_{i}Y_{i}^{-1},A_{2}=\sqcup_{i}Y_{i}Z_{i}^{-1},A_{3}=\sqcup_ {i}Z_{i}X_{i}^{-1}\) have the following properties: there are at most \(q^{3n/2}\) solutions to \(a_{1}+a_{2}+a_{3}=0\)
and any subsets of \(A_{1},A_{2},A_{3}\) of density \(0.9999\) each contain some such solution. At the same time, by [17, Theorem 1], if \(A_{1},A_{2},A_{3}\subseteq\mathbb{Z}_{q}^{\ell}\) and there are less than \(\delta q^{2n}\) solutions to \(a_{1}+a_{2}+a_{3}=0\), then we may remove \(\varepsilon q^{\ell}\) elements from \(A_{1}\cup A_{2}\cup A_{3}\) and eliminate all solutions, when \(\delta=(\varepsilon/3)^{\Theta(\log q)}\).4 In our setting, \(\delta=q^{-n/2}\) and so \(\varepsilon=3q^{\Theta(-n/\log q)}\leq 3C^{\prime-n}\) for some universal \(C^{\prime}\). Hence it must have been the case that one of \(A_{1},A_{2},A_{3}\) had size at most \((q/C)^{\ell}\) to begin with, for some universal \(C\).
Footnote 4: While [17, Theorem 1] is only stated for \(\mathbb{Z}_{p}^{\ell}\), it extends to \(\mathbb{Z}_{q}^{\ell}\) by the same argument via the use of [1, Theorem A’].
One can interpret Corollary 2.9 as saying that the best upper bound on the rank of a direct sum of matrix multiplication tensors provable via the group-theoretic approach is superlinear. We remark the only important property of the matrix multiplication hypergraph for this result was that it satisfies a very weak "regularity" condition. Specifically, considerations similar to those of the proof of Corollary 2.9 show the following:
**Theorem 2.11**.: _Let \(\varepsilon>0\). Let \(G\) be a group of order \(n\). Let \(X=\sqcup_{i=1}^{3}A_{i}\) be a tripartite hypergraph with \(o(n^{2})\) triangles such that for any \(Y_{i}\subseteq A_{i}\) with \(|Y_{i}|/n\geq 1-\varepsilon\), there exists \(y_{i}\in Y_{i}\) such that \((y_{1},y_{2},y_{3})\in E(X)\). Then if \(X\) is an induced subhypergraph of \(X_{G}\), \(|A_{i}|\leq o(n)\) for \(i=1,2,3\)._
## 3 Equilateral trapezoid-freeness in hypergraphs and groups
We begin with the observation that the matrix multiplication hypergraph is an extremal solution to a certain forbidden hypergraph problem.
**Proposition 3.1**.: _Let \(X\) be a linear tripartite hypergraph with parts of size \(N\) such that any two vertices from different parts are incident to at most one common vertex in the third part. Then the number of triangles in \(X\) is at most \(N^{3/2}\). Furthermore, when \(N\) is a square, an extremal example is the matrix multiplication hypergraph \(M_{N^{1/2}}\)._
The hypergraphs satisfying the condition of Proposition 3.1 can be equivalently characterized as the linear hypergraphs that do not contain copies of the hypergraphs in Figure 3. We remark that the proof of the upper bound in Proposition 3.1 is closely related to the upper bound on the Turan density of the \(4\)-cycle.
Proof.: Restricting our attention to one of the parts \(X_{1}\) of \(X\), let \(d_{v}\) be the number of triangles that vertex \(v\in X_{1}\) is contained in. Each \(v\in X_{1}\) is contained in \(d_{v}\) triangles, where the vertices of these triangles belonging to \(X_{2}\) and \(X_{3}\) are distinct (as \(X\) is linear). Additionally, no pair of such vertices in \(X_{2}\) and \(X_{3}\) can be contained in a triangle incident to another vertex \(u\in X_{1}\), so there are \(2\binom{d_{v}}{2}\) pairs of vertices in \(X_{2}\) and \(X_{3}\) that are contained in no common triangle. Let \((x_{2},x_{3})\) be some such pair of vertices. Observe that furthermore, for all \(u\neq v\in X_{1}\), the set of vertices in \(X_{2}\) and \(X_{3}\) incident to the set of triangles containing \(u\) cannot also contain both \(x_{2}\) and \(x_{3}\). For if this happened, there would be triangles \((v,x_{2},x_{3}^{\prime}),(v,x_{2}^{\prime},x_{3}),(u,x_{2},x_{3}^{\prime\prime }),(u,x_{2}^{\prime\prime},x_{3})\), and then \(x_{2}\) and \(x_{3}\) violate the constraint. The total number of triangles equals \(m:=\sum_{v\in X_{1}}d_{v}\), and by the prior observations it follows that \(\sum 2\binom{d_{v}}{2}+m\leq N^{2}\). So \(\sum d_{v}(d_{v}-1)+m=\sum d_{v}^{2}\leq N^{2}\). The conclusion follows from Cauchy-Schwarz.
To see that \(M_{N^{1/2}}\) is extremal, note that it contains \(N^{3/2}\) triangles, has parts of size \(N\), and is linear. To see that it satisfies the second condition, let \((i,j)\) be a vertex in the first part, and let \((k,l)\) be a vertex in the second part. Then \((i,j)\) is contained in a common triangle with exactly the vertices in the third part of the form \((*,i)\), and \((k,l)\) is incident to exactly the vertices in the third part of the form \((l,*)\). Hence \((l,i)\) is the unique neighbor of both. The same argument shows the claim for vertices in any two parts.
The key definition in this paper is that of an "equilateral trapezoid-free" triple of subsets of a group. The reason for this name will eventually be explained in Section 4.
**Definition 3.2**.: Let \(A,B,C\subseteq G\). We call \((A,B,C)\) equilateral trapezoid-free if the subhypergraph of \(X_{G}\) induced by \(A\subseteq X_{1},B\subseteq X_{2},C\subseteq X_{3}\) satisfies the conditions of Proposition 3.1. Equivalently, \((A,B,C)\) is equilateral trapezoid-free if for any fixed \(a^{\prime}\in A,b^{\prime}\in B,c^{\prime}\in C\), the following systems of equations in the variables \(a\in A,b\in B,c\in C\) each have at most one solution:
\[I =a^{\prime}bc=ab^{\prime}c,\] \[I =a^{\prime}bc=abc^{\prime},\] \[I =ab^{\prime}c=abc^{\prime}.\]
Let \(\operatorname{val}(G)\) be the maximum number of solutions to \(abc=I\) over all equilateral trapezoid-free triples \((A,B,C)\).
The relevance of \(\operatorname{val}(G)\) to \(\omega\) is due to the following.
**Proposition 3.3**.: _Suppose that \(X_{G}\) contains disjoint induced subhypergraphs \(M_{n_{i},m_{i},p_{i}}\). Then, \(\operatorname{val}(G)\geq\sum_{i}n_{i}m_{i}p_{i}\)._
Proof.: By the same reasoning as in the second part of the proof of Proposition 3.1, \(M_{n_{i},m_{i},p_{i}}\) satisfies the constraints of Definition 3.2 and contains \(n_{i}m_{i}p_{i}\) hyperedges. As the disjoint union of these hypergraphs satisfies these constraints as well, the claim follows.
In fact, STPP constructions are essentially the only approach we know of for proving lower bounds on \(\operatorname{val}(G)\).
To start, we have the following trivial bounds.
Figure 3: The forbidden hypergraphs in Proposition 3.1, up to permutations of the three parts (represented by different colors).
**Proposition 3.4**.: _For any group \(G\), \(|G|\leq\operatorname{val}(G)\leq|G|^{3/2}\)._
Proof.: The lower bound is obtained by the triple \((\{I\},G,G)\). The upper bound follows from Proposition 3.1.
The following super-multiplicative behavior of \(\operatorname{val}\) is easily checked.
**Proposition 3.5**.: _If \((A,B,C)\) is equilateral trapezoid-free in \(G\), and \((A^{\prime},B^{\prime},C^{\prime})\) is equilateral trapezoid-free in \(H\), then \((A\times A^{\prime},B\times B^{\prime},C\times C^{\prime})\) is a equilateral trapezoid-free in \(G\times H\)._
It is also easily seen that being equilateral trapezoid-free is preserved by cyclic permutations of the three sets.
**Proposition 3.6**.: _If \((A,B,C)\) is equilateral trapezoid-free, then so is \((B,C,A)\)._
By an application of Theorem 2.11 combined with the observation that near-extremal solutions to Proposition 3.1 are highly "regular", we have the following weak improvement to the trivial upper bound of \(|G|^{3/2}\).
**Proposition 3.7**.: _For any group \(G\), \(\operatorname{val}(G)\leq o(|G|^{3/2})\)._
Proof.: Suppose for contradiction that there exists \(\varepsilon_{0}>0\) such that \(\operatorname{val}(G)>\varepsilon_{0}|G|^{3/2}\), and let \(A_{0},B_{0},C_{0}\subseteq G\) witness \(\operatorname{val}(G)=\varepsilon_{0}|G|^{3/2}\). Next consider the triple \((A,B,C):=(A_{0}\times B_{0}\times C_{0},B_{0}\times C_{0}\times A_{0},C_{0} \times A_{0}\times B_{0})\), which is equilateral-trapezoid free inside of \(H:=G^{3}\) by Proposition 3.5 and Proposition 3.6, and witnesses \(\operatorname{val}(H)\geq\varepsilon|H|^{3/2}\) where \(\varepsilon:=\varepsilon_{0}^{3}\). Let \(|H|=N\). Let \(X\) be the tripartite hypergraph with parts \(A,B,C\) and where there is a triangle between all triples \((a,b,c)\) where \(abc=I\). Let \(n:=|A|=|B|=|C|\). By Proposition 3.1 we must have \(n\geq\varepsilon^{2/3}N\). Note that the number of triangles in \(X\) equals \(\varepsilon N^{3/2}\geq\varepsilon n^{3/2}\). In what follows, the degree of a vertex in \(X\) refers to the number of triangles containing it.
Let \(Y\) be the random variable that is uniformly distributed over the multiset of vertex degrees from one part of \(X\), say \(A\). Then \(\mathbb{E}[Y]\geq\varepsilon n^{1/2}\) and \(\mathbb{E}[Y^{2}]\leq n\) (this second inequality follows from the use of Cauchy-Schwarz in the proof of Proposition 3.1). By the Payley-Zygmund inequality, for any \(\theta>0\), \(\Pr(Y>\theta\cdot\varepsilon n^{1/2})\geq(1-\theta^{2})\varepsilon^{2}\). Taking \(\theta=1/2\), we conclude that at least \(p\cdot n:=3n\varepsilon^{2}/4\) vertices in \(A\) have degree at least \(\varepsilon n^{1/2}/2\). This holds for \(B\) and \(C\) as well.
Now let \(S,T,\) and \(U\) be any subsets of \(A,B,C\) of size at least \(n(1-p/\lambda)\); we'll pick \(\lambda\in\mathbb{N}\) later. Then the number of triangles incident to any one of these sets, say \(S\), is at least
\[np(1-\lambda^{-1})\cdot\varepsilon n^{1/2}/2=(3/8)n^{3/2}\varepsilon^{3}(1- \lambda^{-1}),\]
and the number of triangles incident to \([n]\setminus T\) or \([n]\setminus U\), sets of size at most \(np/\lambda\), is at most
\[(n^{2}\cdot np/\lambda)^{1/2}=(3^{1/2}/2)n^{3/2}\varepsilon\lambda^{-1/2}\]
by Cauchy-Schwarz. It follows that the number of triangles with one vertex in each of \(S,T,U\) is at least
\[(3/8)n^{3/2}\varepsilon^{3}(1-\lambda^{-1})-2\cdot(3^{1/2}/2)n^{3/2} \varepsilon\lambda^{-1/2}\]
which is greater than \(1\) for \(\lambda\gg\varepsilon^{-4}\). In summary, between any three subsets of \(A,B,C\) size roughly \(n(1-\varepsilon^{6})\), there is a triangle.
Recall that \(n\geq\varepsilon^{2/3}N\). Since \(X\) has at most \(N^{3/2}\leq o(N^{2})\) triangles, by Theorem 2.11 we can remove \(o(N)=o(n)\) vertices to remove all triangles. But by what we have just shown, after deleting this few vertices some triangle will remain, a contradiction.
**Remark 3.8**.: By combining this proof with [11], it follows that for fixed \(n\) and some \(\varepsilon>0\), \(\operatorname{val}(\mathbb{Z}_{n}^{\ell})\leq O(n^{3/2(1-\varepsilon)\ell})\).
## 4 \(\operatorname{val}(\mathbb{Z}_{n})\) and its applications
Our weakest conjecture is the following.
**Conjecture 4.1**.: _For all \(\varepsilon>0\), \(\operatorname{val}(\mathbb{Z}_{n})\leq O(n^{1+\varepsilon})\)._
In this section we give our potential applications of this conjecture. We then introduce several related quantities and make preliminary progress on understanding them.
While the quantity \(\operatorname{val}(\mathbb{Z}_{n})\) may seem opaque from Definition 3.2, it can easily be visualized. This is done by first considering the natural notion of an equilateral trapezoid-free subset of the plane, which is convenient to introduce sooner rather than later. Throughout this section, we let \(\Delta_{n+1}=\{(a,b,c)\in\mathbb{Z}_{\geq 0}^{3}:a+b+c=n\}\). A subset of \(\Delta_{n+1}\) is said to be corner-free if it contains no configuration \((x+\delta,y,z),(x,y+\delta,z),(x,y,z+\delta)\).
**Definition 4.2**.: Let \(A,B,C\subseteq\{0,\dots,n\}\). We call \((A,B,C)\) an equilateral trapezoid-free triple if for any fixed \(a^{\prime},b^{\prime},c^{\prime}\), the following systems of equations in the variables \(a\in A,b\in B,c\in C\) each have at most one solution:
\[n =a^{\prime}+b+c=a+b^{\prime}+c\] \[n =a^{\prime}+b+c=a+b+c^{\prime}\] \[n =a+b^{\prime}+c=a+b+c^{\prime}.\]
Let \(\operatorname{val}(n)\) be the maximum number of solutions to \(a+b+c=n\) over all equilateral trapezoid-free triples \((A,B,C)\).
We may visualize equilateral trapezoid-free sets as follows. Draw \(\Delta_{n+1}\) in the plane as a triangular grid of points. Sets \(A,B,C\) correspond to collections of lines parallel to the sides of \(\Delta_{n+1}\), and a solution \(a+b+c=n\) corresponds a point in \(\Delta_{n+1}\) contained in one line in each of these three directions. Let \(S\subseteq\Delta_{n+1}\) be the collection of all such points. A violation of a constraint of Definition 4.2 corresponds to either a subset of \(3\) points in \(S\) forming an equilateral triangle with sides parallel to the sides of \(\Delta_{n+1}\), or a subset of \(4\) points with sides parallel to the sides of \(\Delta_{n+1}\) forming an equilateral trapezoid. Equivalently, we are deleting lines parallel to the sides of \(\Delta_{n+1}\) to eliminate all of such configurations, while leaving as many points as possible. The maximum possible number of points left equals \(\operatorname{val}(n)\). See Figure 4.
The following shows that \(\operatorname{val}(n)\) and \(\operatorname{val}(\mathbb{Z}_{n})\) are essentially the same.
**Proposition 4.3**.:
1. \(\operatorname{val}(n)\geq\operatorname{val}(n-1)\)_._
2. \(1+2\cdot\operatorname{val}(2n)\geq\operatorname{val}(\mathbb{Z}_{n})\geq \operatorname{val}(\lfloor n/3\rfloor)\)_._
3. _For_ \(n\geq 6n^{\prime}\)_,_ \(\operatorname{val}(\mathbb{Z}_{n})\geq\operatorname{val}(\mathbb{Z}_{n}^{ \prime})/2-1\)__
Proof.: Suppose that \(\operatorname{val}(n)\) is witnessed by sets \(A,B,C\). For \(N>n\), \(A+(N-n),B,C\) then witness \(\operatorname{val}(N)\geq\operatorname{val}(n)\), which shows (1). If we take \(N=3n\), we have that \(A+2n\subseteq\{0,\dots,N\}\) and \(B,C\subseteq\{0,\dots,N/3\}\), so \(a+b+c\leq 5N/3<2N\). Since \(a+2n+b+c=0\) mod \(N\iff a+2n+b+c=N\), this implies that the sets
are equilateral trapezoid-free when viewed as subsets of \(\mathbb{Z}_{N}\). This shows one direction of (2). In the other direction, suppose \(\operatorname{val}(\mathbb{Z}_{n})\) is witnessed by \(A,B,C\bmod n\). There are at least \((\operatorname{val}(\mathbb{Z}_{n})-1)/2\) solutions to one of \(a+b+c=n,a+b+c=2n\); let \(N\) be the right-hand side of the most frequently satisfied equation. Since every solution to \(a+b+c=N\) is a solution to \(a+b+c=0\bmod n\), \(A,B,C\) must be equilateral trapezoid-free when viewed as subsets of \(\{0,\ldots,N\}\). This shows the other direction of (2).
Finally, (3) follows from (1) and (2).
**Theorem 4.4**.: _Suppose that one can achieve \(\omega=2\) via STPP constructions in the family of groups \(\mathbb{Z}_{q}^{\ell}\), \(q\) a prime power. Then there exists a constant \(c>0\) such that \(\operatorname{val}(\mathbb{Z}_{n})\geq\Omega(n^{1+c})\)._
Proof.: By Corollary 2.9 and Corollary 2.10, any STPP construction with sets \(X_{i},Y_{i},Z_{i}\) satisfies \(\sum|X_{i}||Y_{i}|\leq(q/C)^{\ell}\) (we choose the \(X\) and \(Y\) sets without loss of generality) where \(C\) is an absolute constant. By Holder's inequality, \(\sum(|X_{i}||Y_{i}||Z_{i}|)^{2/3}\leq q^{2\ell/3}(q/C)^{\ell/3}=(q/C^{1/3})^{\ell}\). If we can obtain \(\omega<3-\alpha\) via Theorem 2.2, then
\[q^{\ell}<\sum(|X_{i}||Y_{i}||Z_{i}|)^{2/3\cdot\alpha+(1-\alpha)} =\sum(|X_{i}||Y_{i}||Z_{i}|)^{2/3\cdot\alpha}(|X_{i}||Y_{i}||Z_{i} |)^{1-\alpha}\] \[\leq(\sum(|X_{i}||Y_{i}||Z_{i}|)^{2/3})^{\alpha}(\sum|X_{i}||Y_{i }||Z_{i}|)^{1-\alpha}\] \[\leq(q/C^{1/3})^{\alpha\ell}\operatorname{val}(G)^{1-\alpha}\]
so \(\operatorname{val}(G)>q^{\ell}(C^{\alpha/3(1-\alpha)})^{\ell}\). By choosing \(\alpha\) sufficiently close to \(1\), \(\operatorname{val}(G)>q^{\ell}4^{\ell}\). By taking \(k\)-fold products of the sets defining the STPP constructions (using that products of STPPs are STPPs [1, Lemma 5.4]), we find that \(\operatorname{val}(\mathbb{Z}_{q}^{k\ell})>(4q)^{k\ell}\) for all \(k\).
Let \(N=k\ell\). Consider the embedding \(\varphi:\mathbb{Z}_{q}^{N}\to\mathbb{Z}_{(3q)^{N}}\) defined by \(\varphi(x_{1},\ldots,x_{N})=x_{1}+x_{2}3q+\cdots+x_{n}(3q)^{N-1}\). Since \(\sum y_{i}(3q)^{i-1}\) has a unique such expression in \(\mathbb{Z}_{(3q)^{N}}\) when \(y_{i}<3q\), it follows that
\[a_{1}+a_{2}+a_{3}\neq a_{4}+a_{5}+a_{6}\implies\varphi(a_{1})+\varphi(a_{2}) +\varphi(a_{3})\neq\varphi(a_{4})+\varphi(a_{5})+\varphi(a_{6}).\]
Hence the image of an STPP under \(\varphi\) is an STPP inside of \(\mathbb{Z}_{(3q)^{N}}\), so \(\operatorname{val}(\mathbb{Z}_{(3q)^{N}})>(4q)^{N}\). Because this holds for some particular \(q\) and all \(N=k\ell\), by part (3) of Proposition 4.3 the theorem follows.
**Corollary 4.5**.: _Suppose that there is a family of STPP constructions obtaining \(\omega=2\) in a family of abelian groups with a bounded number of direct factors. Then there exists a constant \(c>0\) such that \(\operatorname{val}(\mathbb{Z}_{n})\geq\Omega(n^{1+c})\)._
Figure 4: Left: some forbidden trapezoids and triangles in \(\Delta_{8}\). Right: a trapezoid-free subset of \(\Delta_{8}\) of size \(8\) obtained by deleting all lines but one along one direction.
Proof.: Suppose we have a family of STPP construction in groups of the form \(G=\mathbb{Z}_{m_{1}}\times\cdots\times\mathbb{Z}_{m_{\ell}}\), with \(\ell\) fixed. We can then obtain an STPP construction in \(\mathbb{Z}_{p}^{\ell}\), where \(p\) is the smallest prime greater than \(\max_{i\in k}3m_{i}\), by taking the image of this STPP under the map sending \((x_{1},\ldots,x_{k})\to x_{1}+x_{2}p+\cdots+x_{k}p^{k-1}\). As \(k\) is fixed, it follows from Bertrand's postulate that \(p^{k}\leq O(|G|)\). The inequality Theorem 2.2 then implies that one can also obtain \(\omega=2\) in the family of groups \(\mathbb{Z}_{p}^{\ell}\), so we conclude by Theorem 4.4.
**Remark 4.6**.: Although we expect that Theorem 4.4 is true when the hypothesis is extended to arbitrary abelian groups, we do not know how to generalize to e.g. \(\mathbb{Z}_{n}^{\ell}\) for arbitrary \(n\). This is due to the fact that better bounds on the size of \(3\)-matchings in cyclic groups with prime power modului are known than for general moduli (compare Theorems A and A' in [1]). To the best of our knowledge, it is an open problem whether the known bounds for non-prime power moduli are tight. For prime power moduli, the known bounds are tight by [11].
Next we show that sufficiently strong simultaneous double product property constructions, which are known to prove \(\omega<2.48\)[1, Proposition 4.5], imply strong lower bounds on \(\operatorname{val}(\mathbb{Z}_{n})\). We thank Chris Umans for informing us of the fact that if Conjecture 2.5 is true, then it is true in cyclic groups, which motivated the following theorem.
**Theorem 4.7**.: _If Conjecture 2.5 is true, then for any \(\varepsilon>0\), \(\operatorname{val}(\mathbb{Z}_{n})\geq O(n^{4/3-\varepsilon})\)._
Proof.: We begin by recalling how to turn an SDPP construction into an STPP construction [1, Section 6.2]. Let \(S\subset\Delta_{n}\) be corner-free and of size \(n^{2-o(1)}\). For all \(v=(v_{1},v_{2},v_{3})\in S\), define the following subsets of \(G^{3}\):
\[A_{v} =A_{v_{1}}\times\{1\}\times B_{v_{3}},\] \[B_{v} =B_{b_{1}}\times A_{v_{2}}\times\{1\},\] \[C_{v} =\{1\}\times B_{v_{2}}\times A_{v_{3}}.\]
It can be verified that the sets \((A_{v},B_{v},C_{v})_{v\in S}\) satisfy the STPP. Hence Conjecture 2.5 yields an STPP with \(n^{2-o(1)}\) triples of sets of size \(n^{2-o(1)}\), inside a group of size \(n^{6-o(1)}\).
Now consider the map from \(G^{3}=\mathbb{Z}_{m_{1}}\times\cdots\times\mathbb{Z}_{m_{k}}\), where \(m_{1}\leq m_{2}\leq\cdots\leq m_{k}\), to \(G^{\prime}:=\mathbb{Z}_{\prod_{3}m_{i}}\) sending \((x_{1},\ldots,x_{k})\) to \(x_{1}+(3m_{1})x_{2}+(3m_{1})(3m_{2})x_{3}+\cdots\). First, the image of sets satisfying the STPP under this map still satisfy the STPP. This shows that \(\operatorname{val}(G^{\prime})>n^{2-o(1)}\cdot n^{3(2-o(1))}=n^{8-o(1)}\). Second, for all fixed \(c>0\) and \(\ell\in\mathbb{N}\), \(G^{3}\) cannot contain a subgroup of size \(|G^{3}|^{c}\) generated by elements of order at most \(\ell\) by [1, Proposition 4.2]. Hence the number of \(m_{i}\)'s which are at most \(\ell\) is at most \(\log_{2}(|G^{3}|^{c})\). The number of \(m_{i}\)'s which are greater than \(\ell\) is trivially less than \(\log_{\ell}|G^{3}|\). So,
\[|G^{\prime}|=\prod_{m_{i}\leq\ell}3m_{i}\prod_{m_{i}>\ell}3m_{i}\leq 3^{\log_{ 2}(|G^{3}|^{c})+\log_{\ell}|G^{3}|}\cdot|G^{3}|.\]
By taking \(c\) sufficiently small and \(\ell\) sufficiently large, this is at most \(n^{6+\delta}\) for any desired \(\delta>0\). The claimed bound follows.
Note that here there is no restriction on the family of abelian groups in consideration, unlike there was in the previous theorem.
### Relaxations of \(\operatorname{val}(\mathbb{Z}_{n})\)
In this section we explore some strengthenings of Conjecture 4.1 which may be easier to understand. We start by discussing an over-strengthening of Conjecture 4.1 which _cannot_ give any barriers. We then discuss a few strengthenings for which our knowledge is embarrassingly bad, including the notions of skew-corner free sets from the introduction.
Considerations of the proof of the \(n^{3/2}\) upper bound of Proposition 3.4 reveal that it actually held for a (possibly) much weaker problem, where one only requires that the expected number of solutions of one of the three systems of two equations in Definition 3.2 is at most \(1\). We begin by noting that this upper bound is essentially best-possible for this weakened problem. In other words, one cannot hope to prove Conjecture 4.1 via an "asymmetric" averaging argument.
**Proposition 4.8**.: _There exist \(A,B,C\subseteq\mathbb{Z}_{n}\) such that_
\[\mathbb{E}_{a^{\prime}\in A,b^{\prime}\in B}\left[\#\{(a,b,c):0=a^{\prime}+b+ c=a+b^{\prime}+c\}\right]\leq 1\]
_and there are \(n^{3/2-o(1)}\) solutions to the equation \(a+b+c=0\) with \(a\in A,b\in B,c\in C\)._
Proof.: Let \(r(A,B,c)\) denote the number of representations of \(c\) as \(a+b\). First note that the proposition is equivalent to the statement that \(\sum_{c\in C}r(A,B,-c)^{2}\leq|A||B|\) and \(\sum_{c\in C}r(A,B,-c)=n^{3/2-o(1)}\).
Let \(S\subset[n]\) be 3AP-free and of size \(n^{1-o(1)}\). Consider the sets
\[A=B=[3n^{2},4n^{2}]\cup\bigcup_{x\in S}[xn,xn+n/2],C=-\{2xn+y:x\in S,y\in[n]\}\]
regarded as subsets of \(\mathbb{Z}_{100n^{2}}\). By definition, for any \(x\in S\) and \(y\in[n]\), \(-(2xn+y)=c\in C\). If we have any representation \(-c=a+b\), then \(a,b<3n^{2}\). So we have \(a=x_{1}n+y_{1},b=x_{2}n+y_{1}\) with \(x_{1},x_{2}\in S\) and \(1\leq y_{1},y_{2}\leq n\). So \((x_{1}+x_{2})n+(y_{1}+y_{2})=2xn+y\), and then we are forced to have \(x_{1}+x_{2}=2x\) and \(y_{1}+y_{2}=y\). But because \(S\) is 3AP-free, we must have \(x_{1}=x_{2}=x\). Hence \(r(A,B,-c)\) is exactly the number of solutions to \(y=y_{1}+y_{2}\) with \(y_{1},y_{2}\in[n]\), which is \(\Omega(n)\) for \(\Omega(n)\) choices of \(y\in[n]\). Hence \(\sum_{c\in C}r(A,B,-c)=\Theta(|S|n^{2})=n^{3-o(1)}\). Also, we have that \(\sum_{c\in C}r(A,B,-c)^{2}=n^{4-o(1)}<|A||B|=\Theta(n^{4})\), and we are done.
Can one find a construction achieving \(n^{3/2-o(1)}\) for the averaging version of Definition 3.2 that involves all three systems of equations? That is:
**Question 4.9**.: What is the maximum over all \(A,B,C\subseteq\mathbb{Z}_{n}\) satisfying
\[\mathbb{E}_{a^{\prime}\in A,b^{\prime}\in B}\left[\#\{(a,b,c):0=a ^{\prime}+b+c=a+b^{\prime}+c\}\right]\leq 1,\] \[\mathbb{E}_{a^{\prime}\in A,c^{\prime}\in C}\left[\#\{(a,b,c):0=a ^{\prime}+b+c=a+b+c^{\prime}\}\right]\leq 1,\] \[\mathbb{E}_{b^{\prime}\in B,c^{\prime}\in C}\left[\#\{(a,b,c):0=a +b^{\prime}+c=a+b+c^{\prime}\}\right]\leq 1,\]
of the number of solutions to \(a+b+c=0\)?
There are a number of relaxations of the quantity \(\operatorname{val}(n)\) for which we know basically nothing. A first relaxation that still seems very stringent is that of a _triforce-free_ triple, defined as follows.
**Definition 4.10**.: Let \(A,B,C\subseteq\{0,\ldots,n\}\). We say that \((A,B,C)\) is triforce-free if there is no solution to
\[a+b+c^{\prime}=a+b^{\prime}+c=a^{\prime}+b+c=n\]
with \(a\neq a^{\prime},b\neq b^{\prime},c\neq c^{\prime}\). We write \(\operatorname{val}(\operatorname{\boldsymbol{\triangle}},n)\) for the maximum over all such \(A,B,C\) of the number of solutions to \(a+b+c=n\).
This condition just says that \(\{(a,b,c)\in A\times B\times C:a+b+c=n\}\subseteq\Delta_{n+1}\) is corner-free. Equivalently, \((A,B,C)\) is triforce-free if the hypergraph with parts \(A,B,C\) and triangles between any triples summing to \(n\) does not contain the triforce hypergraph (the second hypergraph in Figure 3). As every equilateral trapezoid-free triple of sets also has this property, we have the following.
**Proposition 4.11**.: \(\operatorname{val}(\operatorname{\boldsymbol{\triangle}},n)\geq\operatorname {val}(n)\)_._
Here is an even weaker notion than that of being triforce-free. We thank Ryan O'Donnell for suggesting this definition.
**Definition 4.12**.: We call \(S\subseteq\Delta_{n}\) skew-corner free if for \((a,b,c),(a,b^{\prime},c^{\prime})\in S\), it holds that \((a+b-b^{\prime},b^{\prime\prime},c^{\prime\prime})\notin S\) for all \(b^{\prime\prime},c^{\prime\prime}\), and this remains true after any permutation of the coordinates of \(S\).
Pictorially, this says that for any two points lying on an axis-aligned line in \(\Delta_{n}\), the parallel line passing through a third point that would form a corner with these two points must contain no points. As Definition 4.10 yields corner-free subsets of \(\Delta_{n}\) obtained by deleting axis-aligned lines, it follows that this is a relaxation of being triforce-free. More formally, we have the following.
**Proposition 4.13**.: _The largest skew-corner free subset of \(\Delta_{n+1}\) is at least \(\operatorname{val}(\operatorname{\boldsymbol{\triangle}},n)\)._
Proof.: Suppose that \(A,B,C\subset\{0,\ldots,n\}\) satisfy the conditions of Definition 4.10, and let \(S=\{(a,b,c)\subset A\times B\times C:a+b+c=n\}\subseteq\Delta_{n+1}\). Suppose for contradiction that \((a,b,c),(a,b^{\prime},c^{\prime})\in S\) and \((a-b+b^{\prime},b^{\prime\prime},c^{\prime\prime})\in S\). Since \(a-b+b^{\prime}\in A,b\in B,c^{\prime}\in C\) and \((a-b+b^{\prime})+b+c^{\prime}=a+b^{\prime}+c^{\prime}=n\), it follows that \((a-b+b^{\prime},b,c^{\prime})\in S\). But this is impossible: the three solutions \(a+b^{\prime}+c^{\prime}=n,a+b+c=n,(a-b+b^{\prime})+b+c^{\prime}\) violate Definition 4.10. One reasons similarly about other permutations of coordinates.
The best lower bound that we know on the size of the largest skew-corner free subset of \(\Delta_{n}\) is \(\Omega(n)\); \(n\) is obtained trivially by taking one line on the side of \(\Delta_{n}\), and it is not hard to improve this to \(3n/2\). We have found examples exceeding these bounds with computer search (see Figure 2).
If we weaken Definition 4.12 by dropping the requirement that the condition holds for all permutations of coordinates, we are led to the following notion.
**Definition 4.14**.: We say \(S\subset[n]^{2}\) is skew corner-free if it contains no configuration \((x,y),(x,y+d),(x+d,y^{\prime})\) with \(d\neq 0\).
**Proposition 4.15**.: _The largest skew corner-free subset of \([n]^{2}\) is at least as big as the largest skew corner-free subset of \(\Delta_{n}\)._
Proof.: Given a skew corner-free set \(S\subseteq\Delta_{n}\), let \(S^{\prime}\) be its projection onto the first two coordinates. This is a subset of \(\{0,\ldots,n-1\}^{2}\) of size \(|S|\). By definition, it contains no points \((a,b),(a,b^{\prime}),(a+b-b^{\prime},b^{\prime\prime})\). By shifting each point by \((1,1)\) we obtain a subset of \([n]^{2}\) with this property.
As a consequence, we have Theorems 1.3 and 1.4.
Proof of Theorem 1.3 and Theorem 1.4.: By Theorem 4.4, if \(\omega=2\) via STPP constructions in \(\mathbb{Z}_{q}^{\ell}\), then \(\operatorname{val}(\mathbb{Z}_{n})\geq\Omega(n^{1+c})\). By Proposition 4.3, \(\operatorname{val}(\mathbb{Z}_{n})=\Theta(\operatorname{val}(n))\), and by Propositions 4.13 and 4.15, \(\operatorname{val}(n)\) is at most the size of the largest skew corner-free subset of \([n]^{2}\). This proves Theorem 1.3. One similarly concludes Theorem 1.4 by using Theorem 4.7.
We have the following nontrivial lower bound for this relaxed problem, due to a MathOverflow answer of Fedor Petrov [20].
**Proposition 4.16**.: _There is a skew corner-free subset of \([n]^{2}\) of size \(\Omega(n\log n/\sqrt{\log\log n})\)._
Proof.: \(A\subseteq[n]\) is called _primitive_ if for all \(a\neq a^{\prime}\in A,a\nmid a^{\prime}\). It is easily seen that if \(A\) is primitive then the set of points \((a,ka)\subseteq[n]^{2}\) for all \(k\leq n/a\) avoids the forbidden configurations. This gives a subset of size \(n\sum_{a\in A}1/a\). At the same time, there exists a \(c>0\) and a primitive set \(A\) where \(\sum_{a\in A}1/a>c\log n/(\log\log n)^{1/2}\)[2]. We note that this is best-possible, matching (up to the constant) an upper bound on \(\sum_{a\in A}1/a\) for primitive \(A\) due to Behrend [1].
This construction breaks when we strengthen the definition of skew corner-freeness in \([n]^{2}\) to forbid skew corners with two points parallel to the \(x\) axis. This corresponds to the following notion.
**Definition 4.17**.: We say \(S\subset[n]^{2}\) is bi-skew corner-free if it contains no configurations \((x,y),(x,y+d),(x+d,y^{\prime})\) or \((x,y),(x+d,y),(x^{\prime},y+d)\), with \(d\neq 0\).
As far as we know, it is possible that the largest bi-skew corner-free subset has size \(O(n)\).
## 5 Acknowledgments
I thank Ryan O'Donnell for many useful discussions about these problems, and for suggesting Definition 4.12. I also thank Chris Umans for making comments which motivated this paper early on, and in particular, which motivated Theorem 4.7.
|
2309.04871 | A study about black hole solutions with nonconstant transversal
curvature and its conserved charges in Lovelock gravity | In this work, the analysis of some new static black hole solutions of
Lovelock gravity with nonconstant curvature transverse section is presented. It
will be shown that the finiteness of the charges and the action principle rely
on the existence of constraints on the geometry of the transverse sections.
Finally, in this context, some new sound solutions with nonconstant curvature
transverse sections that deviate from the previously known geometries are
discussed. | R. Aros, Milko Estrada | 2023-09-09T20:15:32Z | http://arxiv.org/abs/2309.04871v3 | A study about black hole solutions with nonconstant transversal curvature and its conserved charges in Lovelock gravity.
###### Abstract
In this work, the analysis of some new static black hole solutions of Lovelock gravity with nonconstant curvature transverse section is presented. It will be shown that the finiteness of the charges and the action principle rely on the existence of constraints on the geometry of the transverse sections. Finally, in this context, some new sound solutions with nonconstant curvature transverse sections that deviate from the previously known geometries are discussed.
pacs: 04.50.+h, 04.70.Bw
## I Introduction
During the last decades, there is no doubt that the study of asymptotically AdS spaces, their corresponding conformal infinities, and the potential conformal field theories that can be formulated on those spaces, have been mainstream topics in theoretical physics and mathematics. For some recent interesting examples see [1] and references therein.
Being the AdS/CFT correspondence [2], or holography in general [3; 4], the main motivation behind the study of asymptotically AdS spaces, a large body of work has been devoted to exploring spaces whose asymptotic regions could match a representant of a (equivalent class) of a relevant conformal (differential) manifold. With this in mind, the evidence that non-trivial (conformal) geometries play a fundamental role in the value of the conformal anomalies [5; 6; 7; 8], certainly motivated the study of asymptotic AdS solution with nontrivial transverse sections [9; 10]. It is worth to emphasize that this is not in conflict with the existence of static spaces, nor, in principle, with the Fefferman-Graham expansion [11] for asymptotic Einstein spaces. However, this required the extension of Birkhoff's theorem[12].
To continue it is worth mentioning that, as is known, not any theory of gravity has second-order EOM, which, in turn, could determine the existence of non-casual solutions. In the cases of interest, nonetheless, this is solved by only considering the families of causal well-behaved solutions and ignoring the rest. Within the families of asymptotic AdS solutions, the usual subsets are only required to be asymptotically Einstein spaces. However, this is not the final situation, and one can further extend the spectrum of solutions by recalling the existence of additional theories of gravity with second-order EOM, to include solutions that asymptotically, though well-behaved, do not converge into an Einstein space. See [13; 14; 15] and references therein. It must be noticed that the presence of some particular matter fields, such as scalar fields, could modify that asymptotic behavior. See for instance [16] or [17]. For the vacuum solutions, this whole scenario can be visualized, for spaces with constant curvature transverse section, by using Schwarzschild coordinates. This is
\[ds^{2}=-f(r)^{2}dt^{2}+\frac{1}{f(r)^{2}}dr^{2}+r^{2}(\bar{g}_{ij}dy^{i}dy^{j}), \tag{1}\]
where \(\bar{g}_{ij}dy^{i}dy^{j}\) stands for the line element of a transverse section of constant curvature \(\Sigma_{\gamma}\) (with \(\gamma=\pm 1,0\)). Now, in order to Eq.(1) be an asymptotically locally AdS (**ALAdS**) space it must be satisfied
\[\lim_{r\rightarrow\infty}f(r)^{2}\sim\gamma+\frac{r^{2}}{l^{2}}-\frac{C_{2}}{ r^{a}},\]
with \(a>0\). Remarkably, Lovelock gravity [18] is the simplest case where the general statement above can be confirmed. In this case [14; 15],
\[\lim_{r\rightarrow\infty}f(r)\sim\gamma+\frac{r^{2}}{l^{2}}-\left(\frac{C_{2} }{r^{d-2k-1}}\right)^{1/k}. \tag{2}\]
To finish it is worth recalling that extending the spectrum of solutions is not usually straightforward. This is because, on-shell, any proper solution must define a finite action principle and satisfy suitable boundary conditions that yield a well-defined variation principle. Furthermore, its associated conserved charges must be finite. The crux for any ALAdS space is that a regularization process must be introduced to attain the finiteness of the action and the conserved charges and have a well-defined variational principle. Moreover, this is mandatory to achieve a dual CFT interpretation within the AdS/CFT conjecture. This has been thoroughly studied in the literature in several different ways. See for instance [19; 20; 21] and more recently [22; 23; 24; 25].
In the next sections, this work will analyze some new black hole solutions of Lovelock gravity with nonconstant curvature transverse section and the conditions under which the finiteness of the action principle and the conserved charges is attained. Because of its general appliance, this work follows the method of regularization described in [26; 27; 20].
### Gravity
Among the many possible theories of gravity that are worth consideration, Lovelock gravities have a substantial role due to retaining in higher dimensions the essential features GR has. For instance, their equations of motion are second-order differential equations. The action principle on \(d\)-dimensional manifold \((\mathcal{M},g)\) is given by
\[\mathbf{L}=\sum_{p=0}^{[\frac{d-1}{2}]}\alpha_{p}L_{p} \tag{3}\]
where \([X]\) stands for the integer part of \(X\) and
\[L_{p}=\frac{1}{2^{p}}\delta_{\mu_{1}\ldots\mu_{2p}}^{\mu_{1}\ldots\mu_{2p}}R^{ \nu_{1}\nu_{2}}_{\phantom{\nu_{1}}\mu_{1}\mu_{2}}\ldots R^{\nu_{2p_{p-1}\nu_{ 2p}}^{\mu_{2p-1}\nu_{2p}}}\sqrt{g},\]
with \(R^{\nu_{1}\nu_{2}}_{\phantom{\nu_{1}}\mu_{1}\mu_{2}}\) the Riemann tensor and \(\{\alpha_{p}\}\) a set of arbitrary constants.
For shortness the Lovelock Lagrangian in Eq.(3) can be written in terms of the Riemann two-form curvature, \(R^{ab}=d\omega^{ab}+\omega^{a}_{\phantom{a}c}\omega^{cb}\), and the vielbein \(e^{a}\). See for instance [28]. In this formalism, the Lovelock action reads
\[\mathbf{L}=\sum_{p=0}^{n-1}\alpha_{p}\,\epsilon_{a_{1}\ldots a_{2n}}\left[(R) ^{p}\left(\frac{e}{l}\right)^{2n-2p}\right]^{a_{1}\ldots a_{2n}}, \tag{4}\]
where
\[\left[(R)^{p}\left(\frac{e}{l}\right)^{2n-2p}\right]^{a_{1}\ldots a_{2n}}=R^{ a_{1}a_{2}}\wedge\ldots R^{a_{2p-1}a_{2p}}\wedge\frac{e^{\,a_{2p+1}}}{l} \wedge\ldots\wedge\frac{e^{\,a_{2n}}}{l}.\]
The analysis of the corresponding equations of motion is depicted in appendix A. Unfortunately, it is straightforward to show that the action principle above, either in even or odd dimensions, is ill-defined on an ALAdS space and thus it must be supplemented by a suitable boundary term \(\Omega\) to attain a proper action principle. A suitable formalism in odd dimensions for the construction of \(\Omega\) is described in appendix B. See Refs.[26; 27; 20]. In even dimensions, this is much simpler and suffixes the addition of the corresponding Euler density [29].
To continue, the form of the Noether current, associated with the invariance under diffeomorphisms \(x\to x+\xi\), is given by
\[{}^{*}\mathbf{J}_{\xi}=-d\left(I_{\xi}w^{ab}\tau_{ab}+I_{\xi}\Omega\right), \tag{5}\]
where
\[\tau_{ab}=\frac{\partial\mathbf{L}}{\partial R^{ab}}.\]
### Horizon, Killing vectors and Boundary Conditions
Let \(\mathcal{M}\) be a static black hole geometry of topology \(\mathbb{R}\times\Sigma\). Being \(\mathcal{M}\) a black hole geometry it must have an internal boundary to accommodate the presence of a horizon, _i.e._, \(\partial\mathcal{M}=\partial\mathcal{M}_{\infty}\oplus\partial\mathcal{M}_{H}\). For simplicity, it will be assumed
that \(\partial\Sigma=\partial\Sigma_{\infty}\oplus\partial\Sigma_{H}\), which denotes the spatial infinity and the event horizon respectively. The boundary of \(\mathcal{M}\) is therefore given by
\[\partial\mathcal{M}=\partial\Sigma\times\mathbb{R}=\partial\Sigma_{H}\times \mathbb{R}\cup\partial\Sigma_{\infty}\times\mathbb{R}.\]
As mentioned above, \({}^{*}\mathbf{J}_{\xi}\) can be constructed for any \(\xi\). However, to define a physical conserved charge some conditions must be satisfied. First, it is necessary that \(\mathcal{M}\), has, at least asymptotically, a timelike symmetry. Second, \(\xi\) must generate an isometry, namely a Killing vector. Furthermore, \(\xi\) must be compatible with preserving the boundary conditions. With all this in mind, following [30; 31; 32], \(\xi\) will be considered the null generator of the event horizon [33]. This implies, in first-order formalism, that
\[I_{\xi}\omega^{a}_{\ \ b}\xi^{b}\big{|}_{\partial\Sigma_{H}}=\left.\kappa\xi^{a} \right|_{\partial\Sigma_{H}} \tag{6}\]
where \(\kappa=4\pi T\) is the surface gravity with \(T\) the temperature of the horizon. Notice that fixing \(\omega^{ab}\) at the horizon fixes \(T\). Furthermore, it must be stressed that fixing \(\omega^{ab}\) is a suitable boundary condition at the horizon and equivalent to fixing the second fundamental form or the extrinsic curvature on the horizon. See [34].
Now, considering the asymptotic region, any asymptotically local AdS space satisfies
\[\lim_{x\rightarrow\partial\mathcal{M}_{\infty}}R^{\mu\nu}_{\ \ \ \alpha\beta} \rightarrow-\frac{1}{l^{2}}\delta^{\mu\nu}_{\alpha\beta}, \tag{7}\]
where \(l\) represents an effective AdS radius.
To continue with the discussion, let be a Schwarzschild ansatz with coordinate system \(x^{\mu}=(t,r,y^{i})\),
\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}(\hat{g}_{ij}dy^{i}dy^{j}), \tag{8}\]
where \(\hat{g}_{ij}dy^{i}dy^{j}\) is the line element of an arbitrary transverse section \(\Sigma\). It is straightforward to observe that \(f(r)\geq 0\) determines a well-defined region, being \(\xi=\partial_{t}\) the null generator of the horizon due to \(\xi\cdot\xi=-f(r)\). In this way, \(\partial\Sigma_{H}\), is defined by a _radius_\(r=r_{+}\) subjected to \(f(r_{+})=0\). If \(f(r)\) has more than one root, it will be assumed that \(r=r_{+}\) is the largest one.
The asymptotic region is defined by \(r\rightarrow\infty\) and would be considered a pseudo asymptotically locally AdS. The condition (7) determines, to leading orders, that
\[\lim_{r\rightarrow\infty}f(r)\sim\Gamma+\frac{r^{2}}{l^{2}}+O(r^{-a}), \tag{9}\]
where \(a>0\) and \(\Gamma\) a universal constant to be determined[35].
### Beyond the constant curvature
To study spaces with nonconstant curvature transverse sections \(\Sigma\), it is convenient to define the set of constants \(c(q)\)[12]
\[c(q) = \int_{\Sigma}\frac{(d-2q-2)!}{2^{q}}\delta^{i_{1}\dots i_{2q}}_{ i_{1}\dots i_{2q}}\hat{R}^{i_{1}i_{2}}_{\ j_{1}j_{2}}\dots\hat{R}^{i_{2q-1}i_{2q}}_{\ j_{2q-1}j_{2q}}\sqrt{\hat{g}}d^{d-2}y \tag{10}\] \[= \int_{\Sigma}\varepsilon_{a_{1}\dots a_{d-2}}\hat{R}^{a_{1}a_{2} }\dots\hat{R}^{a_{d-1}a_{2q}}\hat{e}^{a_{2q+1}}\dots\hat{e}^{a_{d-2}} \tag{11}\]
where \(q=0,\dots[(d-2)/2]\) and \(\hat{R}^{i_{1}i_{2}}_{\ j_{1}j_{2}}\) stands for the Riemann tensor of \(\Sigma\). Firstly, it can be noticed that
\[c(0)=(d-2)!Vol(\Sigma)\ \text{with}\ Vol(\Sigma)\ \text{finite}.\]
## II Gauss Bonnet gravity
As an introduction to the analysis, this section will address the solutions of Gauss-Bonnet gravity with nonconstant transversal curvature. This was presented in [9]. As new results, in this section, it will be computed the Noether charge for \(d=5\) and the necessary conditions to have a finite Noether charge and action principle for \(d>5\).
Einstein-Gauss-Bonnet (EGB) gravity is defined by arbitrary \(\alpha_{p}\) for \(p=0,1,2\) and \(\alpha_{p}=0\)\(\forall p>2\). For the constant transversal curvature case, as was discussed in [13; 14], this theory has ALAdS solutions provided certain conditions are satisfied. The solution, for \(d\geq 5\), is given by
\[f(r)=\frac{b(1)}{b(0)}+\frac{\alpha_{1}}{2\alpha_{2}}r^{2}-\frac{\sqrt{4\alpha_ {2}^{2}(b(1)^{2}-b(2)b(0))+b(0)^{2}(\alpha_{1}^{2}-4\alpha_{0}\alpha_{2})r^{4} +\frac{8m\alpha_{2}b(0)}{r^{d-5}}}}{2b(0)\alpha_{2}}. \tag{12}\]
which coincides with Ref.[9] for \(b(0)=\alpha_{1}=1\) and
\[b(1)=\frac{(d-4)!}{2(d-2)!}\int\hat{R}\sqrt{\tilde{g}}d^{d-2}y\ \mbox{and}\ b(2)=\frac{(d-6)!}{4(d-2)!}\int \delta_{i_{1}\ldots i_{4}}^{i_{1}\ldots i_{4}}\hat{R}_{\ \ j_{1}j_{2}}^{i_{1}i_{2}}\hat{R}_{\ \ j_{3}j_{4}}^{i_{3}i_{4}}\sqrt{\tilde{g}}d^{d-2}y,\]
respectively.
The asymptotic form Eq.(12) for \(d\geq 5\) is given by
\[\lim_{r\rightarrow\infty}f(r) = \frac{b(1)}{b(0)}+\frac{\alpha_{1}}{\alpha_{2}}\left(1-\sqrt{1-4 \frac{\alpha_{0}\alpha_{2}}{\alpha_{1}^{2}}}\right)r^{2}-\left(\left(\frac{b( 1)}{b(0)}\right)^{2}\frac{\alpha_{2}}{\alpha_{1}}\frac{(b(2)b(0)-b(1)^{2})}{ \sqrt{1-4\frac{\alpha_{0}\alpha_{2}}{\alpha_{1}^{2}}}}\right)\frac{1}{r^{2}} \tag{13}\] \[- \frac{2m}{b(0)\alpha_{1}\sqrt{1-4\frac{\alpha_{0}\alpha_{2}}{ \alpha_{1}^{2}}}}\frac{1}{r^{d-3}}+\ldots\]
Here, if this solution is to describe an ALAdS space, it becomes mandatory that \(\alpha_{1}^{2}>4\alpha_{0}\alpha_{2}\). This yields an effective AdS radius given by
\[\frac{1}{l_{\rm eff}^{2}}=\frac{1}{l^{2}}=\frac{\alpha_{1}}{\alpha_{2}}\left( 1-\sqrt{1-4\frac{\alpha_{0}\alpha_{2}}{\alpha_{1}^{2}}}\right).\]
One can also notice that \(b(1)/b(0)\) had replaced the value of the constant curvature of the transverse section in Eq.(2) [14].
Continuing, the general behavior mentioned of Eq.(9) is satisfied by Eq.(13), as expected. Next, beyond the first two terms, one can also notice the presence of a term of \(O(r^{-2})\), and one of \(O(r^{3-d})\), the later the one expected for the Schwarzschild solution in \(d\) dimensions. This hints that the term \(O(r^{-2})\) must be removed for \(d>5\). To confirm this, the charges and action principle will be evaluated on this solution.
### Five dimensions
As mentioned above, in \(d=5\) the asymptotic form, see Eq.(13), does match the asymptotia of GR, but also \(b(2)=0\). The conserved charge associated with \(\xi=\partial_{t}\) is finite and given by
\[Q(\partial_{t})=E=m+\frac{b(1)^{2}}{8b(0)^{2}}\left(2\alpha_{2}-l_{\rm eff}^{2} \alpha_{1}\right). \tag{14}\]
Here, one can notice that \(E\) corresponds to the mass \(m\) plus the _would-be vacuum energy_ of the solution. This is the generalization of the result obtained in [36] in the case of a constant curvature transversal section. The action principle is also finite and given by
\[I_{reg}=-\frac{b(1)^{2}}{4b(0)}l_{\rm eff}^{2}(2l_{\rm eff}^{2}\alpha_{0}+1). \tag{15}\]
This shows that for \(d=5\) the finiteness of both action principle and charges, for the EGB solution with nonconstant transversal curvature, can be attained for any \(b(1)\in\mathbb{R}\) as expected due to the asymptotic form (13) does indeed match the asymptotia of GR.
### More than five dimensions
In \(d>5\)\(b(2)\neq 0\) and the Noether charge \(Q(\partial_{t})\), computed following Eq.B9, diverges as
\[\lim_{r\rightarrow\infty}Q(\partial_{t})\approx(b(2)b(0)-b(1)^{2})r^{d-5}+{\rm finite }+\ldots. \tag{16}\]
By computing the action principle it can be checked that this also diverges by a term proportional to \(b(2)b(0)-b(1)^{2}\). Therefore, a proper action principle can be attained provided
\[b(2)=\frac{b(1)^{2}}{b(0)}. \tag{17}\]
This condition is trivially yielded by any constant curvature transverse section.
It must be noticed that Eq. (17) represents a nontrivial constraint for any non-constant curvature manifold. This also confirms that the term of O(\(r^{-2}\)) for \(d>5\) represents an obstruction to be removed to have sound and meaningful solutions.
## III Higher order Lovelock gravities
To continue the analysis of the EOM, and to address the higher-order Lovelock gravities, it is necessary to unveil a different structure of the EOM. Firstly, it is necessary to manifest the presence of the \(c(q)\) mentioned above. Second, one can notice that the EOM, for \(\alpha_{p}=0\) for \(p>q\), can be expressed in the _pseudo-polynomial_ fashion
\[({\cal E}^{q}_{d})^{\alpha}_{\beta}=\alpha_{q}\delta^{\alpha\nu_{1}\ldots\nu_ {2q}}_{\beta\mu_{1}\ldots\mu_{2q}}(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\beta_ {1}\delta^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}})\ldots(R^{\mu_{2q-1}\mu_{2q}}_{\nu _{2q-1}\nu_{2q}}+\beta_{q}\delta^{\mu_{2q-1}\mu_{2q}}_{\nu_{2q-1}\nu_{2q}})=0, \tag{18}\]
where \(q\in[1\ldots[(d-1)/2]]\) depends on the particular Lovelock gravity considered. It must be stressed that \(\beta_{i}\)'s coefficients cannot be obtained from the \(\alpha_{p}\)'s in general for \(d\geq 9\)[37]. Fortunately, see below, the cases of interest can be analyzed without much ado until 8 dimensions.
Before proceeding it is worth to stress that among the trivial/ground state solutions of Eqs.(18), namely \(R^{\mu\nu}_{\phantom{\mu\nu}\alpha\beta}=-\beta_{i}\delta^{\mu\nu}_{\alpha\beta}\), only the subset \(\beta_{i}>0\) is of interest in this work, as those are locally AdS spaces. \(\beta_{i}<0\) and \(\beta_{i}=0\), represent the locally de Sitter and flat solutions. Unfortunately, their analyses cannot be extrapolated from the one presented here. On top of that, there are also potentially (trivial) nonphysical solutions to Eq.(18), since starting from \(\forall\alpha_{p}\in\mathbb{R}\), in general, some of the \(\beta_{i}\)'s might be complex numbers with non-vanishing imaginary parts.
## IV Higher order AdS equations
In order to simplify the analysis it will set \(\beta_{i}=l^{-2}>0\) to have the familiar form
\[\lim_{x\rightarrow\partial M_{\infty}}R^{\mu\nu}_{\phantom{\mu\nu}\alpha\beta }\rightarrow-\frac{1}{l^{2}}\delta^{\mu\nu}_{\alpha\beta}. \tag{19}\]
In general, some of the \(\beta_{i}\) can be repeated and thus EOM can present degeneration around an AdS ground state. This has been studied previously in a different way in [14] for spaces with constant curvature transversal sections. To address this situation is useful to reshape the EOM into the form
\[({\cal E}^{(q,k)}_{d})^{\alpha}_{\beta}=\alpha_{q}\left(R+\frac{\delta}{l^{2}} \right)^{\mu_{1}\ldots\mu_{2k}}_{\nu_{1}\ldots\nu_{2k}}\left(\sum_{j=0}^{q-k} K_{j}\left((2)^{2(q-k-j)}\delta^{\alpha\nu_{1}\ldots\nu_{2k+2j}}_{\beta\mu_{1} \ldots\mu_{2k+2j}}R^{\mu_{2k+1}\mu_{2k+2j}}_{\nu_{2k+2}\nu_{2k+2}}\ldots R^{ \mu_{2k+2j-1}\nu_{2k+2j}}_{\nu_{2k+2j}-1}\right)\right)=0, \tag{20}\]
where \(\{K_{j}\}\) is a set of dimensional constants and
\[\left(R+\frac{\delta}{l^{2}}\right)^{\mu_{1}\ldots\mu_{2k}}_{\nu_{1}\ldots\nu _{2k}}=\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1} \mu_{2}}_{\nu_{1}\nu_{2}}\right)\ldots\left(R^{\mu_{2k-1}\mu_{2k}}_{\nu_{2k-1 }\nu_{2k}}+\frac{1}{l^{2}}\delta^{\mu_{2k-1}\mu_{2k}}_{\nu_{2k-1}\nu_{2k}} \right).\]
It can be noticed that the degeneration and the AdS asymptotia of the solutions are both manifest in Eq.(20). It is worth mentioning that, unlike for the set \(\{\beta\}\), see [38], there is a well-defined relation between \(\alpha_{p}\) and \(K_{i}\) given by
\[\alpha_{p}=\alpha_{q}\frac{1}{d-2p}\sum_{i=0}^{[d/2]-k}\binom{k}{p-i}K_{i}. \tag{21}\]
for \(p\leq q\) and \(\alpha_{p}=0\) for \(p>q\).
### The static ansatz
The use of the static ansatz displayed above, see Eq.(8), simplifies the equations significatively. For instance, this implies that \(L_{p}\), Eq.(3), can be written as
\[\int L_{p} = \int dt\wedge dr\left(\frac{d^{2}}{dr^{2}}\sum_{q=0}^{p}{p\choose q }(-f)^{p-q}c(q)\left(\frac{r}{l}\right)^{d-2p}\right) \tag{22}\] \[= \int dt\left.\left(\frac{d}{dr}\sum_{q=0}^{p}{p\choose q}(-f)^{p- q}c(q)\left(\frac{r}{l}\right)^{d-2p}\right)\right|_{r=r_{+}}^{r_{\max}},\]
with the \(c(q)\) are given by Eq.(10). Since it is satisfied by definition that \(f(r_{+})=0\) then
\[\left.\left(\frac{d}{dr}\sum_{q=0}^{p}{p\choose q}(-f)^{p-q}c(q)\left(\frac{r} {l}\right)^{d-2p}\right)\right|_{r=r_{+}}=c(p-1)p\frac{df(r_{+})}{dr}\left( \frac{r_{+}}{l}\right)^{d-2p}+c(p)\frac{d-2p}{l}\left(\frac{r_{+}}{l}\right)^ {d-2p-1}\]
This result gives rise to the operational definition of
\[P(r_{+}) = \beta\sum_{p=0}^{[(d-2)/2]}c(p-1)p\frac{df(r_{+})}{dr}\left(\frac {r_{+}}{l}\right)^{d-2p}+c(p)\frac{d-2p}{l}\left(\frac{r_{+}}{l}\right)^{d-2p-1}\] \[= \sum_{p=0}^{[(d-2)/2]}4\pi c(p-1)p\left(\frac{r_{+}}{l}\right)^{d -2p}+\beta c(p)\frac{d-2p}{l}\left(\frac{r_{+}}{l}\right)^{d-2p-1}\]
where \(\beta=4\pi\left(\frac{df(r_{+})}{dr}\right)^{-1}\) is the Euclidean period. \(c(-1)=0\) by definition.
The equations of motion (20) can be also written in a relatively simple form. For instance,
\[(\mathcal{E}_{d}^{(q,k)})_{0}^{0}\sim\alpha_{q}\sum_{i=0}^{q-k}K_{i}\left( \sum_{s=0}^{k}\sum_{t=0}^{i}c(s+t){i\choose t}{k\choose s}\frac{d}{dr}\left( \left(\frac{r^{2}}{l^{2}}-f(r)\right)^{k-s}(-f(r))^{i-t}\left(\frac{r}{l} \right)^{d-2k-2i-1}\right)\right). \tag{23}\]
The rest of the components present a similar, and compatible, structure. Upon direct integration of these EOM is obtained
\[\alpha_{q}\sum_{i=0}^{q-k}K_{i}\left(\sum_{s=0}^{k}\sum_{t=0}^{i}c(s+t){i \choose t}{k\choose s}\left(\left(\frac{r^{2}}{l^{2}}-f(r)\right)^{k-s}(-f(r ))^{i-t}\left(\frac{r}{l}\right)^{d-2k-2i-1}\right)\right)=C, \tag{24}\]
with \(C\) an arbitrary integration constant.
A noteworthy feature of these equations (24) is that their higher power can only increase each time a new odd dimension is reached [39]. Therefore, the highest power on \(f(r)\), see Eq.(24), of this equation in even dimensions must coincide with one of the odd dimensions below.
Naively it seems that obtaining \(f(r)\) would only require solving Eq.(24). Unfortunately, \(f(r)\) can only be obtained **algebraically** if the order of Eq.(24) is lower than \(5\), _i.e._, if and only if \(q\leq 4\). This restricts the general case to dimensions lower than \(9\). It must be emphasized that this does not mean the lack of solutions for \(d\geq 9\), but that solution can only be obtained for particular sets of coefficients. Because of this, in what follows, only the five and seven-dimensional cases, and some of their extensions, will be discussed in detail. The even dimensional \(d\leq 8\) cases will be omitted because they are direct from the odd-dimensional cases for \(d\leq 7\). As mentioned above, this is due to their corresponding equations, see Eq.(24), which contain the same powers in \(f(r)\) and only differ in the power of \(r\).
Five dimensions
Before continuing it could be useful to recall that in \(d=5\) there are three Lovelock theories to consider. Their corresponding equations of motion are given by
\[K_{0}\alpha_{1}\delta^{\alpha\nu_{1}\nu_{2}}_{\beta\mu_{1}\mu_{2}} \left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1}\mu_{2} }_{\nu_{1}\nu_{2}}\right) = 0 \tag{25}\] \[\alpha_{2}\delta^{\alpha\nu_{1}\ldots\nu_{4}}_{\beta\mu_{1}\ldots \mu_{4}}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1 }\mu_{2}}_{\nu_{1}\nu_{2}}\right)\left(K_{1}R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{3 }}+K_{0}\delta^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}\right) = 0\] (26) \[K_{0}\alpha_{2}\delta^{\alpha\nu_{1}\ldots\nu_{4}}_{\beta\mu_{1} \ldots\mu_{4}}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta ^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}\right)\left(R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_ {4}}+\frac{1}{l^{2}}\delta^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}\right) = 0 \tag{27}\]
Here Eq.(25 are the EOM of \(5d\) general relativity. The Eqs.(26) correspond to the case EH action plus a general Gauss-Bonnet term, already discussed in section II. Finally, it can be also recognized that the Eqs.(27) are the EOM of Chern-Simons gravity [40].
As mentioned before, the intention is to analyze the solutions along the branches whose asymptotic behavior is given by \(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}\rightarrow-\frac{1}{l^{2}}\delta^{\mu_{1} \mu_{2}}_{\nu_{1}\nu_{2}}\) for nonconstant curvature transverse sections. As discussed previously, in \(d=5\) Eq.(12) is the solution to Eq.(26) with the desired asymptotia,
\[\lim_{r\rightarrow\infty}f(r)\sim\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\frac {C}{r^{2}},\]
but has no restrictions on the values of \(c(q)\) in \(d=5\) due to \(c(q)=0\) for \(q>1\). As was computed above, the energy for this case is given by equation (14).
### Chern Simons
In five dimensions, however, there is also Chern Simons gravity, see [13], which is not covered by the discussion in section II nor has been discussed so far in the literature for nonconstant transversal curvature for \(d=5\). Its solution can be obtained from the relation (24) which in this case is
\[\left(\frac{r^{2}}{l^{2}}-f(r)\right)c(1)+\frac{1}{2}\left(\frac{r^{2}}{l^{2} }-f(r)\right)^{2}c(0)=M, \tag{28}\]
where \(M\) is an integration constant. The solution is given by
\[f(r)=\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\sqrt{\left(\frac{c(1)}{c(0)} \right)^{2}-\frac{2M}{c(0)}}. \tag{29}\]
The process of regularization to compute the energy can be carried out following the standard procedure depicted in the appendix B and references therein. The result is given by
\[Q\left(\partial_{t}\right)=K_{0}Ml^{2}-K_{0}\frac{c(1)^{2}}{2c(0)}l^{2}=Ml^{2}+ E_{\rm vacuum}, \tag{30}\]
where the _dynamical mass_, \(Ml^{2}\), and the energy of the vacuum \(E_{\rm vacuum}\) has been split. The presence of vacuum energy is a known fact of AdS gravity, see Ref. [36].
It must be stressed that, as previously mentioned for GR and GB-GR, neither the regulation process nor the finiteness of the energy, impose any restriction on the values of \(c(1)\) and \(c(0)\).
## VI Seven and higher dimensions
In seven dimensions there are a significantly larger number of cases to consider. These are
\[K_{0}\alpha_{1}\delta^{\alpha\nu_{1}\nu_{2}}_{\beta\mu_{1}\mu_{2}} \left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1}\mu_{2}}_ {\nu_{1}\nu_{2}}\right) = 0 \tag{31}\] \[\alpha_{2}\delta^{\alpha\nu_{1}...\nu_{4}}_{\beta\mu_{1}...\mu_{4 }}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1}\mu_{ 2}}_{\nu_{1}\nu_{2}}\right)\left(K_{1}R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{3}}+K_{0 }\delta^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}\right) = 0\] (32) \[K_{0}\alpha_{2}\delta^{\alpha\nu_{1}...\nu_{4}}_{\beta\mu_{1}... \mu_{4}}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1 }\mu_{2}}_{\nu_{1}\nu_{2}}\right)\left(R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}+ \frac{1}{l^{2}}\delta^{\mu_{1}\mu_{2}}_{\nu_{3}\nu_{4}}\right) = 0\] (33) \[\alpha_{3}\delta^{\alpha\nu_{1}...\nu_{4}}_{\beta\mu_{1}...\mu_{ 4}}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1} \mu_{2}}_{\nu_{1}\nu_{2}}\right)\left(K_{2}R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}} R^{\mu_{5}\mu_{6}}_{\nu_{5}\nu_{6}}+K_{1}R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}} \delta^{\mu_{5}\mu_{6}}_{\nu_{5}\nu_{6}}+K_{0}\delta^{\mu_{3}\mu_{4}}_{\nu_{3} \nu_{4}}\delta^{\mu_{5}\mu_{6}}_{\nu_{5}\nu_{6}}\right) = 0\] (34) \[\alpha_{3}\delta^{\alpha\nu_{1}...\nu_{4}}_{\beta\mu_{1}...\mu_{ 4}}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_{1} \mu_{2}}_{\nu_{1}\nu_{2}}\right)\left(R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}+ \frac{1}{l^{2}}\delta^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}\right)\left(K_{1}R^{ \mu_{4}\mu_{6}}_{\nu_{5}\nu_{6}}+K_{0}\delta^{\mu_{5}\mu_{6}}_{\nu_{5}\nu_{6}}\right) = 0\] (35) \[K_{0}\alpha_{3}\delta^{\alpha\nu_{1}...\nu_{4}}_{\beta\mu_{1}... \mu_{4}}\left(R^{\mu_{1}\mu_{2}}_{\nu_{1}\nu_{2}}+\frac{1}{l^{2}}\delta^{\mu_ {1}\mu_{2}}_{\nu_{1}\nu_{2}}\right)\left(R^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}+ \frac{1}{l^{2}}\delta^{\mu_{3}\mu_{4}}_{\nu_{3}\nu_{4}}\right)\left(R^{\mu_{ 5}\mu_{6}}_{\nu_{5}\nu_{6}}+\frac{1}{l^{2}}\delta^{\mu_{5}\mu_{6}}_{\nu_{5}\nu _{6}}\right) = 0 \tag{36}\]
It is worth mentioning, for clarity, that Eq.(31) represents the Einstein theory as Eq(25) also does in \(d=5\). Equations (32) and (33) correspond to the EGB gravity with different couplings, with Eqs.(32) representing the GB gravity already discussed in section II. Finally, Equations (34), (35), and (36) correspond to cubic gravity, with Eqs.(36) representing Chern Simons gravity in seven dimensions. These were initially discussed for constant transversal curvature in [13]. It is worth stressing, however, that this occurs only in \(d=7\). In higher dimensions, these EOMs only correspond to a 3-fold degenerated cubic gravity.
Before returning to the discussion of the solutions with nonconstant curvature transverse sections, it is worth recalling that for solutions with constant curvature transverse sections is known that
* the three Eqs.(31,32) and (34) share a branch satisfying Eq.(19) and whose solutions _agree_ up to order \(O(r^{-4})\). This will be called the EH branch or the \(k=1\) case, and
* Eqs.(33) and (35) have solutions that share a branch, of second order, satisfying Eq.(19) and have the same asymptotic behavior up to order \(O(r^{-1})\). This corresponds to the \(k=2\) solution discussed in [14] for constant transversal curvature.
In what follows the analysis of the nonconstant curvature transverse sections will be discussed. Specifically, some new solutions will be classified according to their degeneration on their effective cosmological constants. Afterward, the constraints on the \(c(i)\)'s necessary to ensure sound action principles for each of the solutions will be studied.
### Einstein like solutions
Firstly, the solutions that share the asymptotic behavior of General Relativity in \(d=7\) will be analyzed. These will be nicknamed Einstein-like Solutions and the corresponding EOM are 32) and (34) respectively. Later, it will be determined the constraints for these solutions to have sound action principles.
#### vi.1.1 **Gauss Bonnet in seven dimensions**
The solution to Gauss-Bonnet gravity, Eq.32), sharing one branch with EH can be extracted from the algebraic expression Eq.(24), _i.e._,
\[K_{0}\left(c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)+c(1)\right) \frac{r^{4}}{l^{4}} +\] \[K_{1}\left(-c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)f(r)+c(1)\left( \frac{r^{2}}{l^{2}}-2f(r)\right)+c(2)\right)\frac{r^{2}}{l^{2}} = M.\]
The solution has already been discussed in section II and therefore there is not much ado but to recall that this solution only has finite conserved charges and action principle provided \(c(1)^{2}=c(2)c(0)\) is satisfied. As mentioned in section II this constraint is also mandatory for higher dimensions solutions of GB gravity as well.
Cubic gravity in seven dimensions
In \(d=7\) one can consider as well Eqs. (34), which corresponds to \(q=3\), a cubic gravity that shares one branch with EH. As before, the solution can be extracted from Eq.(24), yielding the cubic equation for \(f(r)\),
\[K_{0}\left(c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)+c(1)\right) \frac{r^{4}}{l^{4}} +\] \[K_{1}\left(-c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)f(r)+c(1) \left(\frac{r^{2}}{l^{2}}-2f(r)\right)+c(2)\right)\frac{r^{2}}{l^{2}} +\] \[K_{2}\left(c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)f(r)^{2}-2c(1) \left(\frac{r^{2}}{l^{2}}-\frac{1}{2}f(r)\right)f(r)+c(2)\left(\frac{r^{2}}{l^{ 2}}-3f(r)\right)+c(3)\right) = M.\]
It is noteworthy to mention that the integration constant has been split into \(M\) and \(K_{2}c(3)\). In \(d=7\) this is merely an artifact that maintains the general form of \(d>7\) where \(K_{2}c(3)\) has real meaning.
In this case, even though the exact expression of \(f(r)\) can be obtained explicitly that will be omitted because that is cumbersome and shed no light on the discussion. Fortunately, the asymptotic form of \(f(r)\) contains enough information to address the problem of finiteness. That asymptotic form is given by
\[\lim_{r\rightarrow\infty}\sim\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}+(c(1)^{2}- c(2)c(0))\frac{A}{r^{2}}+\frac{B}{r^{4}}+\ldots \tag{37}\]
Here \(B\) is a constant depending on \(K_{i}\)'s. \(A\) is proportional to \(M\) and a function of \(c(i)\)'s and \(K_{i}\)'s.
One can observe that order \(r^{-2}\) spoils the corresponding EH asymptotia, namely \(r^{d-3}\sim r^{-4}\), unless \(c(1)^{2}=c(2)c(0)\). This constraint is reinforced by checking that the conserved charges
\[\lim_{r\rightarrow\infty}Q(\partial_{t}) \approx \left(c(1)^{2}-c(0)\,c(2)\right)\left(-c(0)\left(K(0)-K(1)+K(2) \right)A+\frac{l^{2}\left(3K(0)-7K(1)+11K(2)\right)}{c(0)^{2}}\right)r^{3}\] \[- Mc(0)\left(K(0)-K(1)+K(2)\right)\] \[+ \frac{c(1)\,l^{4}\left(-3K(0)+7K(1)-35K(2)\right)\left(3c(0)\,c(2 )-2c(1)^{2}\right)}{24c(0)^{2}}\]
and the action principle,
\[I = \lim_{r\rightarrow\infty}\beta\left(c(1)^{2}-c(0)\,c(2)\right) \left(\frac{\left(K(0)-K(1)+K(2)\right)l^{2}}{4c(0)}\right)r^{2}\] \[+ -\frac{3\beta l^{4}\left(\left(-\frac{2K(0)}{3}+\frac{14K(1)}{9}- \frac{22K(2)}{9}\right)c(1)^{3}+c(0)\,c(2)\left(K(0)-\frac{7K(1)}{3}+\frac{11 K(2)}{3}\right)c(1)+\frac{8c(3)c(0)^{2}K(2)}{3}\right)}{8c(0)^{2}}\] \[- P(r_{+}),\]
are both finite provided \(c(1)^{2}=c(2)c(0)\).
To finish this discussion it must be emphasized that in this case there is no restriction on \(c(3)\), as occurred for \(c(2)\) in \(d=5\).
#### iii.2.3 Higher Dimensions
The higher dimensional (\(d>7\)) extension of the solution above can be done directly from Eq.(24). Its asymptotic form, for \(d>7\), is given by
\[\lim_{r\rightarrow\infty}f(r)\sim\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}+(c(1) ^{2}-c(2)c(0))\frac{A}{r^{2}}+\left((c(3)-\frac{c(1)^{3}}{c(0)^{2}}\right) \frac{B}{r^{4}}+\frac{D}{r^{d-3}}+\ldots \tag{38}\]
It can be observed that this expression differs from the expected EH behavior (\(r^{-(d-3)}\)) by two terms, seemingly implying that for \(d>7\) it must be satisfied additionally \(c(3)c(0)^{2}=c(1)^{3}\).
It is direct, but cumbersome, to check that the finiteness of the conserved charges and action principle requires \(c(1)^{2}=c(2)c(0)\) and \(c(3)c(0)^{2}=c(1)^{3}\).
Final Comments
It is worth noticing that each of the conditions mentioned above is trivially satisfied if a constant curvature transverse section were considered. Furthermore, it is not clear that any other than a constant curvature manifold can satisfy them.
The generality of the constraints \(c(1)^{2}=c(2)c(0)\) and \(c(3)c(0)^{2}=c(1)^{3}\), being valid for GR-GB and cubic gravity in \(d=7\) may be foreseen, within the Einstein branch, the rise of further constraints as the higher order on \(R\) in the Lovelock theory increases. In this way, quartic gravity would require a constraint of \(c(4)\) for \(d>9\), and so on. Unfortunately, in general, this cannot be confirmed analytically because the equation for \(f(r)\) cannot be solved for powers higher than four.
### Second order degenerated solutions
Solutions presenting a second-order degeneration on the ground state, _i.e._, the solutions of (33) will be called second-order degenerated solutions. Before proceeding it is worth recalling in the case of a constant curvature transverse section the slope is given by \(O(r^{-1})\)[14].
#### iv.2.1 Seven Dimensions
The static solution to Eqs. (33) can be obtained, see Eq.(24), by solving the algebraic relation
\[K_{0}\left(c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)^{2}+2c(1)\left(\frac{r^ {2}}{l^{2}}-f(r)\right)+c(2)\right)\frac{r^{2}}{l^{2}}=M. \tag{39}\]
The physical solution is given by
\[f(r)=\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\sqrt{\frac{Ml^{2}}{K_{0}c(0)r^{2 }}+\frac{1}{c(0)^{2}}(c(1)^{2}-c(2)c(0))} \tag{40}\]
This solution presents two clear cases, \(c(1)^{2}=c(2)c(0)\) and \(c(1)^{2}\neq c(2)c(0)\).
* For \(c(1)^{2}=c(2)c(0)\) the solution is given by \[f(r)=\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\frac{1}{r}\sqrt{\frac{Ml^{2}}{K_{ 0}c(0)}}\] which shares the slope \(r^{-1}\) of the constant curvature transverse section solution [14; 15]. The conserved charge is finite and given by \[Q(\partial_{t})=Ml^{4}+l^{4}\frac{K_{0}c(1)^{3}}{6c(0)^{2}},\] (41) connecting \(M\), the integration constant, with the mass/energy of the solution. In the same fashion, it is direct to check that the action principle is also finite and given by \[I=\beta\left(l^{4}\frac{K_{0}c(1)^{3}}{6c(0)^{2}}\right)-P(r_{+}),\] where \(P(r_{+})\) is finite and \(\beta\) is the inverse of the Euclidean period.
* In comparison the case above, the case \(c(1)^{2}\neq c(2)c(0)\) presents some unique features. These can be observed in \[\lim_{r\rightarrow\infty}f(r)\sim\frac{c(1)}{c(0)}-\sqrt{\frac{c(1)^{2}-c(2) c(0)}{c(0)^{2}}}+\frac{r^{2}}{l^{2}}-\frac{M}{r^{2}}\frac{l^{2}}{2K_{0}\sqrt{c(1) ^{2}-c(2)c(0)}}+M^{2}\frac{C_{4}}{r^{4}}+\ldots.,\] (42)
where \(C_{4}\) is a constant depending on \((c(1)^{2}-c(2)c(0))^{-3/2}\). One can notice that this form not only misses completely the \(O(r^{-1})\) but this has been replaced by an order \(O(r^{-2})\). Surprisedly, this does not affect the existence of a finite conserved charge \(Q(\partial_{t})\), which is given by \[Q(\partial_{t})=Ml^{2}+Ev,\] (43) with \[Ev = \frac{K_{0}l^{4}}{3c(0)^{2}}\left(c(1)^{2}-c(2)c(0)\right)^{\frac {3}{2}}\] \[+ \frac{K_{0}l^{4}}{6c(0)^{2}}c(1)\left(3c(2)c(0)-2c(1)^{2}\right)\] establishing that \(M\), the integration constant in Eq.(39), is related with the mass/energy of the solution. Furthermore, \(Ev\) has a soft limit to the previous case. The action principle is finite as well and is given by \[I = \beta\left(\frac{K_{0}l^{4}}{3c(0)^{2}}\left(c(1)^{2}-c(2)c(0) \right)^{\frac{3}{2}}\right.\] \[+ \left.\frac{K_{0}l^{4}}{6c(0)^{2}}c(1)\left(3c(2)c(0)-2c(1)^{2} \right)\right)-P(r_{+})\]
#### iii.2.2 **Higher dimensions**
In dimensions \(d>7\), one can observe the same basic features observed in 7 dimensions, with the sole exception that the coefficients \(c(i)\), with \(i>2\), play a role in the renormalization processes. The general solution is given by
\[f(r)=\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\sqrt{\frac{Ml^{d-5}}{K_{0}c(0)r^{d -5}}+\frac{1}{c(0)^{2}}(c(1)^{2}-c(2)c(0))}\]
As before, the asymptotic behavior splits according \(c(1)^{2}=c(2)c(0)\) or not.
* For \(c(1)^{2}=c(2)c(0)\) the solution is given by \[f(r)=\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\sqrt{\frac{Ml^{d-3}}{K_{0}c(0)r^{ d-3}}},\] which matches the asymptotia of the constant curvature transverse section case. For this solution, though it is a bit long, it can be demonstrated that it has a finite conserved charge \[Q(\partial_{t})=Ml^{2}+\frac{l^{d-3}K_{0}c\left(1\right)^{(d-1)/2}}{2^{(d-1)/ 2}c(0)^{(d-3)/2}}\] The action principle is also finite and given by \[I=\beta\frac{l^{d-3}K_{0}c(1)^{(d-1)/2}}{2^{(d-1)/2}c(0)^{(d-3)/2}}-P(r_{+}).\] where \(\beta\) is the Euclidean period. These results only confirm, as expected, that \(c(1)^{2}=c(2)c(0)\) yields an analogous situation as the corresponding constant curvature constant transverse section solution in [15].
* The case \(c(1)^{2}\neq c(2)c(0)\) has the different asymptotic behavior given by \[\lim_{r\rightarrow\infty}f(r)=\frac{c(1)}{c(0)}-\sqrt{\frac{c(1)^{2}-c(2)c(0) }{c(0)^{2}}}+\frac{r^{2}}{l^{2}}-\frac{M}{2K_{0}\sqrt{c(1)^{2}-c(2)c(0)}}\left( \frac{l^{d-5}}{r^{d-5}}\right)+M^{2}C_{d-2}\left(\frac{l^{d-3}}{r^{d-3}} \right)+\ldots.\]
Regardless of this completely different behavior, still the associated conserved charge is finite providing some relations between the \(c(i)\), \(i>2\), are satisfied. Remarkably, these relations cannot be satisfied by a constant curvature transverse section, and therefore these solutions represent a completely new family of well-defined solutions. If those restrictions are satisfied then the conserved charge is given by
\[Q(\partial_{t})=Ml^{d-3}+E_{v}\]
where \(E_{v}\) is a finite, but cumbersome, function of \(c(1)\) and \(c(2)\). For instance, in \(d=11\) the finiteness requires that
\[c(3) = -\frac{2c(0)^{2}\,\sqrt{\frac{c(1)^{2}-c(2)c(0)}{c(0)^{2}}}\,c(2) -2c(1)^{2}\,c(0)\,\sqrt{\frac{c(1)^{2}-c(2)c(0)}{c(0)^{2}}}-3c(1)\,c(0)\,c(2)+2 c(1)^{3}}{c(0)^{2}}\] \[c(4) = \frac{-8\sqrt{\frac{c(1)^{2}-c(2)c(0)}{c(0)^{2}}}\,c(2)\,c(1)-3c( 2)^{2}}{c(0)}+\frac{4c(1)^{2}\left(2c(1)\,\sqrt{\frac{c(1)^{2}-c(2)c(0)}{c(0)^ {2}}}+3c(2)\right)}{c(0)^{2}}-\frac{8c(1)^{4}}{c(0)^{3}}\]
which yields
\[E_{v} = \left(\left(-\frac{15c(1)}{32c(0)^{2}}+\frac{\sqrt{c(1)^{2}-c(2) \,c(0)}}{8c(0)^{2}}\right)c(2)^{2}\right.\] \[+ \left.\left(\frac{5c(1)^{3}}{4c(0)^{3}}-\frac{7\sqrt{c(1)^{2}-c(2 )\,c(0)}\,c(1)^{2}}{8c(0)^{3}}\right)c(2)\right.\] \[- \left.\frac{3c(1)^{5}}{4c(0)^{4}}+\frac{3\sqrt{c(1)^{2}-c(2)\,c( 0)}\,c(1)^{4}}{4c(0)^{4}}\right)K(0)\,l^{8}\]
By the same token, in \(d=11\) the action principle is also finite and given by
\[I = \beta\left(\left(-\frac{15c(1)}{32c(0)^{2}}+\frac{\sqrt{c(1)^{2}- c(2)\,c(0)}}{8c(0)^{2}}\right)c(2)^{2}\right.\] \[+ \left.\left(\frac{5c(1)^{3}}{4c(0)^{3}}-\frac{7\sqrt{c(1)^{2}-c(2 )\,c(0)}\,c(1)^{2}}{8c(0)^{3}}\right)c(2)\right.\] \[- \left.\frac{3c(1)^{5}}{4c(0)^{4}}+\frac{3\sqrt{c(1)^{2}-c(2)\,c( 0)}\,c(1)^{4}}{4c(0)^{4}}\right)K(0)\,l^{8}\] \[- P(r_{+})\]
### Second Order like Solutions in seven dimensions
It will be understood by _second order-like degenerated solutions_ those with the same asymptotic behavior of second-order degeneration, namely the previous case, but whose equations of motion have \(R^{3}\), or higher, powers on the Riemann tensor ( _i.e_ at least cubic gravity) and have nonconstant transversal curvature. This case is described by the equation (35). This can occur only for \(d\geq 7\). On the other hand, unfortunately, in general, the exact expression of \(f(r)\) cannot be obtained for \(d>8\), leaving only \(d=7,8\) as sound options. In this section only cubic gravity with \(k=2\) in \(d=7\) will be explored as \(d=8\) with \(k=2\) essentially has the features. In \(d=7\) the solution satisfies the algebraic equation, see Eq.(24),
\[K_{0}\left(c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)^{2}+2c(1) \left(\frac{r^{2}}{l^{2}}-f(r)\right)+c(2)\right)\frac{r^{2}}{l^{2}} + \tag{45}\] \[K_{1}\left(-c(0)\left(\frac{r^{2}}{l^{2}}-f(r)\right)^{2}f(r)-2c (1)\left(\frac{r^{2}}{l^{2}}-f(r)\right)f(r)+c(1)\left(\frac{r^{2}}{l^{2}}-f( r)\right)^{2}+c(2)\left(2\frac{r^{2}}{l^{2}}-3f(r)\right)+c(3)\right) = Z_{0}.\]
As in some previous cases, here the integration constants have been split into \(K_{1}c(3)\) and \(Z_{0}\) just to preserve the form of the solution for \(d>7\). It must be emphasized that the solution exists for \(d>8\) provided the higher power of the Riemann tensor is not increased.
Unfortunately, the exact form of \(f(r)\), in this case, is remarkably cumbersome and thus does not provide any hindsight. However, its asymptotic form provides enough information to perform the analysis. As previously, there is a split between the cases \(c(1)^{2}=c(2)c(0)\) and \(c(1)^{2}\neq c(2)c(0)\).
* For \(c(1)^{2}=c(2)c(0)\) the asymptotic form is given by \[\lim_{r\rightarrow\infty}f(r)\sim\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-\left( \frac{l}{r}\right)\left(\sqrt{\frac{M}{c(0)\left(K\left(0\right)-K\left(1 \right)\right)}}\right)+\ldots\] (46) It is direct to check that this asymptotic form yields a finite action principle and finite conserved charge. One can notice that the presence of \(c(3)c(0)^{2}-c(1)^{3}\), previously noticed in subsection VI.1 as a constraint to ensure an Einstein-like behavior, in this case however it only shifts the value of the effective mass as \[Q(\partial_{t})=M\,l^{4}+\frac{\left(K(0)-7K(1)\right)l^{4}c(1)^{3}}{6c(0)^{2}}\] (47) The action principle is given by \[I=\beta\left(\frac{\left(c(1)^{3}(K(0)-K(1))-6c(3)\,c(0)^{2}\,K(1)\right)l^{4 }}{6c(0)^{2}}\right)-P(r_{+})\] (48) It is direct to notice that this reproduces the features of the constant curvature transverse section solution.
* For \(c(1)^{2}\neq c(2)c(0)\) the asymptotic form is given by \[\lim_{r\rightarrow\infty}f(r)\sim\frac{c(1)}{c(0)}-\sqrt{\frac{c(1)^{2}-c(2)c (0)}{c(0)^{2}}}+\frac{r^{2}}{l^{2}}+\left(\frac{l}{r}\right)^{2}C_{2}(M,c(i),K _{i})+\left(\frac{l}{r}\right)^{4}C_{4}(M,c(i),K_{i})\ldots\] (49) where \[C_{2}(Z_{0},c(i),K_{i})=\frac{1}{2\kappa^{2}\sqrt{c(0)}}\left(\Upsilon+\frac{ Z_{0}}{2\Xi^{3}}\right)^{2}\] (50) with \[\kappa = K_{1}-K_{0},\] \[\Xi^{4} = c(2)-\frac{c(1)^{2}}{c(0)},\text{ and }\] \[\Upsilon = \frac{\Xi^{2}}{\sqrt{c(0)}}-\frac{3}{2}\left(\Xi-\frac{c(1)}{c(0 )\Xi^{3}}\right).\] In Eq.(49), \(C_{4}(Z_{0},c(i),K_{i})\) is also a function depending on the constants \(Z_{0},c(0),c(1),c(2),K_{0}\) and \(K_{1}\). In this case, the conserved charge is given \[Q(\partial_{t})=Ml^{4}+E_{v}\] (51) with \[M=\frac{\left(2\Upsilon\Xi^{3}K(1)+Z_{0}\right)^{2}}{4\kappa\,\Xi^{4}}.\] One can notice, see Eq.(45), that defining \(M\) can be done in general and adjusted by fine tuning \(Z_{0}\). Here, \[E_{v}=\left(-\frac{c(2)\,l^{4}}{3\sqrt{c(0)}}+\frac{c(1)^{2}\,l^{4}}{3c(0)^{ \frac{3}{2}}}\right)\Xi^{2}+\frac{c(1)\,c(2)\,l^{4}}{2c(0)}-\frac{c(1)^{3}\,l^ {4}}{3c(0)^{2}}\] \[I=\beta\left(\frac{l^{6}\left(c(2)\,c(0)-c(1)^{2}\right)^{2}K(0)^{2}}{3 \sqrt{l^{4}K(0)^{2}\left(c(1)^{2}-c(2)\,c(0)\right)}c(0)^{2}}+\frac{\left( \frac{3c(0)c(1)c(2)}{2}-c(1)^{3}\right)l^{4}K(0)}{3c(0)^{2}}\right)-P(r_{+}).\] (52)
### Chern Simons in seven dimensions
Besides the two cases above, in \(d=7\) a cubic Lagrangian could correspond to Chern-Simons gravity. In this case, the EOM is given by Eq.(36) which reduces the static ansatz to
\[\left(\frac{r^{2}}{l^{2}}-f\right)^{3}c(0)+6\left(\frac{r^{2}}{l^{2}}-f\right)^ {2}c\left(1\right)+6\left(\frac{r^{2}}{l^{2}}-f\right)c(2)=C\cdot c(0), \tag{53}\]
where \(C\,c(0)\) is an integration constant related with the _energy_ of the solution. This yields
\[f(r)=\frac{c(1)}{c(0)}+\frac{r^{2}}{l^{2}}-C_{2}. \tag{54}\]
The exact form of \(C_{2}=C_{2}(c(q),C)\) is not illustrative, but can be reckoned from solving
\[-\left(\frac{c(1)}{c(0)}-C_{2}\right)^{3}c(0)+6\left(\frac{c(1)}{c(0)}-C_{2} \right)^{2}c(1)-6\left(\frac{c(1)}{c(0)}-C_{2}\right)c(2)=C\,c(0).\]
Eq.(54) essentially can be cast as a generalization of the constant curvature case discussed in [14]. It can be inferred that for Chern Simons gravity in \(d=7\) there are no constraints so that the solution with nonconstant transversal curvature has finite value for its conserved charge due to its energy being related to \(C_{2}\) and consequently for its action.
## VII Discussion
In this work it has been shown some new proper asymptotically AdS static black hole solutions with nonconstant curvature transverse sections. The analysis was carried out by establishing the conditions that yield the finiteness of the conserved charges, associated with the time symmetry, and the corresponding action principle. These solutions satisfy Lovelock gravities EOM given by Eq.(20), whose form can be sketched as
\[\left(R+\frac{e^{2}}{l^{2}}\right)^{k}\left(\sum_{s=0}^{q-k}K(s)R^{s}\left( \frac{e}{l}\right)^{2(q-k-s)}\right)=0,\]
with \(q<[(d-1)/2]\). Here \(q\) represents the highest power of the Riemann tensors in the Lagrangian, and \(d\) the dimension. Part of the analysis revealed that the solutions can be classified according to their degeneration around the AdS ground state, given by \(k\) in the equation above. In particular, solutions with the degenerations up third order are discussed in detail for \(d\leq 7\). The results can be summarized as follows
* For Chern Simons gravity, namely \(d=2n+1\) and (\(k=q=n\)), the situation is essentially identically to the constant curvature transverse section case with no constraint on the \(c(q)^{\prime}s\). The effect considering nonconstant curvature transverse sections is a modification in the mass/energy by a function of the corresponding \(c(0),c(1),\ldots c(n-1)\).
* For \((k,q)=(1,1)\), namely the solutions of the General Relativity, it is only required to know \(c(0)\) and \(c(1)\), and thus no further constraints on the values of \(c(q)\), for \(q>2\) arises. This grants GR a singular status where transverse sections are essentially unconstrained. This is a very appealing condition in the context of the AdS/CFT conjecture, as mentioned above.
* For \(k=1\) and \(1<q<[d/2]\) (called Einstein-like solutions), meaning theories with higher power of the Riemann tensor that have branches with a solution that could share the asymptotia of General Relativity, the finiteness of the conserved charges and the action principle impose an increasing number conditions, with \(q\), on the \(c(p)^{\prime}s\). For instance, \(q=2\) it is required that \[c(1)^{2}=c(2)c(0)\] in \(d>5\). In the same fashion, for \(d\geq 9\) and \(q\geq 3\) it is also required that \[c(3)c(0)^{2}=c(1)^{3}.\] These two conditions are satisfied by any constant curvature transverse sections. For higher orders in \(q\) and higher dimensions, one can foresee the rise of a set of conditions of the same fashion to be trivially satisfied by any constant curvature transverse sections.
* For \((k=2,q=2)\), called Second order degenerated solutions, the solutions split into the two disjointed cases \(c(1)^{2}=c(2)c(0)\) and \(c(1)^{2}\neq c(2)c(0)\). In the first case, the slope agrees with the one given by considering the constant curvature transverse section case and no further constraints arise. Remarkably, for \(c(1)^{2}\neq c(2)c(0)\) both the charges and action principle can be finite, but the slope of the solutions differ from the constant curvature transverse section case. This introduces a new family of solutions whose extension, for instance to stationary solutions, can yield some interesting new results.
* For \(k=2\) and \(2<q<[d/2]\), (called second-order like solutions), meaning those that share the asymptotia of the \((k,q)=(2,2)\), the analysis reveals that essential feature remains, for instance, the split between the behaviors of \(c(1)^{2}=c(2)c(0)\) and \(c(1)^{2}\neq c(2)c(0)\).
To finish it is worth mentioning just a few interesting open questions. First, the thermodynamic analysis seems straightforward, but the consequence of a non-constant curvature transverse section in the causal structure could require a thorough geometrical analysis. Second, the analysis of the thermodynamics requires addressing if it is possible to have vanishing temperature solutions and how this constrains the \(c_{q}\) coefficients. Unfortunately, this requires determining the specific form of \(f(r)\) which, as mentioned before, is not possible in general for \(d>8\). In fact, this is not an easy task as even the GB solution with a constant curvature transverse section requires a discussion [41]. Finally, as mentioned before, in this work only a particular method of regularization was used, and thus it is also an open question if another method could yield different constraints on the \(c(q)\) or any at all.
###### Acknowledgements.
This work was partially funded through FONDECYT-Chile 1220335. Milko Estrada is funded by the FONDECYT Iniciacion Grant 11230247.
## Appendix A Lovelock equations of motion
The variation of Eq,(4) is given by,
\[\delta_{0}\mathbf{L}=G_{f}\delta_{0}e^{f}+D(\delta_{0}w^{ab})\tau_{ab} \tag{10}\]
where
\[\tau_{ab}=\frac{\partial\mathbf{L}}{\partial R^{ab}}=\kappa\sum_{p=0}^{n-1}p \alpha_{p}\,\epsilon_{aba_{3}\dots a_{2n}}\left[(R)^{p-1}\left(\frac{e}{l} \right)^{2n-2p}\right]^{a_{3}\dots a_{2n}}. \tag{11}\]
and
\[G_{f}=\frac{\overleftarrow{\partial}\mathbf{L}}{\partial e^{f}}=\kappa\sum_{ p=0}^{n-1}(2n-2p)\alpha_{p}\,\epsilon_{a_{1}\dots a_{2n-1}f}\left[(R)^{p} \left(\frac{e}{l}\right)^{2n-2p-1}\right]^{a_{1}\dots a_{2n-1}} \tag{12}\]
Using the Stoke theorem the second term in Eq.(10) can be split such that obtaining the second field equation \(D(\tau_{ab})=0\) and the boundary term
\[\Theta(\delta_{0}w^{ab}w^{ab}e^{d})=\kappa\sum_{p=0}^{n-1}p\alpha_{p}\, \epsilon_{a_{1}\dots a_{2n}}\left[\delta_{0}w(R)^{p-1}\left(\frac{e}{l}\right) ^{2n-2p}\right]^{a_{1}\dots a_{2n}}. \tag{13}\]
It is worth mentioning that for general \(\alpha_{p}\), \(T^{a}=de^{a}+w^{a}_{\ b}e^{b}=0\) is the only solution for \(D(\tau_{ab})=0\), thus this formalism is equivalent to the metric formalism. However, for the special case of Chern-Simons, although \(T^{a}=0\) is a solution, it is not the most general one. See for instance [42].
## Appendix B Regularization
The renormalization process of Lovelock gravity in even dimensions (\(d\geq 4\)) is direct and can be accomplished by adding the corresponding Euler density with a suitable unitless constant. See for instance [29; 43]. Because of
this, and for simplicity, only the renormalization process in odd dimensions will be sketched. For further details, see, [20; 27; 36; 44]. Unlike even dimensions, in this case, the regulation process must be carried by a suitable boundary term at the asymptotic AdS region. For the horizon, as no divergencies can arise, no additional term is necessary to attain finiteness.
The variation of the Lovelock action on the shell can be written as:
\[\delta_{0}I_{LL}=\int_{\partial\mathcal{M}}l^{2n-1}\left(\sum_{p=0}^{n}p(-1)^{ 2n-2p+1}\alpha_{p}\right)\delta_{0}\omega R^{n-1}. \tag{10}\]
From this, it is straightforward to realize, as mentioned above, that there is not a proper set of boundary conditions that define \(\delta I_{LL}=0\) as \(R\) diverges in the asymptotically AdS region. This can be amended by the addition of the boundary term given by [27; 45]:
\[I_{R}=\int_{\partial\mathcal{M}_{\infty}}B_{2n}=\kappa\int_{\partial\mathcal{ M}_{\infty}}\int_{0}^{1}\int_{0}^{t}\left(Ke\left(\tilde{R}+t^{2}(K)^{2}+s^{2} \frac{e^{2}}{l^{2}}\right)^{n-1}\right)dsdt \tag{11}\]
where \(\tilde{R}\) and \(K\) stand for the Riemann two-form and extrinsic curvature one-form respectively of the boundary \(\partial\mathcal{M}_{\infty}=\mathbb{R}\times\partial\Sigma_{\infty}\). One must recall the Gauss Codazzi decomposition:
\[\tilde{R}^{ab}+((K)^{2})^{ab}\Big{|}_{\partial\mathcal{M}_{\infty}}=\left.R^{ ab}\right|_{\partial\mathcal{M}_{\infty}} \tag{12}\]
where \(R^{ab}\) is the Riemann two form of \(\mathcal{M}\). \(\kappa\) in Equation (11) stands for a constant to be determined. The variation of Equation (11) yields:
\[\delta_{0}I_{R} = \kappa\int_{\partial\mathcal{M}_{\infty}}\int_{0}^{1}\left(e \delta_{0}K-\delta eK_{0}\right)\left(\tilde{R}+t^{2}(K)^{2}+t^{2}\frac{e^{2} }{l^{2}}\right)^{n-1}dt\] \[+ \kappa n\int_{\partial\mathcal{M}_{\infty}}\int_{0}^{1}\left(e \delta_{0}K\left(\tilde{R}+(K)^{2}+t^{2}\frac{e^{2}}{l^{2}}\right)^{n-1} \right)dt\]
For an asymptotically local AdS space, as the boundary is approached, it is satisfied that \(e\delta_{0}K-\delta_{0}eK\to 0\) and \(\left.\delta K\rightarrow\delta\omega\right|_{\partial\mathcal{M}}\). The fundamental key for the computation, however, is the fact that \(e^{2}\rightarrow-l^{2}R\). Finally, these conditions allow us to express variation as:
\[\delta_{0}I_{R}=\kappa n\int_{\partial\mathcal{M}_{\infty}}e\delta_{0}KR^{n-1 }\left(\int_{0}^{1}\left(1-t^{2}\right)^{n-1}\right)dt. \tag{12}\]
In this way, the variation of \(I=I_{LL}+I_{R}\):
\[\delta_{0}I=\int_{\partial\mathcal{M}_{\infty}}\left(\delta_{0}K\left(\frac{e }{l}\right)R^{n-1}\right)\left(l^{2n-1}\sum_{p=0}^{n}p(-1)^{2n-2p+1}\alpha_{p} +nl\kappa\frac{\Gamma(n)\sqrt{\pi}}{2\Gamma\left(n+\frac{1}{2}\right)}\right) +\ldots \tag{13}\]
here, \(\ldots\) stands for the integral of Equation (10) on the horizon. This defines:
\[\kappa=\frac{2l^{2n-2}}{n}\left(\sum_{p=0}^{n}p(-1)^{2n-2p}\alpha_{p}\right) \frac{\Gamma\left(n+\frac{1}{2}\right)}{\Gamma(n)\sqrt{\pi}} \tag{14}\]
In doing this now, there is a proper action principle. The Noether charge, in this case, is given by:
\[Q(\xi)_{\infty} = \int_{\partial\Sigma_{\infty}}\left(I_{\xi}\omega\left(\sum_{p= 0}^{n}p\alpha_{p}R^{p-1}e^{2(n-p)+1}\right)\right.\] \[+ \left.\kappa I_{\xi}\left(\int_{0}^{1}\int_{0}^{t}Ke\left(\tilde{ R}+t^{2}(K)^{2}+s^{2}\frac{e^{2}}{l^{2}}\right)^{n-1}\right)dsdt\right)\]
The direct evaluation of this expression for \(\xi=\partial_{t}\) on the static spaces considered yields the final result.
To conclude this section, it is convenient to express the presymplectic form in terms of the regularized Noether charge and the variation of the action defined by the boundary term in Equation (108). This yields:
\[\begin{split}\hat{\delta}G(\xi)\Big{|}_{\infty}&=\ \int_{\partial\Sigma_{\infty}}\hat{\delta}Q(\xi)_{\infty}+I_{\xi}\left( \kappa\int_{0}^{1}(e\hat{\delta}K-\hat{\delta}eK)\left(\tilde{R}+t^{2}(K)^{2}+ t^{2}\frac{e^{2}}{l^{2}}\right)^{n-1}dt\right.\\ &\left.+2\kappa(n-1)\hat{\delta}l\int_{0}^{1}\int_{0}^{1}K\left( \frac{e}{l}\right)\left(\tilde{R}+t^{2}(K)^{2}+s^{2}\frac{e^{2}}{l^{2}}\right) ^{n-1}dsdt\\ &\left.-2\kappa(n-1)\hat{\delta}l\,\int_{0}^{1}\int_{0}^{1}K\left( \frac{e}{l}\right)^{3}\left(\tilde{R}+t^{2}(K)^{2}+s^{2}\frac{e^{2}}{l^{2}} \right)^{n-3}dsdt\right).\end{split} \tag{109}\]
|
2301.05395 | MaNLP@SMM4H22: BERT for Classification of Twitter Posts | The reported work is our straightforward approach for the shared task
Classification of tweets self-reporting age organized by the Social Media
Mining for Health Applications (SMM4H) workshop. This literature describes the
approach that was used to build a binary classification system, that classifies
the tweets related to birthday posts into two classes namely, exact
age(positive class) and non-exact age(negative class). We made two submissions
with variations in the preprocessing of text which yielded F1 scores of 0.80
and 0.81 when evaluated by the organizers. | Keshav Kapur, Rajitha Harikrishnan | 2022-12-12T14:43:46Z | http://arxiv.org/abs/2301.05395v1 | # MaNLP@SMM4H'22: BERT for Classification of Twitter Posts
###### Abstract
The reported work is our straightforward approach for the shared task "Classification of tweets self-reporting age" organized by the "Social Media Mining for Health Applications (SMM4H)" workshop. This literature describes the approach that was used to build a binary classification system, that classifies the tweets related to birthday posts into two classes namely, exact age(positive class) and non-exact age(negative class). We made two submissions with variations in the preprocessing of text which yielded F1 scores of 0.80 and 0.81 when evaluated by the organizers.
## 1 Introduction
Determining the exact age of an individual is crucial to increasing the use of social media data for research purposes. In this contemporary world, adolescents use social media to the extent that it can have some very severe effects on their overall well-being if not monitored. Hence, a few applications like Twitter have set up some age restrictions for the well-being of an individual. They automatically detect the age of an individual trying to protect them from viewing unnecessary and harmful content.
In this work, we determine the exact age of an individual based on their tweets on Twitter. This helps in validating if a particular user has faked their age or not. One of the major challenges we faced while working with the data is that some of them could tweet about their friend's or relative's birthday which was getting misclassified. We have used BERT model which helps in the binary classification of our data. While developing our system for this task, we have discovered BERT that outperforms traditional training models.
## 2 Methodology
### Pre-Processing
Initially, the organisers provided us with 8,800 training data and 2,200 validation data. This dataset consisted of three fields: tweet id, text of the Tweet Object and annotated binary class label(exact age present/absent). The training and validation data were later combined and it was pre-processed for further development. For pre-processing, we removed URLs, emoticons, hashtags, and mentions using a python package _tweet-preprocessor_. After that we removed: contractions from the tweets, special characters and extra spaces. Then we used python package called _Natural Language Toolkit_ for removing the stop words. After these steps, we have further divided the pre-processing into two techniques: pronouns and removing pronouns.
### Model
In our model, we have used BERT (_bert-base-uncased_) from the Hugging Face library as a classifier and Softmax as the activation function. In the BERT model, there is an important special token [CLS] which is used as an input for our choice of classifier. We have used the Adam optimizer to fine-tune our BERT model. We trained the model with 4 to 10 epochs which converged after 10 epochs. Learning rate of the optimizer is given by 5e - 5.The batch size used is 32.
## 3 Evaluation
In the validation phase, our model produced satisfactory results with about 90%. In the test data, 10,000 tweets were provided by the organizers. We have first pre-processed with pronouns and then removed pronouns in the next round of pre-processing. After evaluation, our models generated an F1-score of 0.80 and 0.81.
Table 1 shows our evaluation scores for Precision,
Recall, and F1-Score as provided by the organizers. Model 1 shows scores of pre-processing with pronouns and Model 2 shows scores of pre-processing with pronouns removed.
## 4 Conclusion
We discussed our approach to fine-tuning our BERT model on Task 4 of the 2022 Social Media Mining for Health applications shared task. As we observe from the results, the given training data was inadequate to train on a BERT model. There was an imbalance in the number of positives and negatives given in our dataset(refer to Figure 1). An interesting observation drawn from this work is that BERT models rely on huge and balanced datasets for learning patterns. Future work might consider collecting more data points for training, fine-tuning our BERT model, and applying other state-of-the-art methods like RoBERTa.
|
2306.17736 | (N)NLO+NLL' accurate predictions for plain and groomed 1-jettiness in
neutral current DIS | The possibility to reanalyse data taken by the HERA experiments offers the
chance to study modern QCD jet and event-shape observables in deep-inelastic
scattering. To address this, we compute resummed and matched predictions for
the 1-jettiness distribution in neutral current DIS with and without grooming
the hadronic final state using the soft-drop technique. Our theoretical
predictions also account for non-perturbative corrections from hadronisation
through parton-to-hadron level transfer matrices extracted from dedicated Monte
Carlo simulations with Sherpa. To estimate parameter uncertainties in
particular for the beam-fragmentation modelling we derive a family of replica
tunes to data from the HERA experiments. While NNLO QCD normalisation
corrections to the NLO+NLL' prediction are numerically small, hadronisation
corrections turn out to be quite sizeable. However, soft-drop grooming
significantly reduces the impact of non-perturbative contributions. We
supplement our study with hadron-level predictions from Sherpa based on the
matching of NLO QCD matrix elements with the parton shower. Good agreement
between the predictions from the two calculational methods is observed. | Max Knobbe, Daniel Reichelt, Steffen Schumann | 2023-06-30T15:30:46Z | http://arxiv.org/abs/2306.17736v2 | # (N)Nlo + Nll+ accurate predictions for plain and groomed 1-jettiness in neutral current DIS
###### Abstract
The possibility to reanalyse data taken by the HERA experiments offers the chance to study modern QCD jet and event-shape observables in deep-inelastic scattering. To address this, we compute resummed and matched predictions for the 1-jettiness distribution in neutral current DIS with and without grooming the hadronic final state using the soft-drop technique. Our theoretical predictions also account for non-perturbative corrections from hadronisation through parton-to-hadron level transfer matrices extracted from dedicated Monte Carlo simulations with Sherpa. To estimate parameter uncertainties in particular for the beam-fragmentation modelling we derive a family of replica tunes to data from the HERA experiments. While NNLO QCD normalisation corrections to the NLO+NLL' prediction are numerically small, hadronisation corrections turn out to be quite sizeable. However, soft-drop grooming significantly reduces the impact of non-perturbative contributions. We supplement our study with hadron-level predictions from Sherpa based on the matching of NLO QCD matrix elements with the parton shower. Good agreement between the predictions from the two calculational methods is observed.
MCNET-23-07
IPPP/23/32
###### Contents
* 1 Introduction
* 2 Phase space and observable definition
* 3 DIS Monte Carlo simulations with Sherpa
* 3.1 MEPS@NLO predictions for DIS
* 3.2 Tuning the beam fragmentation model against HERA data
* 4 (N)NLO + NLL\({}^{\prime}\) resummation for 1-jettiness in DIS
* 4.1 NLL resummation in the Caesar approach
* 4.2 Grooming in DIS
* 4.3 Calculational tools and setup
* 5 Results for (groomed) 1-jettiness in DIS
* 6 Conclusions
* A Tuning details
## 1 Introduction
Event shape observables offer great potential for detailed studies of the intriguing dynamics of Quantum Chromodynamics (QCD), thereby providing insight into various strong interaction phenomena. For example, they offer sensitivity to the strong coupling constant \(\alpha_{S}\), the colour charges of the QCD quanta, and parton density functions, when considering hadronic initial state particles. Predictions for event shape distributions can be obtained from fixed-order perturbation theory, all-orders resummation of logarithmically enhanced contributions, as well as detailed particle-level simulations as provided by Monte Carlo event generators. Accordingly, they form a rather unique testbed for a variety of theoretical approaches, ranging from cutting-edge multi-loop calculations to detailed aspects in the modelling of the non-perturbative parton-to-hadron transition.
Event shapes have played a central role in the QCD measurement program of past \(e^{+}e^{-}\) collider experiments, see for instance [1, 2, 3, 4, 5]. Also at hadron-hadron machines they are considered in studies of hadronic final states. Possibly even more prominently, closely related jet-substructure observables have attracted enormous attention and sparked the development of modern grooming and tagging techniques, see Ref. [6] for a recent review. Also in deep-inelastic lepton-nucleon scattering experiments several event shape variables have been measured [7, 8, 9, 10, 11, 12]. However, the LEP and HERA experiments phased out in the years 2000 and 2007, respectively, such that later breakthroughs in calculational methods and modern observable definitions have not yet been fully exploited.
Their complementarity and partially reduced complexity when compared to present day LHC measurements, make the LEP and HERA data a real treasure for additional tests of our theoretical understanding and simulation capabilities. In the past years a small number of re-analyses of the LEP data have been published, see for instance [13, 14, 15, 16]. Furthermore, there are efforts to provide open data sets that can directly be used by the entire community [17, 18].
To open the treasure chest of their large data set for modern QCD studies the HERA H1 collaboration has recently started to publish a series of new, fascinating measurements that allow one to confront contemporary state-of-the-art predictions with precise DIS data. Besides their relevance for benchmarking
our present day tools, such analyses build an important stepping stone towards future electron-hadron colliders like the EIC at BNL [19, 20] or the LHeC at CERN [21, 22].
We here compile predictions for the 1-jettiness event shape in the Breit frame [23], that is equivalent to the well known thrust variable [24], for the HERA kinematics, _i.e._ lepton-proton collisions at \(\sqrt{s}=319\,\,\mathrm{GeV}\). Furthermore, we consider grooming of the hadronic final states based on the soft-drop method prior to the observable evaluation. We derive differential distributions for groomed and ungroomed \(\tau_{1}^{b}\) differential in the photon virtuality \(Q^{2}\in[150,20000]\,\,\mathrm{GeV}^{2}\), and the events inelasticity \(y\in[0.05,0.94]\). We perform Monte Carlo simulations with the Sherpa generator based on next-to-leading-order (NLO) matrix elements for the one- and two-jet final states matched to the parton shower and hadronised using Sherpa's new cluster fragmentation model [25]. To estimate the hadronisation modelling uncertainties in particular related to the beam remnant fragmentation we derive a set of replica tunes [26] to a selection of DIS measurements from the H1 and ZEUS experiments.
Furthermore, we compute resummed predictions at next-to-leading-logarithmic (NLL) accuracy in the observable value based on the implementation of the Caesar resummation formalism [27] in the Sherpa framework [28]. These get matched to the NNLO QCD result for the inclusive DIS process and the NLO matrix elements for the two-jet channel. For the NNLO QCD corrections we rely on an implementation in Sherpa presented in [29]. To account for non-perturbative corrections we derive parton-to-hadron level transfer matrices differential in the event shape variables that we extract from particle level simulations with Sherpa[30], thereby also accounting for the cluster-model parameter uncertainties through the set of replica tunes to HERA data.
Our calculations are targeted on an upcoming measurement by the H1 experiment, for that preliminary results have recently been presented [31, 32]. Results based on simulations with Sherpa in a similar fiducial phase space have been compared to data from jet-substructure observables in neutral current DIS in [33]. Our study extends earlier work on the simulation of DIS events with Sherpa[34]. Furthermore, this is the first time we include NNLO QCD correction in resummation calculations with Sherpa.
The article is organised as follows: in Sec. 2 we introduce the considered observables and define the fiducial phase space used in our study of the hadronic final states produced in \(ep\) collisions at HERA. In Sec. 3 we describe the setup used to simulate DIS events with Sherpa as well as the tuning of its beam-fragmentation parameters. In Sec. 4 we present our framework to compile \(\mathrm{(N)NLO}+\mathrm{NLL}^{\prime}\) predictions, based on the implementation of the Caesar formalism in Sherpa. Here, we also present our approach to treat non-perturbative corrections based on transfer matrices extracted from MC simulations, see Sec. 4.1. We present our final \(\mathrm{(N)NLO}+\mathrm{NLL}^{\prime}+\mathrm{NP}\) results in Sec. 5, alongside with MC predictions from Sherpa. We compile our conclusions and give an outlook in Sec. 6.
## 2 Phase space and observable definition
We consider deep-inelastic scattering (DIS) of leptons with momentum \(p\) of off protons with momentum \(P\) at HERA energies, _i.e._\(E_{l}=27.6\,\,\mathrm{GeV}\) and \(E_{p}=920\,\,\mathrm{GeV}\), resulting in a centre-of-mass energy of \(\sqrt{s}=319\,\,\mathrm{GeV}\). Denoting the outgoing lepton momentum as \(p^{\prime}\), we define the momentum difference, at LO carried by the virtual photon, as
\[q=p-p^{\prime}\equiv(0,0,0,-Q)\, \tag{1}\]
where the last equivalence defines the Breit frame, which we will assume whenever frame-specific formulae are given. We also introduce the usual Bjorken variable \(x_{B}\) and inelasticity \(y\)
\[x_{B} =\frac{Q^{2}}{2p\cdot P}\,, \tag{2}\] \[y =\frac{P\cdot q}{P\cdot p}. \tag{3}\]
We consider events with \(150<Q^{2}/\mathrm{GeV}^{2}<2\cdot 10^{4}\) and \(0.05<y<0.94\). No other cuts are applied, but we have studied 1-jettiness in smaller bins of \(Q^{2}\) and \(y\), and will only discuss a selection of results here*.
We take into account all final state particles apart from the outgoing lepton for the calculation of event-shape variables. We study a well known observable, referred to as thrust \(\tau_{Q}\)[24] or alternatively 1-jettiness \(\tau_{1}^{b}\)[23]. Several equivalent definitions exist in the literature. For concreteness we define it by dividing the event into a current hemisphere \(\mathcal{H}_{C}\) and a beam hemisphere \(\mathcal{H}_{B}\). Working in the Breit frame, we can introduce two reference vectors
\[n_{\pm}=(1,0,0,\pm 1) \tag{4}\]
and denote the hemispheres according to the final state particles momentum fractions along those,
\[\mathcal{H}_{C}=\{p_{i}:p_{i}\cdot n_{+}>p_{i}\cdot n_{-}\}\quad\text{and} \quad\mathcal{H}_{B}=\{p_{i}:p_{i}\cdot n_{+}<p_{i}\cdot n_{-}\}. \tag{5}\]
We can now define thrust as the sum of the longitudinal momentum components of all particles in the current hemisphere. As we prefer to work with an observable that vanishes in the soft limit, we ultimately use
\[\tau=1-\frac{2}{Q}\sum_{p_{i}\in\mathcal{H}_{C}}p_{i}^{z}\,. \tag{6}\]
Despite this definition only summing over one of the hemispheres, thrust, _i.e._ 1-jettiness, is actually sensitive to emissions anywhere in the event, and indeed is a global event shape in the sense of _e.g._[27]. Note this statement depends on the precise definition, including the normalisation factor here given by \(Q/2\), that differs in the thrust variant we use for tuning in the following.
In addition we study 1-jettiness calculated based on events that have been groomed of soft wide-angle radiation. Soft-drop grooming was first introduced in [35] as a jet substructure technique, including as a special case the modified Mass Drop Tagger [36, 37]. It has since been generalised and applied also to jets at lepton colliders [38, 18] and event shapes at both lepton [38, 39] and hadron [40] colliders. A version applicable to DIS was proposed in [41], based on the Centauro jet algorithm [42], that accounts for the forward-backward asymmetry when considering the Breit frame. This sequential cluster algorithm is based on the distance measure between particles with momenta \(p_{i},p_{j}\)
\[d_{ij} =(\Delta\bar{z}_{ij})^{2}+2\bar{z}_{i}\bar{z}_{j}(1-\cos\Delta \phi_{ij})\,, \tag{7}\] \[\text{with}\;\;\bar{z}_{i} =2\sqrt{1+\frac{q\cdot p_{i}}{x_{B}P\cdot p_{i}}}\quad\text{and} \quad\;\Delta\bar{z}_{ij}=\bar{z}_{i}-\bar{z}_{j}\,. \tag{8}\]
Note that [42] discusses more general functional forms of the distance measure, while we concentrate here on the definition given in [41]. As in all other soft-drop grooming methods the objects of interest, in this case the full event, are first clustered according to this sequential algorithm, and then the reverse clustering history is considered. The last cluster step is undone, and the softness of the softer of the two branches is evaluated. For the DIS case, [41] suggests to use
\[z_{i}=\frac{P\cdot p_{i}}{P\cdot q} \tag{9}\]
as a measure for softness. The formal soft-drop criterion then reads
\[\frac{\min[z_{i},z_{j}]}{z_{i}+z_{j}}>z_{\text{cut}}\, \tag{10}\]
with \(z_{\text{cut}}\) the grooming parameter. If this is satisfied, _i.e._ both branches are classified as hard, the algorithm terminates. Otherwise the softer branch (with smaller \(z\)) is dropped, and the procedure is repeated with the harder branch. This iteration stops when either Eq. (10) is satisfied, or there is only one particle left in the hard branch such that no further unclustering is possible.
We finally recalculate 1-jettiness, using Eq. (6) but restricting the sum to particles in the current hemisphere that have not been dropped during grooming, thereby considering variable values for \(z_{\text{cut}}\).
DIS Monte Carlo simulations with Sherpa
We derive hadron-level predictions for the DIS event shapes using a pre-release version of Sherpa-3.0[43], that will supersede the current Sherpa-2.2 series [44]. This major release features extended physics-modelling capabilities, including, for example, the automated evaluation of electroweak (EW) corrections at the one-loop order [45, 46, 47] or in the Sudakov approximation [48, 49], a complete reimplementation of the cluster hadronisation model [25], as well as an improved user interface based on Yaml[50]. To analyse our simulated event samples we employ the Rivet analysis package [51]. For jet clustering we use the Centauro plugin [42] within the FastJet framework [52].
### Meps@nlo predictions for DIS
The basics of simulating DIS processes by merging parton-shower evolved higher-multiplicity tree-level matrix elements within the Sherpa framework have been presented in [34]. We here lift this to next-to-leading order (NLO) accurate QCD matrix elements. To this end, we consider the massless single and dijet production channels in neutral current DIS at NLO, and three- and four-jets at leading order (LO), _i.e._
\[e^{-}p\to e^{-}+1,2\,j\,@\,\mathrm{NLO}+3,4\,j\,@\,\mathrm{LO}, \tag{11}\]
where we consider \(u,d,s\) quarks to be massless and add additional LO processes for the remaining massive quarks. The massless and massive channels get matched to the Sherpa Catani-Seymour dipole shower [53] and merged according to the MEPS@NLO[54] and MEPS@LO[55] truncated shower formalism, respectively. The contributing one-loop amplitudes are obtained from OpenLoops[56], that employs the Collier library [57] for the evaluation of tensor and scalar integrals. All tree-level matrix elements are provided by Comix[58], and PDFs are obtained from LHAPDF [59].
To determine the perturbative scales entering the calculation, the final states of the multi-parton final states get clustered to a two-to-two core process [55]. For the reconstructed core the factorisation, renormalisation, and parton shower starting scale are set to
\[\mu_{\mathrm{F}}=\mu_{\mathrm{R}}=\mu_{\mathrm{Q}}:=\mu_{\mathrm{ core}}\,. \tag{12}\]
For jet-associated DIS three configurations need to be distinguished [34]:
1. virtual photon exchange, _i.e._\(ej\to ej\), where \(\mu_{\mathrm{core}}^{2}=Q^{2}\),
2. interaction of the virtual photon with a QCD parton, _i.e._\(\gamma^{*}j\to j_{1}j_{2}\), with \(\mu_{\mathrm{core}}^{2}=m_{\perp,1}m_{\perp,2}\) defined as the product of the two jet transverse masses \(m_{\perp,i}=\sqrt{m_{i}^{2}+p_{\perp,i}^{2}}\) relative to the beam axis,
3. and pure QCD channels, _i.e._\(jj\to jj\), where \(\mu_{\mathrm{core}}^{2}=-\frac{1}{\sqrt{2}}\left(s^{-1}+t^{-1}+u^{-1}\right)^{-1}\) is a scaled harmonic mean of the Mandelstam variables \(s,t,u\).
Beyond the core process, the arguments of the strong-coupling factors are determined by the clustering algorithm [55]. The merging-scale parameter, separating the different jet-multiplicity contributions, is dynamically set to
\[Q_{\mathrm{cut}}=\frac{\bar{Q}_{\mathrm{cut}}}{\sqrt{1+Q_{\mathrm{cut}}^{2}/Q ^{2}}}\,,\quad\mathrm{using}\quad\bar{Q}_{\mathrm{cut}}=5\,\mathrm{GeV}\,. \tag{13}\]
As parton density functions we use the NNLO PDF4LHC21_40_pdfas set [60] with \(\alpha_{S}(M_{Z}^{2})\)=0.118.
To estimate perturbative uncertainties, we consider 7-point variations of the factorisation (\(\mu_{F}\)) and renormalisation (\(\mu_{R}\)) scales in the matrix element and the parton shower that get evaluated on-the-fly [61], _i.e._
\[\{(\tfrac{1}{2}\mu_{\mathrm{R}},\tfrac{1}{2}\mu_{\mathrm{F}}),\,(\tfrac{1}{2} \mu_{\mathrm{R}},\mu_{\mathrm{F}}),(\mu_{\mathrm{R}},\tfrac{1}{2}\mu_{\mathrm{ F}}),\,(\mu_{\mathrm{R}},\mu_{\mathrm{F}}),\,(\mu_{\mathrm{R}},2\mu_{\mathrm{F}}),(2\mu_{ \mathrm{R}},\mu_{\mathrm{F}}),\,(2\mu_{\mathrm{R}},2\mu_{\mathrm{F}})\}\,. \tag{14}\]
The resummation scale \(\mu_{Q}\) we keep fixed.
### Tuning the beam fragmentation model against HERA data
Ref. [25] presented a new cluster fragmentation model for Sherpa that will be used in Sherpa-3, super-seding the old cluster model described in [62], that was used in the Sherpa-1.X[63] and Sherpa-2.X[44] released. A particular feature of the new implementation is a specific treatment of the fragmentation of hadronic clusters that contain beam remnant particles. To calibrate the corresponding model parameters we performed dedicated tunes using HERA data for hadronic final state observables in neutral current DIS.
Broadly speaking, a cluster hadronisation simulation features two basic components, a cluster-formation and a cluster-decay model [64, 65]. Based on the pre-confinement property of QCD [66], finite mass colour neutral mesonic and baryonic clusters can be formed from the final state of a parton shower evolution of a hard scattering event. These primary clusters are then subject to an iterative fission process that ultimately results in the transition to known hadronic resonances, whose decays can be treated by a dedicated package. Both elements of the hadronisation model introduce sets of parameters that need to be carefully adjusted by comparing model predictions and measurements for suitable observables, a process commonly known as tuning.
In Ref. [26] the free model parameters were calibrated against hadronic observables measured in electron-positron annihilation experiments. However, in leptonic collisions the beam fragmentation modelling is not probed and the corresponding parameters remained unconstrained. This affects in particular the parametrisation of the decay of clusters that contain a remnant particle of an incident hadron, _e.g._ a (anti-)quark and (anti-)diquark from the break-up of the incoming proton in DIS. We consider the two-body decay of a beam cluster with flavours \(f_{1}\) and \(\bar{f}_{2}\), where a (di)quark-flavour pair \(f\bar{f}\) is drawn from the vacuum, resulting in
\[\mathcal{C}[f_{1}\bar{f}_{2}]\to\mathcal{C}_{1}[f_{1}\bar{f}]\,\mathcal{C}_{2 }[f\bar{f}_{2}]\,. \tag{15}\]
To fix the kinematics of the two-body decay in the rest frame of \(\mathcal{C}\), the absolute value of the transverse momentum of the decay products \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) is selected according to a Gaussian distribution \(\mathcal{N}(0,k_{T,0}^{2}/2)\) that is truncated at the parton-shower cut-off \(p_{T,\text{min}}\), _i.e._
\[\mathcal{P}(k_{T})\propto\exp\left(-k_{T}^{2}/k_{T,0}^{2}\right)\Theta(p_{T, \text{min}}^{2}-k_{T}^{2})\,. \tag{16}\]
The parameter \(k_{T,0}\) is thereby considered as independent of the incident cluster type. The direction of the two-component \(\vec{k}_{T}\) is picked uniformly in the transverse plane, with \(f_{1}\) and \(\bar{f}_{2}\) pointing along the positive and negative \(z\)-axis, respectively. This leaves one to fix the longitudinal momentum fractions \(z^{(1),(2)}\) with respect to the light-like vectors \(n_{\pm}^{\mu}=(1,0,0,\pm 1)\). For the case of a beam-remnant cluster, still working in its rest frame, these are distributed according to
\[\mathcal{P}(z)\propto z^{\alpha_{B}}(1-z)^{\beta_{B}}\cdot\exp\left\{-\gamma_ {B}\frac{1}{z}\left(\frac{k_{T}^{2}+(m_{f_{1}}+m_{\bar{f}_{2}})^{2}}{k_{T,0}^ {2}}\right)\right\}\,. \tag{17}\]
Note the similarity to the symmetric Lund string fragmentation function [67].
This results in the four-momenta of the decay products being given by
\[p_{\mathcal{C}_{1}}^{\mu} = \frac{m_{\mathcal{C}}}{2}\left(z^{(1)}n_{+}^{\mu}+(1-z^{(2)})n_{- }^{\mu}\right)+k_{T}^{\mu}\,, \tag{18}\] \[p_{\mathcal{C}_{2}}^{\mu} = \frac{m_{\mathcal{C}}}{2}\left((1-z^{(1)})n_{+}^{\mu}+z^{(2)}n_{- }^{\mu}\right)-k_{T}^{\mu}\,. \tag{19}\]
According to Eq. (17) the relevant free parameters specifically steering the decays of beam clusters are \(\alpha_{B}\), \(\beta_{B}\), and \(\gamma_{B}\). To calibrate those we performed dedicated tunes based on a variety of hadronic observables measured by the HERA experiments H1 and ZEUS. The remaining hadronisation parameters are set according to the LEP data tune described in Ref. [26].
We employ the Apprntice tuning tool [68], with reference data for DIS analyses at centre of mass energies of \(\sqrt{s}=300\,\mathrm{GeV}\), _i.e._ lepton energies of \(27.5\,\mathrm{GeV}\) and proton energies of \(820\,\mathrm{GeV}\). The tuning
requires an initial set of Monte Carlo runs, that are then used to generate a polynomial, bin-wise approximation of the Monte Carlo response with respect to changes in the hadronisation-model parameters. The predictions for the grid points are generated using the calculational setup described in Sec. 3.1.
The selection of observables considered for the tuning includes classic variables sensitive to hadronisation. In particular, we use event-shape distributions like thrust and jet broadening [9], energy flows and charged particle spectra [69, 70] and multiplicities [71, 72], as well as quark fragmentation functions [73, 74]. Further details on the used analyses and observables are provided in App. A.
Given we consider model parameters newly introduced that have not been tuned before, we have little prior knowledge about their preferred values and thus need to start out with rather wide parameter ranges. To narrow these down, we make an initial pass to get a rough idea of the relevant regions. The corresponding ranges are outlined in Tab. 1. For a second run we restrict the tuning ranges using the results of the exploration run, resulting in an iterative procedure to further narrow down the considered parameter intervals. The initial run, with largely unconstrained parameter values also serves the purpose of filtering out the most sensitive observables from the considered analyses. Observables or observable regions that remain unchanged under the variation of the tuning parameters are not suited for the following tunes and therefore dropped.
Similar to the procedure described in Ref. [26], we generate a set of equivalent tunes that only differ by the Monte Carlo runs used to construct the polynomial approximations as described above. The tunes are thus fully equivalent and can be used to estimate the non-perturbative model-parameter uncertainties as illustrated in Fig. 1 for a selection of data from the HERA experiments. We call these alternative parameter sets replica tunes. To reflect the uncertainty associated with the three beam-fragmentation parameters we here consider seven such replicas, _cf._ Tab. 1 for the resulting uncertainty variations.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline parameter & parameter tag & tuning range & central tune & uncertainty variation \\ \hline \(\alpha_{B}\) & ALPHA\_B & [-1, 20] & 14.2 & [13.9, 14.8] \\ \(\beta_{B}\) & BETA\_B & [0.5, 4] & 1.59 & [1.14, 1.60] \\ \(\gamma_{B}\) & GAMMA\_B & [1, 20] & 8.11 & [8.06, 9.47] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ahadic++ model parameters considered in the tuning. Quoted are the initial parameter interval, the obtained central-tune value, and uncertainty ranges extracted from 7 replica tunes.
Figure 1: Sherpa predictions for the hadronisation tune, for observables measured by the H1 and ZEUS experiments at \(\sqrt{s}=296\,\mathrm{GeV}\). Shown is the transverse energy flow (left) [69], thrust \(\tau^{\prime}\) (center) [9] and the charged particle multiplicity \(n_{\mathrm{ch}}\) (right) [71]. Note, the statistical uncertainties of the simulated data is small compared to the non-perturbative tuning uncertainties indicated by the blue band.
## 4 (N)Nlo + Nll\({}^{\prime}\) resummation for 1-jettiness in DIS
The 1-jettiness observable considered here is equivalent to thrust in DIS, which has originally been resummed at NLL accuracy in [24, 75]. The more general \(n\)-jettiness was suggested in [76], and has been resummed to NNLL accuracy [77]. For 1-jettiness, analytic fixed order results at LO have been presented in [78], and the NLL calculation has been matched to fixed order at NLO accuracy in [79]. The resummed calculations in this formalism for event shapes in DIS were extended to N\({}^{3}\)LL in [80]. Grooming for DIS has first been suggested in [41] based on jets defined with the Centauro jet algorithm [42]. The same Ref. [41] also provided NNLL results for both 1-jettiness and jet mass after soft drop grooming. Non-perturbative corrections have there been modelled through a two-parameter shape function [81, 82]. To our knowledge there are no published results studying these observables including matching to fixed order or using a fixed order calculation alone.
### Nll resummation in the Caesar approach
To perform the NLL resummation of logarithms \(L\) of event shapes in DIS we use the implementation of the Caesar formalism [27] available in the Sherpa framework [28, 83]. For a recursive infrared and collinear (rIRC) safe observable, the cumulative cross section for observable values up to \(v=\exp(-L)\) can be expressed to all orders, in general as a sum over partonic channels \(\delta\), as follows:
\[\begin{split}\Sigma_{\rm res}(v)&=\sum_{\delta} \Sigma_{\rm res}^{\delta}(v)\,,\ \text{with}\\ \Sigma_{\rm res}^{\delta}(v)&=\int d\mathcal{B}_{ \delta}\frac{d\sigma_{\delta}}{d\mathcal{B}_{\delta}}\exp\left[-\sum_{l\in \delta}R_{l}^{\mathcal{B}_{\delta}}(L)\right]\mathcal{P}^{\mathcal{B}_{ \delta}}(L)\mathcal{S}^{\mathcal{B}_{\delta}}(L)\mathcal{F}^{\mathcal{B}_{ \delta}}(L)\mathcal{H}^{\delta}(\mathcal{B}_{\delta})\,,\end{split} \tag{20}\]
where \(\frac{d\sigma_{\delta}}{d\mathcal{B}_{\delta}}\) is the fully differential Born cross section for channel \(\delta\) and \(\mathcal{H}\) implements the kinematic cuts applied to the Born phase space \(\mathcal{B}\). For a 2-jet observable like thrust in DIS, there is only one relevant partonic Born channel, corresponding to an incoming and an outgoing quark. This also implies that the soft function \(\mathcal{S}\), which implements colour evolution, is trivial in our case. Further, since we are dealing with an additive observable, the multiple emission function \(\mathcal{F}\) is simply given by \(\mathcal{F}(L)=e^{-\gamma_{E}R^{\prime}}/\Gamma(1+R^{\prime})\), with \(R^{\prime}(L)=\partial R/\partial L\) and \(R(L)=\sum_{l\in\delta}R_{l}(L)\). The collinear radiators \(R_{l}\) for the hard legs \(l\) were computed in [27] for a general observable \(V\) scaling for the emission of a soft-gluon of relative transverse momentum \(k_{t}^{(l)}\) and relative rapidity \(\eta^{(l)}\) with respect to leg \(l\) as
\[V(k)=\left(\frac{k_{t}^{(l)}}{\mu_{Q}}\right)^{a}e^{-b\eta^{(l)}}d_{l}\left( \mu_{Q}\right)g_{l}\left(\phi\right)\,. \tag{21}\]
For the case of 1-jettiness we are focusing on in this publication, we have \(a=b_{l}=1\), and fixing \(\mu_{Q}^{2}=Q^{2}\) also \(d_{l}g_{l}=1\) since there is no dependence on the azimuthal angle \(\phi\). The precise form of the logarithm can be varied according to
\[L\to\ln\left[\frac{x_{L}}{v}-x_{L}+1\right]\to\ln\frac{x_{L}}{v}\quad\text{ as}\quad v\to 0\,, \tag{22}\]
to estimated the impact of sub-leading logarithms, while leaving the distribution at the kinematic endpoint \(v\sim 1\) unchanged. Note this implies an additional contribution to \(R_{l}(L)\) to restore NLL accuracy.
The PDF factor \(\mathcal{P}\), in our study applicable only to the hadronic beam, is here given by
\[\mathcal{P}=\frac{f_{q}(x,e^{-2L/(a+b)}\mu_{F}^{2})}{f_{q}(x,\mu_{F}^{2})}\,, \tag{23}\]
corrects for the true initial-state collinear scale. We thereby account for the full DGLAP evolution by calculating a simple ratio. For the purpose of matching to a fixed order calculation, we also need the expansion of the ratio to a given order in \(\alpha_{\rm s}\). We generally follow the approach of [27] to implement
the expansion of a leading order approximation. This of course introduces additional effects beyond our considered logarithmic accuracy. We argue it is safe to ignore those, given the generally small numerical size of these contributions as seen for example in [28]. We here for the first time apply the Caesar implementation in Sherpa to an observable that is sensitive to the PDF ratio (note this only applies to the ungroomed version of thrust) and at the same time match to the (N)NLO calculation. We hence need to take care of the expansion to one order higher. Following [27], the numerator of Eq. (23) can to NLL accuracy be written and expanded in powers of \(\alpha_{\rm s}\) as
\[{\bf f}(x,e^{-2L/(a+b)}\mu_{F}^{2}) =\exp\left[-T\left(\frac{L}{a+b}\right){\bf P}\otimes\right]{\bf f }(x,\mu_{F}^{2})\] \[\sim 1-\left(T^{(1)}\left(\frac{L}{a+b}\right)+T^{(2)}\left(\frac{L} {a+b}\right)\right){\bf P}\otimes{\bf f}(x,\mu_{F}^{2})\] \[\quad+\frac{1}{2}\left(T^{(1)}\left(\frac{L}{a+b}\right)\right)^ {2}{\bf P}\otimes{\bf P}\otimes{\bf f}(x,\mu_{F}^{2})+{\cal O}\left(\alpha_{ \rm s}^{3}\right), \tag{24}\]
where \(T^{(i)}\) denotes the \(i\)th term obtained by expanding the integrated strong coupling
\[T(L)=-\frac{1}{\pi\beta_{0}}\ln(1-2\alpha_{\rm s}\beta_{0}L) \tag{25}\]
in powers of \(\alpha_{\rm s}\). The bold-faced symbols represent matrices (of splitting functions, \({\bf P}\)) and vectors (\({\bf f}=(f_{u},f_{d},f_{s},\dots)\)) in flavour space, and the convolution is given by
\[{\bf P}\otimes{\bf f}(x,\mu_{F}^{2})=\int_{x}^{1}\frac{dz}{z}{\bf P}\left( \frac{x}{z}\right){\bf f}(z,\mu_{F}^{2})\,. \tag{26}\]
New terms at \({\cal O}(\alpha_{\rm s}^{2})\) hence originate from the higher order expansion of \(T\), mixed terms with other parts of the resummation multiplying the leading order expansion, and the convolution of two splitting functions with the PDF in the last line of Eq. (24). The last one is the only one that requires a non-trivial implementation. We use the expressions from [84] for convoluted splitting functions, and solve the final integral for the convolution with the PDF through Monte Carlo integration, as done at leading order.
We match our resummed calculation in the multiplicative matching scheme along the lines of [83], which we briefly recap here. The matching to fixed order is done at the level of cumulative distributions \(\Sigma(v)\). Note that we have dropped the label for the partonic channel since in our case there is a single one only. We expand the inclusive cross section \(\sigma_{\rm fo}\) as well as the fixed-order and resummed cumulative distributions, \(\Sigma_{\rm fo}\) and \(\Sigma_{\rm res}\) in series of \(\alpha_{\rm s}\):
\[\sigma_{\rm fo} =\sigma^{(0)}+\sigma_{\rm fo}^{(1)}+\sigma_{\rm fo}^{(2)}+\dots\,, \tag{27}\] \[\Sigma_{\rm fo}(v) =\sigma^{(0)}+\Sigma_{\rm fo}^{(1)}(v)+\Sigma_{\rm fo}^{(2)}(v)+ \dots\,,\] (28) \[\Sigma_{\rm res}(v) =\sigma^{(0)}+\Sigma_{\rm res}^{(1)}(v)+\Sigma_{\rm res}^{(2)}(v)+ \dots\,, \tag{29}\]
where the number in parentheses indicates the respective order in \(\alpha_{\rm s}\), and \(\sigma^{(0)}\) denotes the Born-level cross section. Our final matched expression for the cumulative distribution, with the dependencies on the observable value suppressed, reads:
\[\Sigma_{\rm matched}=\Sigma_{\rm res}\left(1+\frac{\Sigma_{\rm fo}^{(1)}- \Sigma_{\rm res}^{(1)}}{\sigma^{(0)}}+\frac{\Sigma_{\rm fo}^{(2)}-\Sigma_{\rm res }^{(2)}}{\sigma^{(0)}}-\frac{\Sigma_{\rm res}^{(1)}}{\sigma^{(0)}}\frac{\Sigma _{\rm fo}^{(1)}-\Sigma_{\rm res}^{(1)}}{\sigma^{(0)}}\right)\,. \tag{30}\]
Note that, compared to our earlier works, we use \(\Sigma^{(2)}\) directly, thus reproducing the inclusive cross section to one order higher, what requires the calculation of \(\sigma_{\rm fo}^{(2)}\). Importantly, the resummed NLL result \(\Sigma_{\rm res}\) is multiplied by
\[\frac{\Sigma_{\rm fo}^{(1)}-\Sigma_{\rm res}^{(1)}}{\sigma^{(0)}}\to\frac{ \alpha_{\rm s}}{2\pi}C_{1}\quad\mbox{as}\quad v\to 0\,, \tag{31}\]
ensuring that the matched calculation is accurate to NLL\({}^{\prime}\).
In addition to the perturbative contribution described above, there is a significant non-perturbative component to the distribution of event shapes, that we necessarily need to take into account in order to accurately describe actual collider data. While it has been shown in various circumstances that soft-drop grooming reduces the impact of hadronisation corrections, see for example [81, 38, 39, 40, 85, 30], it is typically still necessary to account for a remaining small non-perturbative contribution. We here adopt the approach of [30] to extract transfer matrices from Monte Carlo simulations. Transfer matrices are defined as
\[\mathcal{T}_{hp}=\frac{\int dP\,\frac{d\sigma}{dP}\Theta_{p}\left(P\right) \Theta_{h}\left(H(P)\right)}{\int dP\,\frac{d\sigma}{dP}\Theta_{p}\left(P \right)}\,, \tag{32}\]
with
\[\Theta_{p}\left(P\right) =\prod_{i=1}^{m}\theta(V_{i}(P)-v_{p,i}^{\min})\theta(v_{p,i}^{ \max}-V_{i}(P))\,, \tag{33}\] \[\Theta_{h}\left(H(P)\right) =\prod_{i=1}^{m}\theta\left(V_{i}\left(H(P)\right)-v_{h,i}^{\min }\right)\theta\left(v_{h,i}^{\max}-V_{i}\left(H(P)\right)\right)\,, \tag{34}\]
for a transition between the parton level phase space \(P\) and the corresponding hadron level configuration \(H(P)\), characterised by a set of observables \(V_{i}\) that can be calculated on both of them. For our purpose, we assume that the requirements on the DIS kinematics, _cf._ Sec. 2, sufficiently fix the remaining degrees of freedom other than 1-jettiness \(\tau\). Hence, we are only concerned with events migrating between different bins in \(\tau\) within a given \(Q^{2}\), \(y\) bin. The transfer matrices as defined above can readily be extracted from the Sherpa event generator by analysing the different stages of the events evolution, _i.e._ after parton showering but before hadronisation and thereafter. For practical details of our event generation setup see Sec. 3. Our final results are then calculated from the resummed and matched parton level bins \(\Delta\sigma_{p}^{\rm PL}\) as
\[\Delta\sigma_{h}^{\rm HL}=\sum_{p}\mathcal{T}_{hp}\,\Delta\sigma_{p}^{\rm PL}. \tag{35}\]
### Grooming in DIS
The framework described above has already been employed to obtain resummed predictions for soft-drop thrust in lepton-lepton collisions at \(\mathrm{NLO+NLL}^{\prime}\) precision [39], for soft-drop groomed hadronic event shapes [40] and groomed jet substructure observables at the LHC [85, 86, 30]. The extensions made in [40] to accommodate the phase space constraints implied by soft-drop grooming, with general parameters \(z_{\rm cut}\) and \(\beta\), are directly applicable here. Note that [41] does not define a \(\beta\neq 0\) version of grooming in DIS, and we make no attempt here to extend it.
The applicability of the results from [40] to DIS event shapes relies on two statements. First, within the current hemisphere the phase space constraints to radiation in the soft and collinear limits correspond to the case of final state radiation in general hadronic collisions. Second, in the beam hemisphere any soft and collinear radiation is groomed away. Accordingly, we can treat radiation in \(\mathcal{H}_{B}\) equivalent to the initial state radiation case in [40], even if the precise shape of the phase space boundary is different, but such difference does not enter at NLL accuracy. We analyse the behaviour of the Centauro algorithm and the associated soft-drop grooming variant in the language of the Caesar framework in the following to illustrate this. Recall that we are working in the Breit frame. At NLL accuracy, we have to take into account ensembles of soft particles, well separated in rapidity, around a Born configuration consisting of the proton momentum
\[P^{\mu}=\frac{Q}{2x_{B}}n_{+}^{\mu} \tag{36}\]
and the outgoing struck quark in \(n_{-}\) direction. The virtual photon carries momentum
\[q=\frac{Q}{2}(n_{-}-n_{+}). \tag{37}\]
We parameterise the momenta of additional soft gluons as
\[k_{i}^{\mu}=k_{t}^{i}\left(\frac{e^{\eta_{i}}}{2}n_{-}^{\mu}+\frac{e^{-\eta_{i}}}{ 2}n_{+}^{\mu}+n_{\perp}^{\mu}\right)\,, \tag{38}\]
where \(n_{\perp}\) is a transverse unit vector perpendicular to \(n_{+}\) and \(n_{-}\). The variable introduced in the Centauro algorithm, _cf._ Eq. (8), can be written using the phase space variables \(\eta_{i}\), \(k_{t}^{i}\) as
\[\bar{z}_{i}=2e^{-\eta_{i}}\, \tag{39}\]
such that the expression for the distance measure, _cf._ Eq. (7), becomes
\[d_{ij}=4\left(e^{-2\eta_{i}}+e^{-2\eta_{j}}+2e^{-(\eta_{i}+\eta_{j})}\cos\Delta \phi_{ij}\right)\sim 4e^{-2\eta_{i}}\,, \tag{40}\]
where we have identified the behaviour for strong ordering in \(\eta\), \(\eta_{i}\ll\eta_{j}\). In this limit, the algorithm builds up a single jet containing the hard quark by adding the next remaining gluon that is most collinear to this jet. The last clustering will add the gluon most collinear to the beam direction to the jet. If all gluons are separated in rapidity well enough, there are no other clusters to be taken care of.
From this discussion it is clear that all comparisons of scales during soft drop will be between a soft gluon and a jet containing the hard quark. At Born level, the four-momentum of the jet will be approximately that of the quark, and the gluon will be the softer of the two. With this in mind the hardness measure for soft drop for soft momentum \(k_{i}\) can be written as
\[z_{i}\sim\frac{k_{t}^{i}}{Q}e^{\eta_{i}}. \tag{41}\]
Within the current hemisphere, the phase space restriction, on an emission that passes the soft-drop criterion, is given by
\[\frac{k_{t}e^{\eta}}{Q}>z_{\rm cut}\, \tag{42}\]
which precisely matches the one given in [40] for \(\beta=0\) (see Sec. 3.4 point (iv), and note that the hard quark has energy \(Q/2\) in the Breit frame).
Note that particles outside of the current hemisphere will enter in Eq. (42) with negative rapidity \(\eta\). They will hence be groomed away unless they are at very high \(k_{t}\), only causing logarithms of \(z_{\rm cut}\). We note again that the precise shape of the phase space boundary is different from what is given in [40] for initial states. The main point is however that only logarithms of \(z_{\rm cut}\) are produced, which we ignore noting again that we work in the limit \(v\ll z_{\rm cut}\).
### Calculational tools and setup
As already stated, the resummation calculation for 1-jettiness is accomplished with the Caesar plugin to Sherpa that hooks into the event generation framework++. Sherpa thereby provides all the process management, and gives access to the Comix matrix element generator [58], as well as phase-space integration and event-analysis functionalities. We make use of Sherpa's interface to LhaPdf[59] and use the PDF4LHC_40_pdfas PDF set, as we do for the parton-shower simulations outlined in the previous section. The value of the strong coupling is set accordingly, _i.e._\(\alpha_{S}(M_{Z}^{2})=0.118\). The Sherpa framework is also used to compile all the required higher-order tree-level and one-loop calculations. For the NLO QCD computations we use the Sherpa implementation of the Catani-Seymour dipole subtraction [87] and the interfaces to the Recola[88, 89] and OpenLoops[90] one-loop amplitude generators. The calculation of NNLO accurate predictions for DIS has been automated in Sherpa in [29], and we use it to compute cross sections \(\sigma_{\rm fo}^{(2)}\) at order \(\alpha_{\rm s}^{2}\) differential in \(Q^{2}\) and \(y\) to achieve overall NNLO accuracy for inclusive cross sections. This corresponds to an accuracy of the distribution differential in thrust at
NLO, and we refer to the combined accuracy of our fixed order predictions including cross sections as (N)NLO. The plugin implements the building blocks of the Caesar master formula Eq. (20), along with the necessary expansion in \(\alpha_{s}\) used in the matching with fixed-order calculations. The building blocks are evaluated fully differentially for each Born-level configuration \(\mathcal{B}_{\delta}\) of a given momentum configuration. Jet clustering and grooming functionalities are accessed through the interface of Sherpa to FastJet[52]. Non-perturbative corrections are extracted from dedicated runs of the Sherpa generator using the identical setup described in Sec. 3, thereby employing the functionality of the Rivet analysis tool to provide access to intermediate evolution stages through the HepMC event record [91].
## 5 Results for (groomed) 1-jettiness in DIS
Having outlined our calculational techniques for describing hadronic final state observables in neutral current DIS, we can finally present our numerical results for the 1-jettiness event shape. We begin by discussing selected results for the ungroomed case. We have compiled predictions for a wide range of \(Q^{2}\) values, _i.e._\(Q^{2}\in[150,20000]\,\,\mathrm{GeV}^{2}\). Furthermore, we consider the production cross section differential in the events inelasticity, thereby covering the region \(y\in[0.05,0.94]\). For brevity, we here focus on three kinematic regions corresponding to medium values of \(y\in[0.4,0.7]\) and rather low (\(Q^{2}\in[150,200]\,\mathrm{GeV}^{2}\)), medium (\(Q^{2}\in[440,700]\,\mathrm{GeV}^{2}\)) and high (\(Q^{2}\in[3500,8000]\,\mathrm{GeV}^{2}\)) photon virtuality.
Along with the central predictions we show error bands indicating the perturbative uncertainty obtained from 7-point variations of \(\mu_{R},\mu_{F}\), in both the shower and the semi-analytic calculation, and in addition a variation of \(x_{L}=0.5,2\) in the latter, _cf._ Eq. (22). Furthermore, we include an uncertainty estimate related to the tuning of beam-fragmentation parameters based on replica tunes, see Sec. 3.2. Generally, this contribution is found to be rather small compared to the perturbative uncertainties. We observe the overall uncertainties for the NLO QCD matrix element plus parton-shower simulations and the resummation predictions to be of similar sizes.
We first analyse the behaviour of the \(\mathrm{NLO}+\mathrm{NLL}^{\prime}\) resummation calculation upon inclusion of the NNLO normalisation correction and non-perturbative effects. To this end we compile in Fig. 2 corresponding predictions for the three kinematic regions specified before. From the lower panels, showing the ratio to the respective \(\mathrm{NLO}+\mathrm{NLL}^{\prime}\) result, it can be read off, that correcting the normalisation to NNLO accuracy has a rather small impact. The differential cross section receives a small negative correction, of at most a few percent at small \(\tau\) in the lower \(Q^{2}\) region. Note, however, that even the smallest \(Q^{2}\) values in this analysis remain sizeable compared to the overall range accessible for the HERA experiments. Somewhat more significant is the reduction in the perturbative uncertainties when going from NLO to NNLO, in particular for the bulk of the distributions, _i.e._ low values of 1-jettiness.
Next, we consider the inclusion of non-perturbative corrections based on the transfer-matrix approach described in Sec. 4.1. As clearly visible in Fig. 2 these significantly alter the shape of the distributions, introducing a sizeable shift towards larger 1-jettiness values. In particular for the low and medium \(Q^{2}\) region the first bin gets almost entirely depopulated. In contrast, for values of \(\tau\approx 0.1\ldots 0.2\) corrections can reach up to \(+100\%\). The effect of hadronisation corrections is less pronounced at higher \(Q^{2}\). We furthermore note, that the non-perturbative corrections through the bin migration via transfer matrices partially compensate the dependence of the perturbative calculation on scale variations and in particular of \(\mu_{R}\).
We close this first discussion of the resummed predictions for ungroomed 1-jettiness by pointing to the distinct peak at \(\tau\approx 1\) for the low and medium \(Q^{2}\) distributions, emerging after a significant decline of the differential cross section from lower to larger observable values. For the given observable definition the configuration \(\tau=1\) can be attributed to events with an empty current hemisphere \(\mathcal{H}_{C}\)[78]. Such configurations first appear when considering the NLO real-emission correction to the DIS process, when both final state partons feature negative longitudinal momenta in the Breit frame, such that 1-jettiness defaults to 1, see Eq. (6). We here account for these configurations through matching to the exact NLO QCD result for \(\tau\), _i.e._ including the full \(\mathcal{O}(\alpha_{S})\) corrections to the two-parton channel. It can be observed, that hadronisation corrections reduce the amount of \(\tau\approx 1\) events, what can be expected, as the fragmentation of partons originally in the beam hemisphere might spill over hadrons in the current hemisphere.
We now turn to the presentation of the hadron level results from MEPS@NLO simulations with Sherpa as outlined in Sec. 3 and compare those to the \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) predictions. In Fig. 3 we compare the respective results for the three considered kinematic regions. We observe an overall fair agreement between the matrix element improved shower simulations at hadron level obtained from Sherpa and the resummed and matched calculation at \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) accuracy, corrected for non-perturbative effects. In general the merged prediction features a somewhat harder spectrum, _i.e._ favours somewhat larger observable values. This might also be attributed to the inclusion of the exact tree-level three- and four-jet matrix elements, see Eq. (11). These contributions feature LO scale dependence and are thus the source for the somewhat enlarged theoretical uncertainties in the shower simulation towards larger values of \(\tau\). However, the regions of small 1-jettiness agree within uncertainties for all three kinematic regions, up until the peak of the respective distribution. Towards the kinematic endpoint, the two approaches tend to agree again, with both calculations predicting very similar cross sections for events with \(\tau\sim 1\).
Besides the plain 1-jettiness event shape we here also consider the effect of soft-drop grooming the hadronic final state. In Fig. 4 we show resummed predictions for groomed 1-jettiness, referred to as \(\tau^{\mathrm{SD}}\) in what follows, integrated over the full \(Q^{2}\) range, _i.e._\(Q^{2}\in[150,20000]\;\mathrm{GeV}^{2}\), and the inelasticity region \(y\in[0.2,0.7]\). We compiled predictions for three commonly considered values of \(z_{\mathrm{cut}}\), namely \(z_{\mathrm{cut}}=0.05,0.1,0.2\), thereby always assuming the angular grooming parameter \(\beta=0\). As seen for the
Figure 3: Distributions of 1-jettiness in selected \(y-Q^{2}\) bins, _i.e._\(y\in[0.4,0.7]\) and, from left to right, \(Q^{2}/\mathrm{GeV}^{2}\in[150,200]\), \([440,700]\), and \([3500,8000]\), respectively. Shown are hadron level MEPS@NLO predictions from Sherpa and results at \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) accuracy. The lower panels present the ratio to the MEPS@NLO result.
Figure 2: Distributions of ungroomed 1-jettiness in selected \(y-Q^{2}\) bins, at different stages of the calculation, at \(\mathrm{NLO+NLL^{\prime}}\) accuracy, including the normalisation at NNLO (\(\mathrm{(N)NLO+NLL^{\prime}}\)) accuracy, and including non-perturbative corrections. All results correspond to DIS kinematics with \(y\in[0.4,0.7]\) and the plots represent from left to right regions of \(Q^{2}/\mathrm{GeV}^{2}\in[150,200]\), \([440,700]\), and \([3500,8000]\), respectively. The lower panels present the ratio to the \(\mathrm{plain}\;\mathrm{NLO+NLL^{\prime}}\) result.
ungroomed case, we note rather small effects of the NNLO normalisation corrections compared to the \(\mathrm{NLO+NLL^{\prime}}\) calculation. Also the systematic uncertainties hardly change from NLO to NNLO. However, the size of the non-perturbative corrections is significantly reduced relative to the ungroomed case, staying below 50% and being largely flat over a wide range of \(\tau^{\mathrm{SD}}\), apart from very low values of 1-jettiness and at the endpoint \(\tau^{\mathrm{SD}}\sim 1\). This confirms the potential of soft-drop grooming to mitigate hadronisation effects for event shape observables also in DIS, seen before in \(e^{+}e^{-}\)[38, 39] and \(pp\) collisions [40].
The comparison of the \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) results with hadron level simulations at MEPS@NLO accuracy is presented in Fig. 5. For all the \(z_{\mathrm{cut}}\) values, we observe good agreement between our Sherpa simulation and the resummation calculation somewhat better than for the ungroomed case. In all three cases, the \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) calculation predicts a larger cross section in the \(\tau\sim 1\) bin, although still compatible within the uncertainty of the event generator for \(z_{\mathrm{cut}}=0.05\) and the combined uncertainty for both calculations for \(z_{\mathrm{cut}}=0.1\). Apart from this last bin, for these two \(z_{\mathrm{cut}}\) values the resummation calculation is consistently below the Sherpa simulation. In the case of \(z_{\mathrm{cut}}=0.05\), this happens flat over the full spectrum \(\tau^{\mathrm{SD}}<1\), while for increasing \(z_{\mathrm{cut}}\) a slight shape develops, with the \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) cross section decreasing faster for \(\tau^{\mathrm{SD}}<z_{\mathrm{cut}}\) than what is seen in the Monte Carlo simulation.
It will be interesting to compare the \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) predictions and the Sherpa MEPS@NLO simulations with the data of upcoming measurements by the H1 experiment. This will shed light on the found deviations between the two sets of predictions and possibly guide the development of yet improved theoretical predictions, _e.g._ through the inclusion of next-to-next-to-leading logarithmic corrections.
Figure 4: Distributions of groomed 1-jettiness, at different stages of the calculation, at \(\mathrm{NLO+NLL^{\prime}}\) accuracy, including the normalisation at NNLO (\(\mathrm{(N)NLO+NLL^{\prime}}\)) accuracy, and including non-perturbative corrections. From left to right the plots represent predictions for the grooming parameter \(z_{\mathrm{cut}}=0.05,0.1,0.2\), respectively. The lower panels present the ratio to the plain \(\mathrm{NLO+NLL^{\prime}}\) result.
Figure 5: Distributions of groomed 1-jettiness. Shown are hadron level MEPS@NLO predictions from Sherpa and results at \(\mathrm{(N)NLO+NLL^{\prime}+NP}\) accuracy. From left to right the plots represent predictions for the grooming parameter \(z_{\mathrm{cut}}=0.05,0.1,0.2\), respectively. The lower panels present the ratio to the MEPS@NLO result.
Conclusions
We presented the calculation of theoretical predictions for the 1-jettiness event shape in neutral current DIS at HERA energies. The here considered 1-jettiness observable, evaluated in the Breit frame, is equivalent to the well-known thrust variable that has been widely studied at lepton and hadron colliders. Besides plain 1-jettiness we also considered its variant after soft-drop grooming the hadronic final state using different values of the grooming parameter \(z_{\rm cut}\). We consider the triple-differential cross section in the observable, momentum transfer \(Q^{2}\), and the events inelasticity \(y\).
Based on the Caesar formalism we derive NLL accurate results matched to the exact NLO QCD matrix element for the two-jet DIS matrix element. Furthermore, we include the exact NNLO QCD corrections to the inclusive DIS process, thereby achieving full NNLO accuracy for the integrated observable distribution. We furthermore correct our results of \(\rm(N)NLO+NLL^{\prime}\) accuracy for non-perturbative hadronisation effects through a transfer matrix that takes into account migration in the observable value when going from parton to hadron level. The corresponding corrections have been extracted from Monte Carlo simulations at MEPS@NLO accuracy with the Sherpa generator. To this end, we have performed tunes of the beam-fragmentation parameters of Sherpa's new cluster fragmentation model against data from the H1 and ZEUS experiments. We thereby also derived replica tunes that account for the parametric uncertainties.
For plain 1-jettiness we have shown results for three kinematic regions, corresponding to medium inelasticity \(y\) and ranges of rather low, medium, and high \(Q^{2}\) values. While the impact of the NNLO contributions is found to be very small, hadronisation corrections significantly sculpt the differential distributions, pushing events from lower to larger 1-jettiness values. When comparing the hadronisation corrected \(\rm(N)NLO+NLL^{\prime}\) predictions with hadron level predictions from Sherpa good agreement is found, with larger deviations dominantly in the region \(0.2<\tau<0.6\). Quite good agreement is found regarding events at the endpoint of the distribution, _i.e._\(\tau\simeq 1\). For the low and medium \(Q^{2}\) regions the distribution here develops a significant peak, that can be attributed to events with an empty current hemisphere.
For the soft-drop groomed variant of 1-jettiness we have shown predictions for three values of \(z_{\rm cut}\), integrated over a wide range of \(Q^{2}\), _i.e._\(Q^{2}\in[150,20000]~{}{\rm GeV}^{2}\), and \(y\in[0.2,0.7]\). For all values of \(z_{\rm cut}\) non-perturbative corrections to the resummed predictions get significantly reduced, when comparing to the ungroomed case. Furthermore, an improved agreement with the hadron level predictions from Sherpa is found.
It will be interesting to confront the two types of predictions with actual data from the HERA collider that are currently being analysed by the H1 experiment. We can expect that in particular for the ungroomed 1-jettiness observable data should be able to discriminate between the two predictions. This will motivate and guide the development and advancement of the theoretical predictions, for example by including higher-logarithmic corrections or improved means to account for non-perturbative corrections. There are also interesting developments in the field of parton shower algorithms for DIS, from the inclusion of NNLO QCD corrections [29] to dipole showers at formal NLL accuracy [92, 93, 94, 95]. It will be exciting to also confront these with upcoming precision measurements from the HERA experiments.
## Acknowledgements
We would like to thank Daniel Britzger and Henry Klest for triggering us to dive into DIS event shapes and a very fruitful communication. We furthermore thank Johannes Hessler and Vinicius Mikuni for discussions. We are indebted to Stefan Hoche for assistance with the NNLO corrections and we are grateful to Frank Krauss for help with Sherpa's new beam fragmentation model.
MK and SS acknowledge support from BMBF (05H21MGCAB) and funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 456104544 and 510810461. DR is supported by the STFC IPPP grant (ST/T001011/1).
## Appendix A Tuning details
We here collate more detailed information on the tuning of the Ahadic++ beam-fragmentation parameters. The Rivet analyses and considered observable measurements by the H1 and ZEUS HERA experiments used for the tuning are summarised in Tab. 2.
|
2307.00163 | A chip-scale second-harmonic source via injection-locked all-optical
poling | Second-harmonic generation allows for coherently bridging distant regions of
the optical spectrum, with applications ranging from laser technology to
self-referencing of frequency combs. However, accessing the nonlinear response
of a medium typically requires high-power bulk sources, specific nonlinear
crystals, and complex optical setups, hindering the path toward large-scale
integration. Here we address all of these issues by engineering a chip-scale
second-harmonic (SH) source based on the frequency doubling of a semiconductor
laser self-injection-locked to a silicon nitride microresonator. The
injection-locking mechanism, combined with a high-Q microresonator, results in
an ultra-narrow intrinsic linewidth at the fundamental harmonic frequency as
small as 57 Hz. Owing to the extreme resonant field enhancement,
quasi-phase-matched second-order nonlinearity is photoinduced through the
coherent photogalvanic effect and the high coherence is mapped on the generated
SH field. We show how such optical poling technique can be engineered to
provide efficient SH generation across the whole C and L telecom bands, in a
reconfigurable fashion, overcoming the need for poling electrodes. Our device
operates with milliwatt-level pumping and outputs SH power exceeding 2 mW, for
an efficiency as high as 280%/W under electrical driving. Our findings suggest
that standalone, highly-coherent, and efficient SH sources can be integrated in
current silicon nitride photonics, unlocking the potential of $\chi^{(2)}$
processes in the next generation of integrated photonic devices. | Marco Clementi, Edgars Nitiss, Elena Durán-Valdeiglesias, Sofiane Belahsene, Junqiu Liu, Tobias J. Kippenberg, Hélène Debrégeas, Camille-Sophie Brès | 2023-06-30T22:36:50Z | http://arxiv.org/abs/2307.00163v2 | # A chip-scale second-harmonic source via injection-locked all-optical poling
###### Abstract
Second-harmonic generation allows for coherently bridging distant regions of the optical spectrum, with applications ranging from laser technology to self-referencing of frequency combs. However, accessing the nonlinear response of a medium typically requires high-power bulk sources, specific nonlinear crystals, and complex optical setups, hindering the path toward large-scale integration. Here we address all of these issues by engineering a chip-scale second-harmonic (SH) source based on the frequency doubling of a semiconductor laser self-injection-locked to a silicon nitride microresonator. The injection-locking mechanism, combined with a high-Q microresonator, results in an ultra-narrow intrinsic linewidth at the fundamental harmonic frequency as small as 57 Hz. Owing to the extreme resonant field enhancement, quasi-phase-matched second-order nonlinearity is photoinduced through the coherent photogalvanic effect and the high coherence is mapped on the generated SH field. We show how such optical poling technique can be engineered to provide efficient SH generation across the whole C and L telecom bands, in a reconfigurable fashion, overcoming the need for poling electrodes. Our device operates with milliwatt-level pumping and outputs SH power exceeding 2 mW, for an efficiency as high as 280 %/W under electrical driving. Our findings suggest that standalone, highly-coherent, and efficient SH sources can be integrated in current silicon nitride photonics, unlocking the potential of \(\chi^{(2)}\) processes in the next generation of integrated photonic devices.
## I Introduction
Second-harmonic generation (SHG) [1] plays a fundamental role in the realm of nonlinear optics, as it enables linking octave-spaced regions of the spectrum while preserving the coherence of the optical field. Applications range from laser physics and technology [2; 3], to imaging [4], material science [5], and self-referencing of frequency combs [6], to name a few. Since its inception in 1961 [1], SHG has widely been applied in bulk optics, whereas the nonlinear nature of the process requires an appropriate combination of i) a high-intensity coherent source, ii) a material endowed with a second-order nonlinearity (\(\chi^{(2)}\)) and iii) carefully engineered phase-matching conditions. Such hurdles have stimulated a great research effort in the domain of integrated optics, whereas the integration of frequency doubling on-chip bears the promise of realizing novel devices in a compact, power-efficient and scalable fashion, while eliminating the need for bulky and complex experimental apparatuses.
On-chip integration has indeed proven advantageous, as it enables to enhance the interaction strength thanks to the transverse confinement in waveguides [7], and also via the use of resonant structures [8]. Moreover, the development of nano-fabrication techniques in several \(\chi^{(2)}\) materials has yielded the demonstration of highly-efficient SHG in platforms such as lithium niobate on insulator (LNOI) [9] and III-V semiconductors, such as AlN [10], GaN [8], AlGaAs [11] and GaP [12], especially in combination with quasi-phase-matching (QPM) techniques. However, despite great progress, such emerging platforms struggle to find employ in practical devices, owing to the lack of compatibility with established fabrication processes, in particular those of the silicon-based complementary metal-oxide semiconductor (CMOS) technology, widely adopted by the electronics market.
In contrast, silicon nitride (Si\({}_{3}\)N\({}_{4}\)) photonics [13; 14; 15] has emerged as a mature integrated photonics platform, thanks to its compatibility with CMOS fabrication, that favors scalability while allowing co-integration with microelectronics. Silicon nitride photonic devices benefit notably from ultra-low propagation losses, a wide transparency window, ranging from the mid-infrared to the near-UV, large bandgap and negligible Raman effect, making them an ideal choice for high-power and specifically nonlinear applications, such as Kerr microcombs [16; 17], supercontinuum generation [18], and parametric quantum light sources [19; 20]. While these advantages are usually ascribed to third-order (\(\chi^{(3)}\)) nonlinear processes, owing to the centrosymmetric nature of the amorphous material, it was recently shown that Si\({}_{3}\)N\({}_{4}\) waveguides [21; 22; 23] and resonators [24; 25] can be endowed with a photoinduced second-order nonlinearity by the coherent photogalvanic effect, whereas high conversion efficiency (CE = \(P_{\rm SH}/P_{\rm FIR}^{2}\)) of second harmonic (SH) light, exceeding 2,500%/W, can be reached particularly in resonant structures, owing to the high values of field enhancement achievable.
Furthermore, the lack of active sources in group IV semiconductors can be overcome by heterogeneous integration with III-V sources [26]. As shown by recent findings, the combination of such sources with silicon nitride chips is not only technologically accessible, but can also
be exploited to improve the coherence properties of the latter through a self-injection-locking (SIL) mechanism to a microring resonator [27; 28; 29; 30; 31; 32; 33; 34]. In these schemes, the backscattering from a high quality factor (Q) microring resonator is injected into the cavity of a semiconductor laser. Under the appropriate conditions, the ring resonator acts as an effective narrowband filter, resulting in the locking of the source to its resonance frequency and a narrowing of the laser linewidth proportional to Q\({}^{2}\). Recent experimental evidence has shown a reduction of the intrinsic laser linewidth below the hertz-level [33], while the combination of this technique with Kerr microcombs has led to the realization of heterogeneously integrated turnkey soliton sources [30; 35].
In this work, we show how SIL and photoinduced second-order nonlinearity can occur concurrently in a Si\({}_{3}\)N\({}_{4}\) microring resonator to create a standalone dual-wavelength source emitting highly-coherent light at both the fundamental and SH frequencies (Fig. 1a). The resonator's high Q factor yields a narrowing of the intrinsic laser linewidth near the hertz-level, and its high finesse results in the efficient generation of milliwatt-level light at the SH, despite the use of a non-intrinsic \(\chi^{(2)}\) material. We show how the generated SH wavelength can be tuned by simply adjusting the device operating conditions (current and temperature), and we provide a full mapping of the suitable operating points over the C and L bands, showing abundance of suitable doubly-resonant conditions. Remarkably, the generated light shares the same properties of the pump field, including its coherence. This establishes our chip-scale source as a potentially powerful tool for applications that benefit not only from the ultra-narrow linewidth, such as Rb [36] and Sr [37] based chip-scale atomic clocks and integrated quantum photonics [38; 39], but also from mutual coherence of the two beams, such as the self-referencing of optical frequency combs [6].
## Results
A prototype realization of our device is shown in Fig. 1b. The layout consists of an electrically pumped distributed feedback laser diode (DFB), edge-coupled to the Si\({}_{3}\)N\({}_{4}\) photonic chip. The DFB is realized in an InGaAsP multi-quantum well, buried waveguide geometry, and it is packaged on a wire-bonded and thermally stabilized stage. To prove the generality of our results, in this study we used two batches of DFB lasers, operating respectively in the C and L telecom bands and characterized by similar performance and output power, respectively up to 60 mW and 90 mW at room temperature. By varying the driving current and device temperature, the emission wavelength can be tuned within a range of approximately \(5\,\mathrm{nm}\). The Si\({}_{3}\)N\({}_{4}\) chip is fabricated at wafer-scale through the photonic Damascene process [40] and contains microring resonators with a radius of \(896\,\mathrm{\SIUnitSymbolMicro m}\). The waveguide cross-section is \(2\times 0.55\,\mathrm{\SIUnitSymbolMicro m}^{2}\), and adiabatic mode converters are used for input- and output-coupling. Also in this case, several chips from the same wafer were tested in order to assess repeatability, with the same nominal geometric parameters. In this demonstration, the DFB position is finely adjusted to inject light efficiently to the bus waveguide. Light is then coupled to the microring resonator through a single point coupler with a gap of 550 nm. We measure an insertion loss of approximately 5 dB at the DFB/Si\({}_{3}\)N\({}_{4}\) chip interface. The temperature of the Si\({}_{3}\)N\({}_{4}\) chip is separately controlled to finely tune the resonant frequencies through thermo-optic effect. Finally, the light at the output facet is collected either in free-space or by means of a lensed fiber.
Before operating in SIL regime, we characterized both the linear and nonlinear properties of the resonator in the telecom band (1500-1600 nm) using a table-top external cavity tunable laser source. Transmission spectroscopy (Fig. 1c) at low power reveals a set of slightly overcoupled resonances with an average free spectral range (FSR) of 25.6 GHz for the fundamental transverse electric (TE\({}_{00}\)) mode, a loaded quality factor of \(Q=2\times 10^{6}\) (Fig. 1d), and an estimated intrinsic quality factor of \(Q_{0}=7\times 10^{6}\). In the SH band (750-800 nm), both the ring and the bus waveguides display a multimode behavior, with up to 5 TE modes supported, which we hereby label from SH1 (TE\({}_{00}\)) to SH5 (TE\({}_{40}\)). The associated azimuthal resonances are undercoupled to the fundamental mode of the bus waveguide, and therefore not accessible through standard transmission spectroscopy. However, their FSR can be accurately estimated via numerical simulations (Fig. 1e).
The same tunable laser was then amplified and used to pump the resonant modes in order to investigate and map the second-order nonlinear response (Fig. 2a). Owing to the high field enhancement provided by resonant modes at the SH frequency, the CE is maximized whenever a nearly doubly-resonant condition is met, that is, as the pump and SH frequencies are both tuned closely to a resonance [25]. When this condition is satisfied and the circulating pump intensity is high enough, the coherent photogalvanic effect is triggered: a static electric field is established inside the waveguide due to the displacement of heavy charges, resulting in the breaking of the centro-symmetry condition, and in the consequent establishment of a permanently photoinduced \(\chi^{(2)}\) response [41; 42]. Moreover, the local sign and amplitude of the \(\chi^{(2)}\) is such that the QPM condition is automatically fulfilled, resulting in the inscription of a \(\chi^{(2)}\) grating inside the waveguide [25; 43]. Such all-optical poling (AOP) phenomenon manifests in the sudden increase of the generated SH signal, which reaches its equilibrium state in the millisecond timescale, as soon as the appropriate pump detuning and power conditions are met. Since the occurrence of a doubly-resonant condition is strongly dependent on the fabrication tolerance, we implemented a technique to map the AOP-SHG configurations displaying the highest CE as a function of the sample temperature and pump wavelength. The results
are shown in Figs. 2b-c, where two different samples - targeting respectively the C and L bands - were measured by slowly scanning the pump laser (scan speed: 50 pm/s) at varying values of the sample temperature. Such two-dimensional maps reveal the presence of families of doubly-resonant configurations, that can be visually identified as linear patterns in the generated SH plots (highlighted by the white dashed lines). The slope of such patterns depends on both the FSR difference between the two modes involved and on their thermopotic coefficient [44], while their horizontal spacing depends only on the FSR difference (see Methods). From comparison between such estimated slopes and the calculated group index ad different wavelength, we are able to retrieve the resonant modes involved, in this case the pair FH-SH1. Notably, the presence of hotspots associated with particularly high generated power - exceeding 20 mW - was observed, which we attribute to fluctuations in the resonances Q factor and coupling conditions. These hotspots - some of which, falling within the DFBs tunability range, we labeled from A to C for illustrating purpose - represent the optimal operating points for injection-locked SHG, and we therefore investigated further their properties. To confirm the validity of our picture, we applied our mapping technique in a narrow range after optical poling with significantly lower input power (14 mW) and higher scan speed (1 nm/s), in order not to alter the properties of the inscribed grating. The result at hotspot A, shown in Fig. 2d, confirms the existence of an optimal combination of parameters in the temperature/wavelength space. Furthermore, remarkably, it points to a significant increase of the CE compared to the high-power case, which we registered to be as high as 250%/W. This increase in the CE at low power is due to the absence of parasitic effects such as pump depletion and the generation of free-carriers associated to the nonlinear photoconductivity [42]. A similar value of generated power and CE can be assessed for several of the hotspots identified, both in the C and L bands, that were also observed to preserve a high CE up to several tens of milliwatts of pump power. This was confirmed by power scaling measurements, illustrated for hotspot B in Fig. 2e, which display a nearly quadratic trend as a function of the input pump power up to about 50 mW. Finally, to confirm the QPM nature of SHG, we performed two-photon imaging of the \(\chi^{(2)}\) grating [43] after poling the sample at hotspot C. The result, shown in Fig.2f reveals a periodic pattern, with a period of approximately \(2.47\,\mathrm{\SIUnitSymbolMicro m}\). From comparison with the simulated values of the effective index, we infer a QPM condition between the FH and SH1 mode, in excellent agreement with our deductions drawn from the linear pattern observed in Fig. 2c.
From the measurements shown in Fig. 2, we identified several operating points compatible with the tunability bandwidth of our DFBs, that could be probed for injection-locked SHG. The tunable laser was thus replaced by the DFB diode, realizing the SIL-SHG source
Figure 1: **Self-injection-locked second-harmonic source.****a.** Schematic of the SIL mechanism. The DFB laser inject light at the FH wavelength (solid red arrow) to the ring resonator bus waveguide. A small fraction of the light circulating inside the ring is reflected by Rayleigh backscattering (dashed arrows) and injected to the DFB cavity, yielding a dramatic narrowing of the emission linewidth compared to the free-running regime (lower panel). Such high-coherence laser field displays a high intracavity intensity, which is used to trigger the coherent photogalvanic effect and generate SH light. **b.** Schematic of the experimental device. The DFB laser (detail in inset) is mounted and wire-bonded on a temperature stabilized control board, whose position is finely tuned by micro-actuators. The temperature of the Si\({}_{3}\)N\({}_{4}\) chip is independently controlled to tune the resonant conditions. Light is collected at the output using a collimation lens or a lensed fiber (not shown here). The Si\({}_{3}\)N\({}_{4}\) chip size is \(5\times 5\) mm\({}^{2}\). The DFB chip length is \(400\,\mathrm{\SIUnitSymbolMicro m}\). **c.** Transmission spectrum at the the FH wavelength. **d.** Detail of the resonance used for SIL-SHG. The inset shows a SEM cross-section of one of the fabricated waveguides. **e.** Finite element simulation of the transverse TE modes involved at the fundamental and SH frequencies. The horizontal axis refers to the pump wavelength.
as described above (Fig. 3a). By fixing the DFB temperature and sweeping the driving current, we were able to observe several SIL events, marked by strongly asymmetric dips in the transmitted spectrum (Figs. 3b), owing to the locking of the lasing frequency to the bottom of the resonance dip. The width of such dips reflects the locking bandwidth, which depends on both the magnitude of the back-reflected signal and on its phase, two quantities that are controlled by finely adjusting the position of the DFB with respect to the chip facet using a piezoelectric positioner. By retrieving the DFB spectrum as a function of the driving current through optical heterodyne measurements (Fig. 3c), we estimated a locking bandwidth in the GHz range, visualized by stark deviation in the otherwise linear frequency shifting trend, and characterized by a pronounced hysteresis [28] between the increasing current (decreasing frequency) and the decreasing current (increasing frequency) scans. When the doubly-resonant condition is fulfilled in correspondence of a SIL event, the AOP effect is triggered, resulting in the emission of light at the SH frequency. The phenomenon was investigated for several doubly-resonant configurations (hotspots) shown in Figs. 2b-c and the result was found consistently repeatable both for pumping in the C and L bands, with a maximum emission power as high as 2.3 mW in the bus waveguide, and a peak CE as high as 280%/W. Remarkably, the former value is comparable, if not greater, to the best results obtained in for injection-locking in LNOI technology [45]. This result highlights the high potential of silicon nitride
Figure 2: **All-optical poling and second harmonic generation.****a.** Ring resonator all-optical poling using an external tunable laser. Light is in-coupled using a lensed fiber. The generated SH light is visible to the camera sensor showing the intense circulating power inside the resonator. **b-c.** Two dimensional maps of the output FH and SH obtained by scanning the pump power at different temperatures of the sample. Panel b shows the results for a Si\({}_{3}\)N\({}_{4}\) sample designed to operate in the C band, while panel c corresponds to a different sample operated in the L band. The approximate tunability region of the DFB is shaded in white. The dashed lines are guides to the eye highlighting the variation of the doubly-resonant condition for the corresponding families of modes. **d.** Map of the CE as a function of the pump wavelength and sample temperature, highlighting the best detuning condition. **e.** Scaling trend of the generated peak SH power as a function of the input pump level. **f.** Two-photon microscope image of the inscribed \(\chi^{(2)}\) grating.
for the engineering of second-order nonlinear processes, despite relying on a photoinduced, rather than intrinsic, nonlinearity. By fixing the current in correspondence of a SIL-SHG event, a constant CW emission is observed (Fig. 3a). When visualized on an optical spectrum analyzer (3d), such emission shows a monochromatic spectrum, characterized by strong side-mode suppression ratio (SMSR) exceeding 60 dB, limited by the sensitivity of our instrument.
Finally, we investigated the coherence properties of our dual-wavelength source. A first analysis of the emission linewidth was performed at the fundamental wavelength by optical heterodyne with a reference tunable laser (Fig. 4a). A significant narrowing can be immediately appreciated when passing from the free-running to the injection-locked regime, with a decrease in linewidth from \(\delta\nu\approx 1\,\mathrm{MHz}\) in the former case (3 dB bandwidth from Voigt fit), to \(\delta\nu<50\,\mathrm{kHz}\) when locked, limited by the noise of the reference laser used as a local oscillator. To get more insight on the emitted light properties, we implemented a frequency discriminator apparatus to assess the frequency noise of the emitted light [46; 47], as shown in Fig. 4b. The retrieved power spectral density (PSD) is shown in Fig. 4c, where a dramatic reduction in both the technical (\(f^{-1}\)) and white noise, exceeding 37 dB, is observed, with a noise floor lower plateau as small as \(S_{\nu}^{0}=18\,\mathrm{Hz}^{2}\,\mathrm{Hz}^{-1}\), corresponding to an intrinsic (Schaow-Townes) linewidth \(\delta\nu_{\mathrm{ST}}=\pi S_{\nu}^{0}=57\,\mathrm{Hz}\). As a comparison, the SIL source outperforms the commercial tunable laser used for characterization in terms of intrinsic linewidth.
## Discussion
The device developed here represents a novel approach to the engineering of on-chip SH sources, whereas the resonant element not only enhances the CE, but first and foremost improves the coherence properties of the FH and SH fields through the SIL mechanism. While this effect has been widely investigated recently for perspective application to other nonlinear processes [29; 30; 31; 32; 35], the results presented here represent one of the first demonstrations of injection-locked SHG on a chip.
Only a single similar result has been reported so far, to the best of our knowledge, by Ling and co-workers [45].
Figure 3: **Injection-locked second-harmonic generation.****a.** The SIL-SHG source in operation. The generated SH is visible as scattering from the silicon nitride ring and at the chip output. **b.** Current scans of the DFB laser targeting several operating points (hotspots) among the two samples studied. The SIL events display are identified as strongly asymmetric dips characterized by hysteresis, whose width and depth depends on the amplitude and phase of the backscattered signal by the microresonator, as well as the dip visibility. SIL-SHG events are marked by spikes in the generated SH power, that reaches a peak value of 2.3 mW. All powers are rated at the output of the bus waveguide. **c.** Optical heterodyne spectrum of the output FH as a function of the driving current. In the proximity of the resonance, the emission frequency deviates from the otherwise linear shifting trends, and the linewidth is narrowed significantly. Once the current if further increased (or decreased), the trend recovers its linear character. The width of the gap in the frequency range spanned by the laser emission represents the locking bandwidth. **d.** Device emission at the FH and SH wavelength recorded by an optical spectrum analyzer (resolution: 20 pm), showing the absence of side modes within the instrument’s dynamic range (65 dB).
In that work, the authors exploit LNOI technology to realize a SIL-SHG source similar to the one presented here, by leveraging the high intrinsic \(\chi^{(2)}\) and electric field poling of lithium niobate. Despite promising results, their device still suffers from some limitations inherent to the LNOI platform, most notably i) the need for electrodes used to inscribe a QPM grating, that limits the operation to a fixed design wavelength, and ii) a relatively low quality factor (\(Q\approx 4\times 10^{5}\)), that sets the best narrowing performance reported to an estimated intrinsic linewidth of 4.7 kHz at the SH. In contrast, our device displays wide tunability across the whole telecom spectrum, only requiring control over the pump laser wavelength and on the sample temperature. The AOP mechanism allows indeed to erase and re-write the QPM grating by solely changing these two parameters, as long as a doubly-resonant condition is satisfied [25]. As a result, the same microresonator can be dynamically reconfigured in order to match a different pump wavelength and/or family of modes, with ample choice provided by the abundance of doubly-resonant configurations, thus eliminating the need for poling electrodes and enhancing the flexibility of the final device. Our solution also excels in terms of coherence, as it displays an intrinsic linewidth almost reaching the hertz-level, owing to the high Q of the resonators used. It is worth stressing that such short-term linewidth is mapped on the generated SH field, with a predicted intrinsic SH linewidth as small as 228 Hz (note that a 4-fold increase in the frequency noise is expected as a result of frequency doubling), thus implying the mutual coherence between the output fields at the FH and SH wavelengths. Our device also performs well in terms of generated power, being capable to reach and exceed a milliwatt-level SH output (up to 2.3 mW) with a pump power of about 33 mW, corresponding to a net (i.e. non-normalized) conversion efficiency \(\eta=P_{\text{SH}}/P_{\text{FH}}\approx 7\%\) at continuous-wave regime, and consistently displaying a normalized CE exceeding 100 %/W across all the hotspots tested, with a peak value recorded as high as 280 %/W. This result is particularly remarkable given the relatively low value of the photoinduced nonlinearity in silicon nitride - up to \(\chi^{(2)}\approx 0.3\) pm/V [21], compared to \(\chi^{(2)}\approx 54\) pm/V in the case of LNOI [7] - and highlights the maturity of the silicon nitride photonics platform, as well as its suitability for applications in nonlinear optics. Our device performance proves superior also compared with existing single-wavelength SIL sources in the visible and near-infrared ranges [34; 48], with an order-of-magnitude narrower intrinsic linewidth and significantly higher SMSR. This advantage can be attributed to the more difficult realization of the laser and microresonator components, which requires an obvious increase in the fabrication accuracy to efficiently operate at a shorter wavelength.
In the perspective of commercial applications, our proof-of-concept realization could be further improved, both in terms of device engineering and figures of merit. In particular, the output power can be significantly increased by engineering an optimal coupling between the DFB facet and the bus waveguide, for example through the use of optimized adiabatic mode converters. Ultimately, one could also foresee a full heterogeneous integration, which has been shown to be within reach of state-of-art fabrication technology [26], thus enabling a wafer-scale integration of this type of sources. Finally, further improvements in the microresonator Q factor may enable increased conversion efficiencies and an even lower laser linewidths, potentially unlocking access to hertz-level dual-wavelength coherence on a photonic chip. This result has indeed already proven possible in single
Figure 4: **Emission linewidth.****a.** Heterodyne spectrum of the emission linewidth at FH in the free running (red dots) and SIL (orange line) regimes. The linewidth of the FH emission in the SIL regime is estimated to be well-below the resolution bandwidth (RBW) of the instrument. **b.** Schematic of the frequency discriminator setup used for the frequency noise measurements. Light from the SIL-SHG source is collected at the output of the chip, and the de-multiplexed FH is routed to an unbalanced Mach-Zehnder interferometer. On one arm, the phase is regulated using a fiber phase shifter. The output signal is detected at one of the outputs with a fast photodiode and visualized on an electrical spectrum analyzer. **c.** Frequency noise spectra measured by the frequency discriminator. The technical noise pattern, scaling as \(\nu^{-1}\) and associated with a Gaussian contribution to the broadening of the emission line, is marked by a dashed grey line. The white noise plateaus, associated with the Lorentzian contribution to the broadening of the emission line, are highlighted by dotted grey lines. From the value of such plateaus we estimated a narrowing of the intrinsic linewidth of 37dB, close to the limit of sensitivity of our technique (6 Hz Hz\({}^{-1}\)).
wavelength SIL sources [33], whereas the use of very long ring resonators in a folded spiral geometry also showed promising advantage in reducing the thermo-refractive contribution to technical noise, the latter being effectively averaged over the whole device length. However, this approach may not be suitable for the purpose of frequency doubling, as the use of low-confinement waveguides increases the transverse mode area, ultimately weakening the nonlinear interaction. In this perspective, the use of high-confinement waveguides based on thick silicon nitride layers [40] is more advantageous, as it maximizes the nonlinear interaction. Moreover, the use of long resonators reduces the field enhancement, ultimately setting a trade-off between coherence and conversion efficiency. Finally, the combination of SIL and AOP can be potentially extended to further processes, such as the cascaded sum-frequency generation of the optical third-harmonic [42; 49]. Not least, one could foresee, through fine-tuned dispersion engineering, to employ the same microring resonator for the generation of a self-starting soliton microcomb [35], whose frequency doubling could potentially allow access to \(f-2f\) interferometry on-chip [50]. Despite bearing high technical difficulties, this target would come with great benefit, allowing to realize a self-referenced microcomb on a single integrated photonic chip. Such achievement would unravel the potential of optical atomic clocks in a fully integrated chip-scale photonic device, and potentially be exploited to bring hertz-level coherence over the whole near-infrared spectrum and beyond.
In conclusion, we have demonstrated a chip-scale dual-wavelength source based on the injection-locking of a DFB laser to a high-Q Si\({}_{3}\)N\({}_{4}\) microresonator. The device displays a near-hertz intrinsic linewidth of 57 Hz, milliwatt-level SH output power and high side-modes suppression exceeding 60 dB, over a locking bandwidth of several gigahertz. By exploiting an all-optical poling technique, our system can be reconfigured to operate across the whole C and L telecom bands by solely tuning the sample temperature and pump wavelength. Our findings confirm the suitability of silicon nitride photonics for the integration of highly-efficient second-order nonlinear processes, and open a pathway towards the realization of novel chip-scale devices such as miniaturized atomic clocks and fully integrated self-referenced microcombs.
_Note_. Preliminary results about this work have been presented by the authors in the form of a conference proceeding [51]. During the preparation of this manuscript, a similar example of SIL-SHG in silicon nitride has been reported online [52] in the form of a preprint.
**Methods**
**Device information.** The Si\({}_{3}\)N\({}_{4}\) microresonator we used in the experiment was fabricated by the photonic Damascene process that features ultralow loss operation [40]. It is a ring structure (radius \(R=896~{}\mu\)m) coupled with a bus waveguide, both buried in SiO\({}_{2}\) cladding. The waveguide nominal cross-section (width \(\times\) height) is \(2000\times 550~{}\)nm\({}^{2}\), which support multiple spatial modes at the SH wavelength. A scanning electron microscope image of the cleaved sample reveals a fabricated cross-section of \(2150\times 572~{}\)nm\({}^{2}\), which was used for simulation of the resonant modes. The microresonator is characterized to exhibit normal dispersion at the pump wavelength.
**AOP mapping**. The formula used to calculate the slope of the doubly-resonant trends in Fig. 2b-c is [44]:
\[\frac{dT}{d\lambda_{\mathrm{p}}}\approx-\frac{\Delta\nu_{\mathrm{FH}}-\Delta \nu_{\mathrm{SH}}}{\Delta\nu_{\mathrm{FH}}\left(\frac{d\nu_{\mathrm{SH}}}{dT} -2\frac{d\lambda_{\mathrm{SH}}}{dT}\right)} \tag{1}\]
where \(T\) is the sample temperature, \(\lambda_{\mathrm{p}}\) is the pump wavelength, \(\Delta\nu_{\mathrm{FH(SH)}}\) is the FSR at the FH (SH) wavelength expressed in hertz and \(d\lambda_{\mathrm{FH(SH)}}/dT\) is the thermo-optic coefficient at the FH (SH) wavelength. The approximation is valid as long as the ratio between the two FSRs is close to 1. The horizontal spacing between similar trends is calculated as:
\[\Delta\lambda_{\mathrm{spacing}}\approx\frac{\lambda_{\mathrm{p}}^{2}\Delta \nu_{\mathrm{FH}}^{2}}{2c|\Delta\nu_{\mathrm{FH}}-\Delta\nu_{\mathrm{SH}}|} \tag{2}\]
where \(c\) is the speed of light.
**TPM imaging**. For characterization of the inscribed \(\chi^{(2)}\) gratings, a high power femtosecond Ti:Sapphire laser is focused at the grating plane of the microresonator in an upright configuration. The focal spot is then raster-scanned across the plane while, in the meantime, its generated SH signal is monitored so that the \(\chi^{(2)}\) response is probed. From the periodicity retrieved, the original phase mismatch between the modes involved is inferred.
**Optical heterodyne**. To obtain the data shown in Figs.3c and 4a, an external cavity laser, serving as a local oscillator (LO) is tuned at frequency close to the emission under test. The two fields are mixed at a 50:50 fiber beam-splitter and routed to a fast photodiode, in order to retrieve the optical beat-note. The resulting electrical signal is visualized on an electrical spectrum analyzer, retrieving the narrowband spectrum of the emitted light. The resolution of the technique is approximately 50 kHz, and it is limited by the resolution bandwidth of the spectrum analyzer and by the finite linewidth of the LO laser.
**Acknowledgements**
This work was funded by ERC grant PISSARRO (ERC-2017-CoG 771647).
**Competing interests**
The authors declare no competing interests.
**Data availability**
The data and code that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. |
2309.10810 | PGDiff: Guiding Diffusion Models for Versatile Face Restoration via
Partial Guidance | Exploiting pre-trained diffusion models for restoration has recently become a
favored alternative to the traditional task-specific training approach.
Previous works have achieved noteworthy success by limiting the solution space
using explicit degradation models. However, these methods often fall short when
faced with complex degradations as they generally cannot be precisely modeled.
In this paper, we propose PGDiff by introducing partial guidance, a fresh
perspective that is more adaptable to real-world degradations compared to
existing works. Rather than specifically defining the degradation process, our
approach models the desired properties, such as image structure and color
statistics of high-quality images, and applies this guidance during the reverse
diffusion process. These properties are readily available and make no
assumptions about the degradation process. When combined with a diffusion
prior, this partial guidance can deliver appealing results across a range of
restoration tasks. Additionally, PGDiff can be extended to handle composite
tasks by consolidating multiple high-quality image properties, achieved by
integrating the guidance from respective tasks. Experimental results
demonstrate that our method not only outperforms existing diffusion-prior-based
approaches but also competes favorably with task-specific models. | Peiqing Yang, Shangchen Zhou, Qingyi Tao, Chen Change Loy | 2023-09-19T17:51:33Z | http://arxiv.org/abs/2309.10810v1 | # PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance
###### Abstract
Exploiting pre-trained diffusion models for restoration has recently become a favored alternative to the traditional task-specific training approach. Previous works have achieved noteworthy success by limiting the solution space using explicit degradation models. However, these methods often fall short when faced with complex degradations as they generally cannot be precisely modeled. In this paper, we propose _PGDiff_ by introducing _partial guidance_, a fresh perspective that is more adaptable to real-world degradations compared to existing works. Rather than specifically defining the degradation process, our approach models the desired properties, such as image structure and color statistics of high-quality images, and applies this guidance during the reverse diffusion process. These properties are readily available and make no assumptions about the degradation process. When combined with a diffusion prior, this partial guidance can deliver appealing results across a range of restoration tasks. Additionally, _PGDiff_ can be extended to handle composite tasks by consolidating multiple high-quality image properties, achieved by integrating the guidance from respective tasks. Experimental results demonstrate that our method not only outperforms existing diffusion-prior-based approaches but also competes favorably with task-specific models.
## 1 Introduction
Recent years have seen diffusion models achieve outstanding results in synthesizing realistic details across various content [33; 29; 7; 15; 34]. The rich generative prior inherent in these models opens up a vast array of possibilities for tasks like super-resolution, inpainting, and colorization. Consequently, there has been a growing interest in formulating efficient guidance strategies for pre-trained diffusion models, enabling their successful adaptation to various restoration tasks [9; 42; 20; 37].
A common approach [9; 42; 20] is to constrain the solution space of intermediate outputs during the denoising process1. At each iteration, the intermediate output is modified such that its degraded counterpart is guided towards the input low-quality (LQ) image. Existing works achieve this goal either by using a closed-form solution [42; 20] or back-propagating simple losses [9]. These methods are versatile in the sense that the pre-trained diffusion model can be adapted to various tasks without fine-tuning, as long as the degradation process is known in advance.
Footnote 1: It refers to the reverse diffusion process, not the image denoising task.
While possessing great versatility, the aforementioned methods are inevitably limited in generalizability due to the need for prior knowledge of the degradation process. In particular, a closed-form solution generally does not exist except for special cases such as linear operators. In addition, back-propagating losses demand differentiability of the degradation process, which is violated for many degradations such as JPEG compression. Importantly, degradations in the wild often consist
of a mixture of degradations [41], and hence, it is difficult, if not impossible, to model them accurately. As a result, existing works generally limit the scope to simplified cases, such as fixed-kernel downsampling. The generalization to real-world degradations remains a formidable challenge.
Motivated by the above, instead of modeling the degradation process, we propose to model the _desired properties_ of high-quality (HQ) images. The merit of such guidance is the agnosticity to the degradation process. However, it remains unclear what properties are desired and how appropriate guidance can be constructed. Through our extensive experiments, we find that with diffusion prior acting as a natural image regularization, one could simply guide the denoising process with easily accessible properties, such as image structure and color statistics. For example, as shown in Fig. 1, one could generate plausible outputs simply by providing guidance on the lightness and the statistics (_i.e._, mean and variance) of each color channel, without knowing the exact decolorization process. By constraining the HQ image space, our idea bypasses the difficulty of knowing the prior relation between LQ and HQ images, thus improving generalizability.
In this work, we devise a simple yet effective instantiation named _PGDiff_ by introducing _partial guidance_. PGDiff adopts classifier guidance [7] to constrain the denoising process. Each image property corresponds to a classifier, and the intermediate outputs are updated by back-propagating the gradient computed on the loss between the classifier output and the target property. Since our partial guidance is agnostic to the degradation process, it can be easily extended to complex tasks by compositing multiple properties. For instance, the task of old photo restoration can be regarded as a combination of restoration, inpainting, and colorization, and the resultant guidance is represented as a weighted sum of the guidance in the respective task. We also demonstrate that common losses such as perceptual loss [2; 16] and adversarial loss [22] can be incorporated for further performance gain.
**Contributions.** Our main contributions include **i)** a new concept of adapting diffusion models to restoration without presumptions of the degradation process. We show that it suffices to guide the denoising process with _easily accessible properties_ in the HQ image space, with diffusion prior acting as regularization, and **ii)**_partial guidance_, a versatile approach that is applicable to a broad range of image restoration and enhancement tasks. Furthermore, it allows flexible combinations of guidance for intricate tasks. We conduct extensive experiments to demonstrate the effectiveness of PGDiff on a variety of challenging tasks including blind face restoration and old photo restoration. We also demonstrate interesting applications, such as reference-based restoration. The results confirm the superiority of PGDiff over previous state-of-the-art methods.
## 2 Related Work
**Generative Prior for Restoration.** Generative prior has been widely adopted for a range of image restoration tasks, including super-resolution, inpainting, and colorization. One prominent approach in
Figure 1: **Overview of Our PGDiff Framework for Versatile Face Restoration**. Here, we take the colorization task as an example to illustrate our inference pipeline. One may refer to Table 1 for the corresponding details (_e.g._, property, classifier, and target) of other tasks. We show that our method can handle a wide range of tasks, including (a) blind face restoration, (b) face colorization, (c) face inpainting, and also composite tasks such as (d) old photo restoration.
this field is the use of pre-trained generative adversarial networks (GANs) [10; 18; 1]. For instance, GAN-inversion [26; 11; 28] inverts a corrupted image to a latent code, which is then used for generating a clean image. Another direction is to incorporate the prior into an encoder-decoder architecture [3; 4; 40; 44], bypassing the lengthy optimization during inference. VQVAE [35] is also commonly used as generative prior. Existing works [48; 13; 43; 47] generally first train a VQVAE with a reconstruction objective, followed by a fine-tuning stage to adapt to the subsequent restoration task. Recently, diffusion models have gained increasing attention due to their unprecedented performance in various generation tasks [33; 29; 7; 15; 34], and such attention has led to interest in leveraging them as a prior for restoration.
**Diffusion Prior.** There has been a growing interest in formulating efficient guidance strategies for pre-trained diffusion models, enabling their successful adaptation to various restoration tasks [9; 42; 20; 37; 39]. Among them, DDRM [20], DDNM [42], and GDP [9] adopt a zero-shot approach to adapt a pre-trained diffusion model for restoration without the need of task-specific training. At each iteration, the intermediate output is modified such that its degraded counterpart is guided towards the input low-quality image. This is achieved under an assumed degradation process, either in the form of a fixed linear matrix [42; 20] or a parameterized degradation model [9], with learnable parameters representing degradation extents. In this work, we also exploit the generative prior of a pre-trained diffusion model by formulating efficient guidance for it, but unlike existing works that limit the solution space using explicit degradations [9; 42; 20], we propose to model the desired properties of high-quality images. Such design is agnostic to the degradation process, circumventing the difficulty of modeling the degradation process.
## 3 Methodology
PGDiff is based on diffusion models. In this section, we first introduce the background related to our method in Sec. 3.1, and the details of our method are presented in Sec. 3.2.
### Preliminary
**Diffusion Models.** The diffusion model [33] is a class of generative models that learn to model a data distribution \(p(x)\). In particular, the forward process is a process that iteratively adds Gaussian noise to an input \(x_{0}\sim p(x)\), and the reverse process progressively converts the data from the noise distribution back to the data distribution, often known as the denoising process.
For an unconditional diffusion model with \(T\) discrete steps, at each step \(t\), there exists a transition distribution \(q(x_{t}|x_{t-1})\) with variance schedule \(\beta_{t}\)[15]:
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}\,x_{t-1},\beta_{t}\mathbf{ I}). \tag{1}\]
Under the reparameterization trick, \(x_{t}\) can be written as:
\[x_{t}=\sqrt{\alpha_{t}}\,x_{t-1}+\sqrt{1-\alpha_{t}}\,\epsilon, \tag{2}\]
where \(\alpha_{t}=1-\beta_{t}\) and \(\epsilon\sim\mathcal{N}(\epsilon;\mathbf{0},\mathbf{I})\). Recursively, let \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\), we have
\[x_{t}=\sqrt{\bar{\alpha}_{t}}\,x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon. \tag{3}\]
During sampling, the process starts with a pure Gaussian noise \(x_{T}\sim\mathcal{N}(x_{T};\mathbf{0},\mathbf{I})\) and iteratively performs the denoising step. In practice, the ground-truth denoising step is approximated [7] by \(p_{\theta}(x_{t-1}|x_{t})\) as:
\[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_ {t},t)), \tag{4}\]
where \(\Sigma_{\theta}(x_{t},t)\) is a constant depending on pre-defined \(\beta_{t}\), and \(\mu_{\theta}(x_{t},t)\) is generally parameterized by a network \(\epsilon_{\theta}(x_{t},t)\):
\[\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha}_{t}}(x_{t}-\frac{\beta_{t}}{\sqrt {1-\bar{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t)). \tag{5}\]
From Eq. (3), one can also directly approximate \(x_{0}\) from \(\epsilon_{\theta}\):
\[\hat{x}_{0}=\frac{1}{\sqrt{\alpha}_{t}}x_{t}-\sqrt{\frac{1-\bar{\alpha}_{t}}{ \bar{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t). \tag{6}\]
**Classifier Guidance.** Classifier guidance is used to guide an unconditional diffusion model so that conditional generation is achieved. Let \(y\) be the target and \(p_{\phi}(y|x)\) be a classifier, the conditional distribution is approximated as a Gaussian similar to the unconditional counterpart, but with the mean shifted by \(\Sigma_{\theta}(x_{t},t)g\)[7]:
\[p_{\theta,\phi}(x_{t-1}|x_{t},y)\approx\mathcal{N}(\mu_{\theta}(x_{t},t)+ \Sigma_{\theta}(x_{t},t)g,\Sigma_{\theta}(x_{t},t)), \tag{7}\]
where \(g=\nabla_{x}\log\ p_{\phi}(y|x)|_{x=\mu_{\theta}(x_{t},t)}\). The gradient \(g\) acts as a guidance that leads the unconditional sampling distribution towards the condition target \(y\).
### Partial Guidance
Our _partial guidance_ does not assume any prior knowledge of the degradation process. Instead, with diffusion prior acting as a regularization, we provide guidance only on the desired properties of high-quality images. The key to PGDiff is to construct proper guidance for each task. In this section, we will discuss the overall framework, and the formulation of the guidance for each task is presented in Sec. 4. The overview is summarized in Fig. 1 and Algorithm 1.
**Property and Classifier.** The first step of PGDiff is to determine the desired properties which the high-quality output possesses. As summarized in Table 1, each image property corresponds to a classifier \(p_{\phi}(y|\hat{x}_{0})\), and the intermediate outputs \(x_{t}\) are updated by back-propagating the gradient computed on the loss between the classifier output and the target \(y\).
Given a specific property (_e.g._, lightness), we construct the corresponding classifier (_e.g._,rgb2gray), and apply classifier guidance during the reverse diffusion process as shown in Fig. 1. Although our
\begin{table}
\begin{tabular}{c|c c c} \hline & **Task** & **Property** & **Target: \(y\)** & **Classifier: \(p_{\phi}(y|\hat{x}_{0})\)** \\ \hline \hline \multirow{4}{*}{**Homogeneous**} & Inpainting & Unmasked Region & Mask(\(y_{0}\)) & Mask \\ & Colorization & Lightness & rgb2gray(\(y_{0}\)) & rgb2gray \\ & & Color Statistics & AdIN(\(\hat{x}_{0}\))[18] & Identity \\ \cline{1-1} \cline{2-4} & Restoration & Smooth Semantics & Clean(\(y_{0}\)) & Identity \\ \cline{1-1} \cline{2-4} & Ref-Based Restoration & Smooth Semantics & Clean(\(y_{0}\)) & Identity \\ & & Identity Reference & ArcFace(\(y_{ref}\)) & ArcFace [6] \\ \hline \hline \multirow{2}{*}{**Composite**} & **Task** & **Composition** & \\ \hline \hline
**Composite** & Old Photo Restoration & & \\
**Task** & (w/ scratches) & Restoration + Inpainting + Colorization & \\ \hline \end{tabular}
\end{table}
Table 1: **Examples of Partial Guidance. Each image property corresponds to a classifier, and each task involves one or multiple properties as guidance. The target value of each property is generally obtained either from the input image \(y_{0}\) or the denoised intermediate output \(\hat{x}_{0}\). For a composite task, we simply decompose it into multiple tasks and combine the respective guidance. Here, Clean denotes a pre-trained restorer detailed in Section 4.1, Identity refers to an identity mapping, and \(y_{ref}\) represents a reference image containing entity with the same identity as \(y_{0}\).**
PGDiff is conceptually similar to classifier guidance, we find that the conventional guidance scheme often leads to suboptimal performance. In this work, we borrow ideas from existing works [37; 5] and adopt a _dynamic guidance scheme_, which introduces adjustments to the guidance weight and number of gradient steps for enhanced quality and controllability.
**Dynamic Guidance Scheme.** Our dynamic guidance scheme consists of two components. First, we observe that the conventional classifier guidance, which adopts a constant gradient scale \(s\), often fails in guiding the output towards the target value. This is especially unfavourable in tasks where high similarity to the target is desired, such as inpainting and colorization. To alleviate this problem, we calculate the gradient scale based on the magnitude change of the intermediate image [37]:
\[s_{norm}=\frac{\left\|x_{t}-x_{t-1}^{\prime}\right\|_{2}}{\left\|g\right\|_{2} }\cdot s, \tag{8}\]
where \(x_{t-1}^{\prime}\sim\mathcal{N}(\mu_{\theta},\Sigma_{\theta})\). In this way, the dynamic guidance weight \(s_{norm}\) varies along iterations, more effectively guiding the output towards the target, thus improving the output quality.
Second, the conventional classifier guidance typically executes a single gradient step at each denoising step. However, a single gradient step may not sufficiently steer the output toward the intended target, particularly when the intermediate outputs are laden with noise in the early phases of the denoising process. To address this, we allow multiple gradient steps at each denoising step [5] to improve flexibility. Specifically, one can improve the guidance strength of a specific property by increasing the number of gradient steps. The process degenerates to the conventional classifier guidance when the number of gradient steps is set to \(1\). During inference, users have the flexibility to modulate the strength of guidance for each property as per their requirements, thus boosting overall controllability.
**Composite Guidance.** Our partial guidance controls only the properties of high-quality outputs, and therefore can be easily extended to complex degradations by stacking respective properties. This is achieved by compositing the classifiers and summing the loss corresponding to each property. An example of composite tasks is shown in Table 1. In addition, we also demonstrate that additional losses such as perceptual loss [2; 16] and adversarial loss [22] can be incorporated for further quality improvement. Experiments demonstrate that our PGDiff achieves better performance than existing works in complex tasks, where accurate modeling of the degradation process is impossible.
## 4 Applications
By exploiting the diffusion prior, our PGDiff applies to a wide range of restoration tasks by selecting appropriate guidance. In this section, we will introduce the guidance formulation and provide experimental results.
Figure 2: **Comparison on Blind Face Restoration. Input faces are corrupted by real-world degradations. Our PGDiff produces high-quality faces with faithful details. (**Zoom in for best view.**)
### Blind Face Restoration
**Partial Guidance Formulation.** The objective of blind face restoration is to reconstruct a high-quality face image given a low-quality input corrupted by unknown degradations. In this task, the most straightforward approach is to train a network with the MSE loss using synthetic pairs. However, while these methods are able to remove the degradations in the input, it is well-known [26] that the MSE loss alone results in over-smoothed outputs. Therefore, extensive efforts have been devoted to improve the perceptual quality, such as incorporating addition losses (_e.g._, GAN loss) [22; 10; 16; 46; 8] and components (_e.g._, codebook [48; 13; 43; 47; 35] and dictionary [23; 24; 12; 8]). These approaches often require multi-stage training and experience training instability.
In our framework, we decompose a high-quality face image into _smooth semantics_ and _high-frequency details_, and provide guidance solely on the _smooth semantics_. In this way, the output \(\hat{x}_{0}\) in each diffusion step is guided towards a degradation-free solution space, and the diffusion prior is responsible for detail synthesis. Given an input low-quality image \(y_{0}\), we adopt a pre-trained face restoration model \(f\) to predict smooth semantics as partial guidance. Our approach alleviates the training pressure of the previous models by optimizing model \(f\) solely with the MSE loss. This is because our goal is to obtain _smooth semantics_ without hallucinating unnecessary high-frequency details. Nevertheless, one can also provide guidance of various forms by selecting different restorers, such as CodeFormer [48]. The loss for classifier guidance is computed as: \(\mathcal{L}_{res}=||\hat{x}_{0}-f(y_{0})||_{2}^{2}\).
**Experimental Results.** We evaluate the proposed PGDiff on three real-world datasets, namely LFW-Test [40], WebPhoto-Test [40], and WIDER-Test [48]. We compare our method with both task-specific CNN/Transformer-based restoration models [48; 26; 40] and diffusion-prior-based models2[9; 42; 45]. As shown in Fig. 2, existing diffusion-prior-based methods such as GDP [9] and DDNM [42] are unable to generalize to real-world degradations, producing outputs with notable artifacts. In contrast, our PGDiff successfully removes the degradations and restores the facial details invisible in the input images. Moreover, our PGDiff performs favorably over task-specific methods even without extensive training on this task. A quantitative comparison and the technical details of \(f\) are provided in the supplementary material.
Footnote 2: Among them, GDP [9] and DDNM [42] support only 4\(\times\) fixed-kernel downsampling, while DiffFace [45] is a task-specific model for blind face restoration.
### Face Colorization
**Partial Guidance Formulation.** Motivated by color space decomposition (_e.g._, YCbCr, YUV), we decompose our guidance into _lightness_ and _color_, and provide respective guidance on the two aspects. For lightness, the input image acts as a natural target since it is a homogeneous-color image. Specifically, we guide the output lightness towards that of the input using the simple rgb2gray operation. Equivalently, the loss is formulated as follows: \(\mathcal{L}_{l}=||\mathtt{rgb2gray}(\hat{x}_{0})-\mathtt{rgb2gray}(y_{0}) ||_{2}^{2}\). The lightness guidance can also be regarded as a dense structure guidance. This is essential in preserving image content.
With the lightness guidance constraining the structure of the output, we could guide the color synthesis process with a lenient constraint - color statistics (_i.e._, mean and variance of each color channel). In particular, we construct the target by applying AdaIN[18] to \(\hat{x}_{0}\), using a pre-determined set of color statistics for each R, G, B channel. Then we push \(\hat{x}_{0}\) towards the color-normalized output: \(\mathcal{L}_{c}=||\hat{x}_{0}-\mathtt{sg}\left(\mathtt{AdaIN}(\hat{x}_{0}, \mathbb{P})\right)||_{2}^{2}\), where \(\mathbb{P}\) refers to the set of color statistics and \(\mathtt{sg}(\cdot)\) denotes the stop-gradient operation [35]. The overall loss is formulated as: \(\mathcal{L}_{color}=\mathcal{L}_{l}+\alpha\cdot\mathcal{L}_{c}\), where \(\alpha\) is a constant that controls the relative importance of the structure and color guidance. To construct a universal color tone, we compute the average color statistics from a selected subset of the CelebA-HQ dataset [17]. We find that this simple strategy suffices to produce faithful results. Furthermore, our PGDiff can produce outputs with diverse color styles by computing the color statistics from different reference images.
**Experimental Results.** As shown in Fig. 3, GDP [9] and DDNM [42] lack the capability to produce vibrant colors. In contrast, our PGDiff produces colorized outputs simply by modeling the lightness and color statistics. Furthermore, our method is able to generate outputs with diverse color styles by calculating color statistics from various reference sets.
### Face Inpainting
**Partial Guidance Formulation.** Since diffusion models have demonstrated remarkable capability in synthesizing realistic content [33; 29; 7; 15; 34], we apply guidance only on the unmasked regions, and rely on the synthesizing power of diffusion models to generate details in the masked regions. Let \(B\) be a binary mask where \(0\) and \(1\) denote the masked and unmasked regions, respectively. We confine the solution by ensuring that the resulting image closely resembles the input image within the unmasked regions: \(\mathcal{L}_{inpaint}=||B\otimes\hat{x}_{0}-B\otimes y_{0}||_{2}^{2}\), where \(\otimes\) represents the pixel-wise multiplication.
**Experimental Results.** We conduct experiments on CelebRef-HQ [24]. As depicted in Fig. 4, GPEN [44] and GDP [9] are unable to produce natural outputs, whereas CodeFormer [48] and DDNM [42] generate outputs with artifacts, such as color incoherence or visual flaws. In contrast, our PGDiff successfully generates outputs with pleasant details coherent to the unmasked regions.
### Old Photo Restoration
**Partial Guidance Formulation.** Quality degradations (_e.g._, blur, noise, downsampling, and JPEG compression), color homogeneity, and scratches are three commonly seen artifacts in old photos. Therefore, we cast this problem as a joint task of _restoration_, _colorization_, and _inpainting3_. Similar to face colorization that composites the loss for each property, we composite the respective loss in each task, and the overall loss is written as: \(\mathcal{L}_{old}=\mathcal{L}_{res}+\gamma_{color}\cdot\mathcal{L}_{color}+ \gamma_{inpaint}\cdot\mathcal{L}_{inpaint}\), where \(\gamma_{inpaint}\) and \(\gamma_{color}\) are constants controlling the relative importance of the different losses.
Footnote 3: We locate the scratches using an automated algorithm [38], and then inpaint the scratched regions.
**Experimental Results.** We compare our PGDiff with BOPB [38], GFP-GAN [40] and DDNM [42]. Among them, BOPB is a model specifically for old photo restoration, GFP-GAN (v1) is able to restore and colorize faces, and DDNM is a diffusion-prior-based method that also claims to restore old photos with scratches. As shown in Fig. 5, BOPB, GFP-GAN, and DDNM all fail to give natural
Figure 4: **Comparison on Face Inpainting on Challenging Cases. Our PGDiff produces natural outputs with pleasant details coherent with the unmasked regions. Moreover, different random seeds give various contents of high quality.**
Figure 3: **Comparison on Face Colorization. Our PGDiff produces diverse colorized output with various color statistics given as guidance. The first column of our results is guided by the average color statistics of a subset of the CelebA-HQ dataset [17], and the guiding statistics for the remaining three columns are represented as an image in the top right corner.**
color in such a composite task. While DDNM is able to complete scratches given a proper scratch map, it fails to give a high-quality face restoration result. On the contrary, PGDiff generates sharp colorized faces without scratches and artifacts.
### Reference-Based Restoration
**Partial Guidance Formulation.** In reference-based restoration, a reference image from the same identity is given to improve the resemblance of personal details in the output image. Most existing works exploiting diffusion prior [9; 42; 20; 45] are not applicable to this task as there is no direct transformation between the reference and the target. In contrast, our partial guidance is extensible to more complex tasks simply by compositing multiple losses. In particular, our PGDiff can incorporate personal identity as a partial attribute of a facial image. By utilizing a reference image and incorporating the identity loss into the partial guidance, our framework can achieve improved personal details. We extract the identity features from the reference image using a pre-trained face recognition network, such as ArcFace [6]. We then include the negative cosine similarity to the loss term \(\mathcal{L}_{res}\) in blind face restoration (Sec. 4.1): \(\mathcal{L}_{ref}=\mathcal{L}_{res}-\beta\cdot\mathtt{sim}(v_{\hat{x}_{0}},v_ {r})\), where \(\beta\) controls the relative weight of the two losses. Here \(\mathtt{sim}(\cdot)\) represents the cosine similarity, and \(v_{\hat{x}_{0}}\) and \(v_{r}\) denote the ArcFace features of the predicted denoised image and the reference, respectively.
**Experimental Results.** We use the CelebRef-HQ dataset [24], which contains \(1,005\) entities and each person has \(3\) to \(21\) high-quality images. To build testing pairs, for each entity, we choose one image and apply heavy degradations as the input, and then we select another image from the same identity as the reference. In Fig. 6, we observe that without the identity loss term \(\mathtt{sim}(v_{\hat{x}_{0}},v_{r})\), some of the personal details such as facial wrinkles and eye color cannot be recovered from the distorted inputs. With the additional identity loss as guidance, such fine details can be restored. In addition, our PGDiff can be used to improve identity preservation of arbitrary face restorers. For instance, as shown in Fig. 7 (a), by using CodeFormer [48] as our restorer and incorporating the identity loss, the fine details that CodeFormer alone cannot restore can now be recovered.
### Quality Enhancement
**Partial Guidance Formulation.** Perceptual loss [16] and adversarial loss [10] are two common training losses used to improve quality. Motivated by this, we are interested in whether such losses can also be used as the guidance for additional quality gain. We demonstrate this possibility in the task of blind face restoration using the following loss: \(\mathcal{L}_{quality}=\mathcal{L}_{res}+\lambda_{per}\cdot||\mathtt{VGG}(\hat{x }_{0})-\mathtt{VGG}(y)||_{2}^{2}+\lambda_{GAN}\cdot D(\hat{x}_{0})\), where \(\lambda_{per}\) and \(\lambda_{GAN}\) are the relative weights. Here \(\mathtt{VGG}\) and \(D\) represent pre-trained VGG16 [32] and the GAN discriminator [19], respectively.
Figure 5: **Comparison on Old Photo Restoration on Challenging Cases. For a severely damaged old photo, with one eye masked with scratch, while only DDNM [42] is able to complete the missing eye, its restoration quality is significantly low. In contrast, our PGDiff produces high-quality restored outputs with natural color and complete faces.**
Figure 6: **Reference-Based Face Restoration. Our PGDiff, using \(\mathcal{L}_{ref}\) with identity loss as guidance, produces personal characteristics that are hard to recover without reference, _i.e._, using \(\mathcal{L}_{res}\) only. (Zoom in for details)**
**Experimental Results.** We demonstrate in Fig. 7 (b) that perceptual loss and adversarial loss can boost the blind restoration performance in terms of higher fidelity with photo-realism details.
## 5 Ablation Studies
In this section, we perform ablation studies on the dynamic guidance scheme mentioned in Sec. 3.2 to verify its effectiveness over the conventional classifier guidance scheme.
**Effectiveness of Dynamic Guidance Weight.** We first investigate the effectiveness of dynamic guidance weight \(s_{norm}\) in the face inpainting task, where the unmasked regions of the output image should be of high similarity to that of the input. As shown in Fig. 8 (a), without the dynamic guidance weight, although plausible content can still be generated in the masked area, the similarity and sharpness of the unmasked regions are remarkably decreased compared with the input. With \(s_{norm}\) replacing the constant \(s\), the output is of high quality with unmasked regions nicely preserved. The results indicate that our dynamic guidance weight is the key to ensuring high similarity to the target during the guidance process.
**Effectiveness of Multiple Gradient Steps.** To verify the effectiveness of multiple gradient steps, we compare the blind restoration results with the number of guidance steps \(N\) set to be \(1\), \(2\), and \(3\). While \(N=1\) is just the conventional classifier guidance, we set \(N=2\) during the first \(0.5T\) steps and set \(N=3\) during the first \(0.3T\) steps. As shown in Fig. 8 (b), artifacts are removed and finer details are generated as \(N\) increases. These results suggest that multiple gradient steps serve to improve the strength of guiding the output toward the intended target, particularly when the intermediate outputs are laden with noise in the early phases of the denoising process.
## 6 Conclusion
The generalizability of existing diffusion-prior-based restoration approaches is limited by their reliance on prior knowledge of the degradation process. This study aims to offer a solution that alleviates this constraint, thereby broadening its applicability to the real-world degradations. We find that through directly modeling high-quality image properties, one can reconstruct faithful outputs without knowing the exact degradation process. We exploit the synthesizing power of diffusion models and provide guidance only on properties that are easily accessible. Our proposed _PGDiff_ with _partial guidance_ is not only effective but is also extensible to composite tasks through aggregating multiple properties. Experiments demonstrate that PGDiff outperforms diffusion-prior-based approaches in both homogeneous and composite tasks and matches the performance of task-specific methods.
Figure 8: **Ablation Study of Dynamic Guidance.** The comparison results on the dynamic guidance scheme verify its effectiveness over the conventional classifier guidance scheme.
Figure 7: (a) Using CodeFormer as the restorer with our identity guidance improves the reconstruction of fine details similar to the ground truth. (b) The comparison results show that the quality enhancement loss is able to enhance fidelity with photo-realism details. |
2303.12084 | Thrill-K Architecture: Towards a Solution to the Problem of Knowledge
Based Understanding | While end-to-end learning systems are rapidly gaining capabilities and
popularity, the increasing computational demands for deploying such systems,
along with a lack of flexibility, adaptability, explainability, reasoning and
verification capabilities, require new types of architectures. Here we
introduce a classification of hybrid systems which, based on an analysis of
human knowledge and intelligence, combines neural learning with various types
of knowledge and knowledge sources. We present the Thrill-K architecture as a
prototypical solution for integrating instantaneous knowledge, standby
knowledge and external knowledge sources in a framework capable of inference,
learning and intelligent control. | Gadi Singer, Joscha Bach, Tetiana Grinberg, Nagib Hakim, Phillip Howard, Vasudev Lal, Zev Rivlin | 2023-02-28T20:39:35Z | http://arxiv.org/abs/2303.12084v1 | # Thrill-K Architecture: Towards a Solution to the Problem of Knowledge Based Understanding
###### Abstract
While end-to-end learning systems are rapidly gaining capabilities and popularity, the increasing computational demands for deploying such systems, along with a lack of flexibility, adaptability, explainability, reasoning and verification capabilities, require new types of architectures. Here we introduce a classification of hybrid systems which, based on an analysis of human knowledge and intelligence, combines neural learning with various types of knowledge and knowledge sources. We present the Thrill-K architecture as a prototypical solution for integrating instantaneous knowledge, standby knowledge and external knowledge sources in a framework capable of inference, learning and intelligent control.
Keywords:Neuro-Symbolic AI, Hybrid Systems, Knowledge Engineering. +
Footnote †: : _This preprint has not undergone any post-submission improvements or corrections. The Version of Record of this contribution is published in: Goertzel, B., Ikle, M., Potapov, A., Ponomaryov, D. (eds) Artificial General Intelligence. AGI 2022. Lecture Notes in Computer Science(), vol 13539. Springer, Cham. [https://doi.org/10.1007/978-3-031-19907-3_39_](https://doi.org/10.1007/978-3-031-19907-3_39_) |
2309.08087 | hear-your-action: human action recognition by ultrasound active sensing | Action recognition is a key technology for many industrial applications.
Methods using visual information such as images are very popular. However,
privacy issues prevent widespread usage due to the inclusion of private
information, such as visible faces and scene backgrounds, which are not
necessary for recognizing user action. In this paper, we propose a
privacy-preserving action recognition by ultrasound active sensing. As action
recognition from ultrasound active sensing in a non-invasive manner is not well
investigated, we create a new dataset for action recognition and conduct a
comparison of features for classification. We calculated feature values by
focusing on the temporal variation of the amplitude of ultrasound reflected
waves and performed classification using a support vector machine and VGG for
eight fundamental action classes. We confirmed that our method achieved an
accuracy of 97.9% when trained and evaluated on the same person and in the same
environment. Additionally, our method achieved an accuracy of 89.5% even when
trained and evaluated on different people. We also report the analyses of
accuracies in various conditions and limitations. | Risako Tanigawa, Yasunori Ishii | 2023-09-15T01:00:55Z | http://arxiv.org/abs/2309.08087v1 | # Hear-Your-Action: Human Action Recognition
###### Abstract
Action recognition is a key technology for many industrial applications. Methods using visual information such as images are very popular. However, privacy issues prevent widespread usage due to the inclusion of private information, such as visible faces and scene backgrounds, which are not necessary for recognizing user action. In this paper, we propose a privacy-preserving action recognition by ultrasound active sensing. As action recognition from ultrasound active sensing in a non-invasive manner is not well investigated, we create a new dataset for action recognition and conduct a comparison of features for classification. We calculated feature values by focusing on the temporal variation of the amplitude of ultrasound reflected waves and performed classification using a support vector machine and VGG for eight fundamental action classes. We confirmed that our method achieved an accuracy of 97.9% when trained and evaluated on the same person and in the same environment. Additionally, our method achieved an accuracy of 89.5% even when trained and evaluated on different people. We also report the analyses of accuracies in various conditions and limitations.
Risako Tanigawa and Yasunori Ishii Panasonic Holdings Corporation, Yagumonaka-machi, Moriguchi City, Osaka, Japan ultrasound, action recognition, active sensing
## 1 Introduction
Action recognition is one of the important technologies that is used for many applications such as robotics [1], healthcare [2, 3, 4], elderly behavior monitoring [5, 6] and suspicious behavior detection [7]. Many of these techniques utilize visual clues, such as RGB videos and images. Images contain a wealth of visual information about people and scenes. However, privacy concerns limit the use of scenes that may include identifiable information, such as faces.
To consider the privacy concern, the methods that using radio frequency (RF) signals [8, 9, 10], Wi-Fi signals [11, 12, 13], and acoustic signals [14, 15, 16, 17, 18, 19] are proposed. Although RF and Wi-Fi signals can detect fine-grained human postures due to their short wavelengths, the accuracy would degrade due to interference from the electromagnetic waves emitted by electronic devices. Acoustic signals are also interfered with ambient noise. However, if the frequency of the target sound is known, it can be restricted by filters. Furthermore, when using ultrasonic active sensing, the influence of environmental noise is less compared to audible sounds.
The sensing of a person through acoustic signals can be classified into two streams: passive and active sensing. Passive sensing involves capturing sounds emitted by objects. Recognition tasks, such as segmentation [15] and pose estimation [16, 14], have been performed based on audible sounds, particularly focusing on the voices. However, voices contain person-identifiable information and can be considered sensitive data from a privacy perspective. On the other hand, active sensing methods analyze the reflected signals from a person in response to the sounds emitted by a speaker. Therefore, these methods allow the acquisition of human movements without using personally identifiable information. Although the active sensing methods have been applied to gesture recognition [17, 18], segmentation [20], and pose estimation [19], human action recognition by non-invasive ultrasound active sensing has not been well established. In particular, there are no published action recognition datasets that focus on ultrasonic active sensing.
We propose a new task for human action recognition by ultrasound active sensing. We build our own dataset. To construct this dataset, we define 8 basic motion patterns, which include upper-/lower-/whole-body motions and motionless postures. Then, feature extraction was performed based on time series amplitudes of reflected waves, and action classification was evaluated using a support vector machine and a convolutional neural network. Our contributions are as follows: (1) We tackle a new task of action recognition by contactless ultrasound sensing; (2) Since there are no previous methods to handle this task, we create a new dataset; and (3) We conduct a comparison of features for classification.
## 2 Related Works
In this section, we present the related work on recognizing action from acoustic sensing. The acoustic sensing methods are categorized as follows: (1) passive sensing and (2) active sensing. Passive sensing methods have been used for action recognition [14], segmentation [15], 2D hand and arm pose [16]. These methods use clues of ambient and subject sound including voices. Voices have been used for speaker
recognition tasks therefore passive sensing has privacy concerns due to the usage of voices. Moreover, passive sensing methods cannot detect human information when the person does not emit any sound such as voices and footsteps.
On the other hand, active sensing allows for the acquisition of human information irrespective of whether a person is emitting sound or not. The active sensing methods have been used for gesture recognition [17, 18], 3D pose estimation [19], and segmentation [20]. In active sensing, single-frequency burst waves or chirp signals with temporally varying frequencies are employed as acoustic signals for the sound source. The burst waves are used to measure the distance to objects in [18] and to detect spatial power distribution of reflected waves in [20]. The method employing chirp signals enhances the accuracy of Time-of-Flight (ToF) by incorporating signals from multiple frequencies [17]. Additionally, utilizing the frequency characteristics as features enables the estimation of complex tasks such as 3D pose estimation [19].
In our approach, we use active sensing for human action recognition to avoid obtaining privacy-related information. Chirp signals are used for the sound source of our active sensing because the signals enable us to obtain spatial propagation features in multiple frequencies and are used for other tasks such as gesture recognition [17] and 3D pose estimation [19]. The frequencies are limited to ultrasound range to avoid capturing voices in the audible range.
## 3 Methods
### Human Action Recognition By Ultrasound Active Sensing
We propose a new task that estimates human action from ultrasound signals. To realize this task, the information we need to know is the changes in the emitted ultrasound from a speaker when the ultrasound is captured by microphones. This is the same manner of the room impulse response when using audible sound. The ultrasound transfer features are changed along with the environment of the space. If a person moves inside the space, the transfer features of ultrasound would be changed. Therefore, we established an ultrasound active sensing system for human action recognition and considered feature extraction methods suitable for the action recognition task. Our concept is described in Fig. 1. We closely set a speaker and microphone opposite to humans and captured reflected signals from the environment and humans. Analyzed reflected signals to extract features are used for action classifications.
### Active Sensing System
The schematic diagram of the active sensing system used in this research is described in Fig. 2. We used a tweeter to emit ultrasound. As a receiver, we used a MEMS microphone. The microphone was placed 36 mm below and 22.5 mm away from the center of the speaker. The sampling frequency was set to \(f_{s}=96\) kHz enabling to capture of ultrasound range.
### Signal Design
We design a linear chirp signal for the active sensing. The linear chirp signal \(x\) is a signal where the frequency increases linearly over time and calculated as
\[x(t)=\sin\left(2\pi\left(\frac{\beta}{2}t^{2}+f_{0}t\right)+\phi_{0}\right), \tag{1}\]
where \(t\) is the time, \(f_{0}\) is the lower bound frequency, \(\phi_{0}\) is the initial phase. \(\beta\) is the coefficient determined by the time length of the chirp signals \(\tau\) and the lower and upper bound frequencies (\(f_{0}\) and \(f_{1}\)) as \(\beta=(f_{1}-f_{0})/\tau\). In our approach, we set the frequencies to \(f_{0}=20\) kHz and \(f_{1}=40\) kHz. To determine the time length of the chirp signal, we considered the minimum distance of the sensing area. We need to avoid interfering the direct and reflected ultrasound when we capture the reflected ultrasound of human. Therefore, the time length of the chirp signal is limited as \(\tau\leq(2d_{\min})/c\), where \(d_{\min}\) is the minimum distance of the sensing area and \(c\) is the speed of ultrasound in air. We set \(d_{\min}=0.30\) m; therefore, we set \(\tau=1.5\) ms.
Figure 1: Concept diagram of our task.
Figure 2: Schematic diagram of the our sensing system.
To observe the transfer feature changes over time, cyclic chirp signal emission is required. To do so, we determined the cycle time \(T\) of the chirp signals based on the maximum distance of the sensing area \(d_{\max}\): \(T\geq(2d_{\max})/c\). We set \(d_{\max}=2.0\) m; therefore, we set \(T=11.8\) ms.
### Feature Extraction
While common feature extraction methods have been well-established in audible passive sensing, such as Short-Time Fourier Transform and Mel-Frequency Cepstrum Coefficient, there are limited researches in the context of active sensing, and the feature extraction methods are not well-established in the ultrasound active sensing field. Thus, in this paper, we focus on validating the potential of the following two types of features: (1) time-series reflected waves and (2) time-series envelopes of reflected waves.
Time-series reflected wavesThe first feature is time-series reflected waves. Temporal information is crucial for recognizing human actions, and the temporal changes in ultrasound transfer characteristics are useful for identifying human actions from ultrasound waves. Therefore, we utilize the time-series reflected waves as a feature to extract the temporal changes in the reflected ultrasound. Let the received signal of a microphone as \(\mathbf{y}\), the signal includes direct signal and reflected signal. The direct signal is the wave that directly approaches from the speaker, and the reflected signal is the wave propagates through the space and reaches the microphones. We extract each reflected wave from a single chirp signal and concatenate the waves as \(F_{\mathrm{ref}}=[\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{N}]\), where \(F_{\mathrm{ref}}\) is the time-series reflected waves and \(\mathbf{y}_{i}\:(i=1,2,\ldots,N)\) is the \(i\)-th reflected wave. The \(i\)-th reflected wave is extracted as \(\mathbf{y}_{i}=\mathbf{y}[N_{\mathrm{dir},i}+N_{\min},\ldots,N_{\mathrm{dir},i }+N_{\max}],\) where \(N_{\mathrm{dir},i}\) is the index of \(i\)-th direct wave, \(N_{\min}=2f_{s}d_{\min}/c\) is the index corresponding to the minimum distance of the sensing area \(d_{\min}\), and \(N_{\max}=2f_{s}d_{\max}/c\) is the index corresponding to the maximum distance of the sensing area.
Time-series envelopes of reflected wavesThe second feature is the time-series envelopes of reflected waves. Since the phase of reflected waves greatly depends on the shape of the reflecting object, the feature is sensitive to changes in the sensing environment. Therefore, in order to focus on the amplitude of the reflected waves, we utilize the envelopes of the amplitude of each reflected wave. Then the envelopes are concatenated as the time-series envelopes of the reflected waves \(F_{\mathrm{env}}\) as \(F_{\mathrm{env}}=[\mathbf{\hat{y}}_{1},\mathbf{\hat{y}}_{2},\ldots,\mathbf{ \hat{y}}_{N}],\) where \(\mathbf{\hat{y}}_{i}\:(i=1,2,\ldots,N)\) is the \(i\)-th envelope of reflected wave.
We show the examples in Fig. 3. The left figure represents \(F_{\mathrm{ref}}\), and the right figure represents \(F_{\mathrm{env}}\). The vertical axis corresponds to reflected waves, while the horizontal axis represents the cycle period of the chirp signal. \(F_{\mathrm{ref}}\) represents the time-series reflected waves feature, hence the alternating positive and negative values along the vertical axis. \(F_{\mathrm{env}}\) represents the time-series envelopes of reflected waves feature. The position of the high-amplitude region between 0.00 and 0.04 of \(F_{\mathrm{ref}}\) and \(F_{\mathrm{env}}\), is changing as time progresses. This indicates that both features are capable of representing human actions.
## 4 Experimental Settings
### Datasets
Since there have been no existing ultrasound action recognition datasets available, we constructed our own dataset to evaluate the possibility of action recognition from ultrasound. The experimental setup is described in Fig. 4. We recorded the data in three different spaces: (Ra) anechoic chamber, (Rb) a room without furniture, (Rc) a room with furniture. In the anechoic chamber and a room without furniture, we recorded data with one subject. In the room with furniture, we recorded single-person data with four subjects. We set eight fundamental action classes: hand-waving, throwing, kicking, picking-up, walking, lying-down, sitting, and standing.
### Evaluation
To classify action classes from ultrasound features, we use two models: support vector machine (SVM) [21] and VGG [22]. The kernel function of SVM is set to the radial basis function kernel. The regularization parameter is set to \(c=1\) and the kernel coefficient is set to \(\gamma=1/(N_{\mathrm{dim}}\times F_{\mathrm{var}})\)
Figure 4: Schematic diagram of data acquisition condition and pictures of room.
Figure 3: Example features of walking class.
where \(N_{\rm dim}=960\) is the dimension of the features and \(F_{\rm var}\) is the variance of the features. We set the learning rate to \(0.1\) and use a batch size of \(12\). The optimizer is the Stochastic Gradient Descent (SGD) with a momentum of \(0.9\) and a weight decay of \(0.0005\). We also use data augmentation, which is rotation within \(\pm\)5 degree and horizontal flip. We use the accuracy score for all evaluations.
## 5 Results
Table 1 shows the evaluation result. Compared to the models, the accuracy tends to be higher when using VGG in all conditions except for condition No. 4 with \(F_{\rm env}\). Compared to the features, the best feature varied depending on the condition. However, comparing the overall average values, for SVM, the feature \(F_{\rm env}\) had 11.6 points higher than the feature \(F_{\rm ref}\), while for VGG, both features had an accuracy rate of 66.7%. By using only the amplitude information in the reflected waves, the accuracy rate is equal to or higher. This indicates that amplitude information is more important in the classification task.
Conditions No. 1 and 2 rows in Table 1 are the evaluation results of the simplest arrangement where we use the same room and same person. The data is split between training and evaluation. The accuracy was reached 99.8% in the best condition for condition No.2, \(F_{\rm env}\). From this result, we confirmed that SVM and VGG have approximately the same accuracy. Therefore, under ideal conditions, actions can be classified with the proposed features.
Conditions from No. 3 to 6 rows in Table 1 are the evaluation results of the accuracy for unknown subjects. We split the data of Rc into four sets per subject from Rc(1) to Rc(4). By referring to conditions No. 3 to 6 in Table 1, there is variation in accuracy depending on the set used for training and evaluation. To analyze this point, the accuracy rates per class in VGG with the feature \(F_{\rm env}\) are shown in Fig. 5. Under condition No. 3, all the classes had over 70% accuracy rates. In condition No. 4, low accuracy rates under 30% for sitting, hand-waving, and kicking. However, the other classes have accuracy rates of over 60%. On the other hand, conditions No. 5 and No. 6 showed different trends. Some classes had accuracy rates of 0%, while others had accuracy rates exceeding 80%. These results show that individual differences in actions decreased the accuracy. Therefore, the results indicate the necessity to develop a method for calculating feature vectors that can mitigate the impact of individual differences in actions.
Condition No. 7 row in Table 1 is the evaluation result of the accuracy for the unknown room. The accuracy sharply dropped and reached a mere 22.7% even at its highest score. This is because the reflected waves include reflections not only from humans but also from objects other than humans. Since these reflections vary depending on the environment, the robustness against data from environments is considered to be low. Our experiments revealed that the proposed action recognition method needs to be more robust to unknown environments. In order to improve the issue, it is effective to standardize the data using data when no one is present or to remove the stationary component by taking the difference in reflected waves for each chirp signal. Alternatively, by collecting large amounts of data and performing deep learning-based learning, the performance can be expected to improve by learning a recognizer that is robust to differences between people and the environment.
## 6 Conclusions
We propose a new framework to recognize human action by ultrasound active sensing, and performed action recognition using SVM and VGG with features based on the reflected ultrasound when emitting a chirp signal of 20k-40kHz. We confirmed that our method performs well under simple conditions using our newly constructed dataset, and also observed the impact of performance variability and individual differences on the overall performance. Future research will focus on developing a feature extraction method for individual behavioral differences and spatial variations, and constructing a suitable deep learning model. We believe that this method can be utilized as a privacy-aware human sensing method and can expand the possibilities of ultrasound active sensing.
\begin{table}
\begin{tabular}{c|c c c c c c|c c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{No.}} & \multirow{2}{*}{Ra} & \multirow{2}{*}{Rb} & \multirow{2}{*}{Rc} & \multirow{2}{*}{Rc} & \multicolumn{6}{c|}{Accuracy [\%]} \\ & & & & & (1) & (2) & (3) & (4) & \(F_{\rm ref}\) & \(F_{\rm env}\) & \(F_{\rm ref}\) & \(F_{\rm env}\) \\ \hline
1 & T/E & - & - & - & - & - & 78.4 & **85.0** & **97.9** & 94.6 \\
2 & - & T/E & - & - & - & 97.3 & **98.7** & 98.3 & **99.8** \\
3 & - & - & E & T & T & T & **81.1** & 80.1 & 89.3 & **89.5** \\
4 & - & T & E & T & T & 52.9 & **73.5** & **72.1** & 60.0 \\
5 & - & - & T & T & E & T & 18.1 & **43.8** & 46.7 & **51.2** \\
6 & - & - & T & T & T & E & 0.8 & **35.2** & 40.2 & **49.6** \\
7 & T & T & E & E & E & E & **20.7** & 14.1 & **22.7** & 22.2 \\ \hline \hline \multicolumn{1}{c}{Average} & & & 49.9 & **61.5** & **66.7** & **66.7** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental conditions and accuracy results. The character ”T” and ”E” represents the data used for training and evaluation, respectively.
Figure 5: Accuracy per class from conditions No. 3 to 6 with \(F_{\rm env}\) in VGG. |
2309.15968 | Dynamics of Ideological Biases of Social Media Users | Humanity for centuries has perfected skills of interpersonal interactions and
evolved patterns that enable people to detect lies and deceiving behavior of
others in face-to-face settings. Unprecedented growth of people's access to
mobile phones and social media raises an important question: How does this new
technology influence people's interactions and support the use of traditional
patterns? In this article, we answer this question for homophily-driven
patterns in social media. In our previous studies, we found that, on a
university campus, changes in student opinions were driven by the desire to
hold popular opinions. Here, we demonstrate that the evolution of online
platform-wide opinion groups is driven by the same desire. We focus on two
social media: Twitter and Parler, on which we tracked the political biases of
their users. On Parler, an initially stable group of Right-biased users evolved
into a permanent Right-leaning echo chamber dominating weaker, transient groups
of members with opposing political biases. In contrast, on Twitter, the initial
presence of two large opposing bias groups led to the evolution of a bimodal
bias distribution, with a high degree of polarization. We capture the movement
of users from the initial to final bias groups during the tracking period. We
also show that user choices are influenced by side-effects of homophily. Users
entering the platform attempt to find a sufficiently large group whose members
hold political biases within the range sufficiently close to their own. If
successful, they stabilize their biases and become permanent members of the
group. Otherwise, they leave the platform. We believe that the dynamics of
users' behavior uncovered in this article create a foundation for technical
solutions supporting social groups on social media and socially aware networks. | Mohammed Shahid Modi, James Flamino, Boleslaw K. Szymanski | 2023-09-27T19:39:07Z | http://arxiv.org/abs/2309.15968v2 | # Dynamics of Ideological Biases of Social Media Users\({}^{\dagger}\)
###### Abstract
Humanity for centuries has perfected skills of interpersonal interactions and evolved patterns that enable people to detect lies and deceiving behavior of others in face-to-face settings. Unprecedented growth of people's access to mobile phones and social media raises an important question: How does this new technology influence people's interactions and support the use of traditional patterns? In this paper, we answer this question for homophily driven patterns in social media. In our previous studies, we found that, on a university campus, changes in student opinions were driven by the desire to hold popular opinions. Here, we demonstrate that the evolution of online platform-wide opinion groups is driven by the same desire. We focus on two social media: Twitter and Parler, on which we tracked the political biases of their users. On Parler, an initially stable group of right-biased users evolved into a permanent right-leaning echo chamber dominating weaker, transient groups of members with opposing political biases. In contrast, on Twitter, the initial presence of two large opposing bias groups led to the evolution of a bimodal bias distribution, with a high degree of polarization. We capture the movement of users from the initial to final bias groups during the tracking period. We also show that user choices are influenced by side-effects of homophily. The users entering the platform attempt to find a sufficiently large group whose members hold political bias within the range sufficiently close to the new user's bias. If successful, they stabilize their bias and become a permanent member of the group. Otherwise, they leave the platform. We believe that the dynamics of users uncovered in this paper create a foundation for technical solutions supporting social groups on social media and socially aware networks.
opinion evolution in social media, polarization, echo chambers, socially aware networks
## I Introduction
People exhibit different patterns of social behavior [1] that shape their interpersonal interactions [2] and determine how social groups are created and evolve [3]. Traditionally, these social behaviors have been studied in the context of direct interactions between actors in an offline setting. Hence, their presence and effects within online social environments is not well understood. Indeed, social media has played an ever-growing role in many spheres of human interaction. One such sphere is politics, which is important because it shapes the governments and political systems at all levels of societies and nations [4]. In this role, social media provides platforms for politicians to influence countless individuals across vast distances instantly. However, these mediums have also allowed for the widespread dissemination of misinformation [5], facilitating the polarization of users [6], and enabling the formation of echo chambers [7].
The online interactions in social networks that we study here are inherently different from offline face-to-face verbal interactions during which participants silently monitor voice intonation and body language of their partners to recognize their emotions and behavioral patterns. Such recognition facilitates detection of lies and deceiving behavior, but it is missing in online interactions, lowering the chance that social media users will be able to recognize and reject strongly biased, questionable, or faked content. Accordingly, there is a need to further our understanding of dynamics of social groups in social media to amplify their benefits but temper their drawbacks.
One social principle that is integral to our understanding of social group dynamics is homophily [8]. A study of the homophily of student groups on a university campus was presented in [9]. It included modeling the evolution of these groups by tracing over time the opinions held by these students on a variety of issues. We found that the most stable groups in terms of stability and longevity of members consisted of students with majority opinions. In contrast, groups with students holding minority opinions were unstable, often changing members and dissolving. We also showed that the entire system evolves toward a stable state in which all groups are fully polarized on the opinions most important to the members.
The question thus arises of whether the homophily principle and its impact on group dynamics can be observed on online social networks as well. Social networks do not facilitate only interactions between actors, but influence user decisions by content recommendations biased by preference tracking algorithms, such as used by Twitter and other social media. Such preferences are also used by socially-aware networks in which edges represent voluntary social interactions between users and which provide network services using social network analysis techniques [10]. In addition, users exhibit preferences for using certain features of social media more often than others. For instance, a study on temporal dynamics and switching behavior of users on Twitter [11] showed that users naturally prefer "follows" over "lists" and "list subscriptions" as the most common means of information consumption. This means that temporal analysis of switching behavior could reveal external characteristics of users, such as their ideological biases.
Major social media platforms like Facebook, Twitter, and Instagram continue to grow, but such growth is not limited to these highly popular platforms. In fact, recent events in U.S. politics have prompted an entrance of new, alternative platforms to cater to specific groups of users. The most visible
example is Parler [12], launched in 2018. This microblogging platform marketed itself as the "free-speech" social media alternative. Designed as a Twitter clone, Parler aimed to become a platform for conservative-leaning social media users alienated by Twitter. In this paper, we analyze the dynamics of group evolution on social media using data collected by tracking users on Parler and Twitter and assigning them initial and final political biases. They are defined by the average biases of URL links posted by these users during the first and last month of activity, respectively. The tracking of users lasted from September to December 2020, a period that includes the 2020 U.S. Presidential election, which occurred in November of that year and triggered a high level of political interactions during that time.
Using the biases assigned to users, at the end of each period, we created groups of users of the same bias and two "constellations" of groups of left and right biases, each regardless of the intensity of the respective biases. Then, we analyzed the evolution of these groups on Parler and Twitter. Our analyses confirmed that side-effects of homophily uncovered in our previous work on interactions of students (which ranged from face-to-face meetings to cell-phone messages and calls) are also valid on social media. The two methods of avoiding interactions in diverse groups are either changing important opinions to majority ones, or if this fails, dropping off the platform. Overall, we aim to demonstrate that homophily plays a role in group formation, evolution, and retention online by showing that these results hold in an online setting for two contrasting social media platforms.
Our results show that Twitter has two stable bias groups with the locally largest fractions of members across the political spectrum: liberal bias and conservative bias. They have the local maxima in terms of political bias stability, with holders of these biases retaining their opinions for a long time. In contrast, groups with members holding unpopular political biases were unstable, with their members quitting the platform or leaving to groups with more popular political biases. The desire to interact with peers with similar views motivates holders of unpopular political biases to change their biases or keep them and leave the social media platform. This desire drives the evolution of the large platforms, like Twitter, toward bimodal polarization. In contrast, The smaller platform, Parler, has been dominated from its start by extreme right bias and fake news bias, which heavily overlap in terms of committed users making their groups stable and popular. Stability of dominating biases and initially the lack of a noticeable presence of liberal biased content on the platform freezes these two patterns into permanence. The resulting homogeneity formed an unopposed echo chamber on Parler, where the users in this echo chamber engage in and proliferate the same kind of content with little deviation.
## II Terminology
While Twitter is an established social media platform that has been subject to numerous research studies across multiple disciplines, Parler has seen less attention. Subsequently, we highlight below the content terminology used within Parler's user interface for those not familiar with the platform.
Parler is fashioned after Twitter, and as such, their methods for content generation and interaction are also similar, but with different names. Posts on Parler are called "Parleys". Parler users are allowed to make posts which are visible to other users and are limited to a maximum of 1,000 characters. Each post can be upvoted or downvoted to indicate if the voting user agrees or disagrees with the content. However, posts only show the number of upvotes, and not the number of downvotes. Comments can be made under posts, and these comments can be upvoted or downvoted as well. Comments can also be made under existing comments on posts, thus creating a local comment tree for each post. Parler's version of Twitter's retweet feature is the "echo". Echoing allows users to choose an existing post and post it to their page, optionally adding content that then appears above the post. Users can post a variety of content, including URLs, images, GIFs, and videos.
To summarize, Parleys are equivalent to Tweets, Echoes to Retweets, and Upvotes to Likes. Comments are similar across Parler and Twitter as well. The equivalence of these features and the overall intentional similarities between Twitter and Parler means that the graph structure that is organically created by the usage of both websites ends up looking similar as well.
## III Methods
### _Datasets_
The Parler database was accessed in 2021 [13]. The published dataset includes most of the posts sent between March of 2018 and January of 2021. It contains about 183 million Parler posts sent by 13 million users. We analyzed a subset of these posts ranging from September \(1^{st}\) to December \(1^{st}\), 2020. The Twitter dataset was obtained from [6]. This dataset was collected using the Twitter Search API to find all tweets, retweets, quotes, and replies containing the name of one of the two primary 2020 U.S. Presidential candidates sent between June of 2020 and December of 2020. This search yielded approximately 702 million Twitter communications sent by 20 million users. As in the case of Parler, we analyze a subset of all communications sent between September and December 2020.
### _News Media Classification_
We focus our study on the political biases of users on Parler and Twitter. Accordingly, first we need to identify the political leanings of the content that these users propagate. To do this, we adopted a methodology used in [6], which was originally designed for Twitter. These classifications identify political biases and fake news of news media outlets. So, given a Tweet with a URL linking to a valid outlet, we can assign political bias to this tweet. The classifications that we use to identify biases of users originated from two political biases and fake news establishing websites: allsides.com (AS) and Media Bias / Fact Check (MBFC).
AS is a well-known and respected tool for rating news media bias that combines several methods, such as blind surveys, academic research, community feedback, independent reviews, and editorial reviews (www.allsides.com/media-bias/media-bias-rating-methods). This approach improves the
reliability of findings. MBFC assigns news media biases, using a different approach that relies on evaluation of wording, sourcing, story choices, and political endorsement (www.mediabiasfactcheck.com/methodology). MBFC results have been used for labeling bias and factual accuracy of news sources in several academic studies and in multiple journal publications.
The combined evaluations of AS and MBFC have been used to classify a total of 119 media news outlets. The classifications are grouped into five news media categories based on the traditional U.S. political spectrum. Given that the "left" represents liberals and "right" represents conservatives, the categories are _right, leaning right, center, leaning left_ and _left_. These categories are refined by the addition of two more categories, _extreme right bias_ and _extreme left bias_. These two categories include news media organizations that tend to exhibit heavy bias toward selected political issues, to the point of promoting propaganda or conspiracy theories not supported by any credible sources. Finally, the third addition, a _fake news category_, includes any news media organizations that have been flagged by AS and MBFC as sites that regularly disseminate controversial or fake news to force their points of view. Once these categories are assigned to the news sources, all users can be classified by the content that they consume or spread on Twitter and Parler.
### _Mapping Users to Political Bias Groups_
We note that political bias, in the context of social media graph data, is an external characteristic. Therefore, grouping users by their political bias is an external grouping. As opposed to an internal characteristic such as centrality, political bias is a property we ascribe to users based on the political bias classifications of their posts based on the assessments of AS and MBFC. As such, the classifications of political views and the related conclusions contained in this paper should not be interpreted as representing the opinions of the authors or their funders.
MBFC/AS classification ranges from extreme left to center and fake news, defining eight classes that we ranked as follows. The extreme left is ranked 1, and the remaining classes are listed in the order of increasing right bias and assigned the rank by 1 larger than its left predecessor, ending with rank 8 assigned to fake news. These URLs ranks are averaged over all URL links posted by this user over the initial and final month of data collection, respectively to obtain the initial and final biases of this user. A user with the initial bias who did not have any posts within the last month is classified as a platform dropout. This method can measure user bias evolution over time using different time periods and define more than two time intervals such as an initial and final month used in this paper, enabling monitoring user bias evolution more precisely.
Fig. 1 shows two-level clustering of polarized users that we introduce here. At the lower level, there are eight bias groups found in Parler and Twitter as determined by the MBFC/AS classifications. For group membership, each user rounds its bias to the nearest integer value and joins the corresponding group. At the higher level, we cluster together left and right biases regardless of their intensity, which creates left and right constellations, with the center bias and fake news groups existing outside of these constellations. The connections between these groups show that some groups are ideologically "near", such as leaning left and left groups, whereas other groups are ideologically "far", such as leaning right and fake groups. Thus, groups that have a single edge between them are at a unit distance away from each other and would require a member to shift their beliefs a little to move between them. For a pair of groups not connected by an edge, the member of the initial bias group can travel along the shortest path from it to reach the final bias group. The number of edges in that path will define the distance traveled by this member.
The table in Fig. 2 displays the total number of classified content items in our Twitter and Parler datasets, grouped by their assigned news media category determined by AS and MBFC. This gives us an initial perspective on the political leanings of these platforms. Parler has a strong conservative news media presence, as evidenced by the average bias of all users being 7, corresponding to the Extreme Right bias. In contrast, Twitter users have more balanced news media usage with the average bias of all users being about 4, exactly at the Center bias.
Fig. 1: This figure shows a two-level clustering of the polarized users. The lower level contains eight bias groups of users. The higher level consists of two primary clusters called constellations that group associated biases together. The center bias and fake news groups exist outside the two constellations. Edges connecting each constellation’s groups to each other show that members can directly reach groups within each constellation, defining unit distance between them. Travel between groups across the constellations requires several unit steps. Each user has two biases, initial and final. The initial bias is based on URL. Links collected during the initial month of collected data, while the final bias uses links gathered in the last month. Each user with two different biases travels from initial to final bias, changing sizes of the bias groups dynamically.
### _Dynamics of Users Flows between Bias Groups and Platforms_
We portray the movements of the number of users between political biases over time using a Flow Matrix (FM) in which each row and each column represents a bias group. Each cell in the Flow Matrix shows the number of users that moved from the initial bias of their row to the final bias of their column. However, a decrease in membership numbers occurs between the initial and final population of users in the study. This is because some users stop posting early on and do not post again. We call these users "dropouts". We categorize any user who stops posting and does not make a single post for two months or more as a dropout from their platform instead of assigning them a final bias. Thus, these dropouts are not included in the flow matrix calculation.
Using the Flow Matrices, we can find the distance and direction each user moved based on their initial bias group and final bias group. To clarify notation, we assume that clockwise movements in Fig. 1 and leftward movements in the FM have negative polarization. The corresponding counterclockwise and rightward movements have positive polarization. We calculate these movement vectors for every user and compute the mean, median and interquartile range (IQR) of the movement vectors for each bias group. We visualize this data using box plots in the below section, which illustrate the dynamics of inter-group movements for the two platforms.
## IV Results
### _Dynamics of User Political Biases_
We compute and display the dynamics of users' biases for Twitter and Parler in Fig. 3. The new users arrive at the input column labeled "I" and each cell of this column represents the number of newcomers with the label of this cell. New users who do not stay long enough to be assigned a final bias flow to the dropout column "D". Each cell of column "D" has the count of dropouts for each bias category. The remaining newcomers move to the active users column "A" to the right of the column "I". From there, users leave the cells from the "I" vector defining their initial bias to the Flow Matrix "FM" in the same rows as their cell and to the column in FM that represents their final bias. Therefore, summing FM along each row yields the number of users with the initial bias represented by this row (this number is stored in vector "I"). Summing this matrix along the columns yields the number of users whose final bias is represented by their column. These numbers are shown in the bottom row "F" and arrows indicate which column shows the composition of initial biases in each final bias cell in column "F".
Fig. 3 exposes patterns of political bias propagation on Twitter and Parler, revealing an interesting trend in user groupings in each news media category. In Twitter, there are two disjoint communities that have two of the locally largest fractions of users. One community is centered around the left (liberal) news media category and the other is centered on the right (conservative) and extreme right news media category, with little overlap with the center news media category. In contrast, Parler's FM yields a singular community with a locally largest fraction of users. It is centered around the fake news and extreme right bias news media categories. The bimodal and unimodal patterns of Twitter and Parler, respectively, characterize the diversity of news propagated on these platforms. The act of dropping out from a platform can arise in many kinds of human interactions, but with different intensities, as seen in Fig. 3.
These figures display the raw numbers for the initial, final, and dropout populations for each bias group on both platforms from which computed fractions of the dropouts in each bias category and observed different dropout trends for different groups. 49.7% of all users on Twitter dropped out from the platform between September and December, compared to 19.4% for Parler. This significant difference in dropout fractions highlights the stability of Parler.
In both platforms, the differences between the right and left biases were small, a bit over 10% of the dropout rate in each case. The dropout rate was higher for the right bias (50.6%) than the left bias (45.3%) on Twitter, but lower for the right bias (19.2%) than the left bias (21.9%) on Parler. This demonstrates that the existence of only one popular political bias on Parler prevents individuals with biases distant to the popular political bias from even attempting to join Parler, since those who join have a similar rate of staying on the platform as the rate of the user with popular biases. Subsequently, a perpetual echo chamber arises through the overall avoidance of the platform, not from more intensive user dropout.
Comparing these dropout rates to university students [9] reveals that resistance to dropping out is strong in this offline setting, since the yearly dropout rate from the target campus, Notre Dame University, was 2% (80 to 100 times lower compared to our social media platforms, considering the four-times
Fig. 2: The count of news media URL links posted on Twitter and Parler, grouped by political bias. The percentages show the fraction of content in our data that fall within that news media category overall. The rank column shows numeric values assigned to each bias. The computed average biases of users of Twitter is 3.96 \(\approx 4\) which is the Center bias, while for Parler it is 6.83 \(\approx 7\), the Extreme Right bias.
longer time over which student dropouts were counted). This difference highlights the notably lower cost of dropping out of social media platforms, which can be done in a short time without jeopardizing any long-term relationships. In contrast, students must invest one year of their time before leaving, and usually will have some new acquaintances on campus by that time. They will also likely be subsequently entering a new university with already established groups of students, which can make socialization more difficult compared to entering campus as a part of the larger group of incoming freshmen.
### _Movement Dynamics_
The movement diagram for Twitter and Parler is shown in Fig. 4. We use the IQR data discussed in the previous section to generate box plots for each initial bias group. Each box spans the range from the first to third quartile of movements of members of this group, with the yellow midline representing the movement median, while whiskers capture the maximum movement distances traveled by members. As a reminder, we define the smallest distance as a unit step in each direction that represents one hop of an edge in Fig. 1. For instance, the median value of Parler's Center group is approximately two steps toward the right direction (as opposed to negative two steps, which denotes two steps toward the left).
On Twitter, we observe that an average user's movement was within the two closest groups from their initial group because each box plot is within the range from negative two to two steps. The medians are between zero and one step for each group, indicating low intra-group distances and strong polarization between the two constellations. The box plots
Fig. 3: Flow diagram of Twitter (Top) and Parler (Bottom) users in terms of their initial and final biases and their changing status as newcomers, dropouts, or active users. Column **I** shows the number of newcomers in each of the initial bias groups. Column **D** to the left of **I** shows the number of newcomers that drop out from the platform before they are assigned a final bias. Column **A** shows the number of newcomers who obtain a final bias classification. The Flow Matrix **FM** connects all active users with the same initial bias to the specific final bias assigned to them. Finally, the bottom row **F** shows the number of users with their final biases. Thus, the direction of flow is from column **I** to **A**, then to columns **FM** along the corresponding row, and finally to row **F**.
create a wave-like pattern because these group movements are self-reinforcing. For example, The Leaning Left bias group is one step from the Center bias group which in turn tends to move further left feeding into the Left group, which itself favors unit rightward movement back into the Leaning Left group.
For Parler, the box plots show consistent rightward movements from the Left bias group toward the Center and Leaning Right bias groups with median movement of two steps, since most movements are limited to the range from one to three steps. Very few movements begin within the Right, Extreme Right or Fake bias groups. The Fake and Extreme Right bias groups interact mostly internally leading to the formation of the echo chamber. Parler's left constellation also shows instability, with a large fraction of users abandoning the platform. These patterns are very different from Twitter's, which neither exhibit a strong directional preference nor constellation-wide instabilities for either Left or Right sides.
## V Discussion
In this paper, we collected Parler and Twitter data around the 2020 U.S. Presidential election to compare the political content propagation dynamics of these platforms. This comparison demonstrates fundamental differences in the populations of the two online social mediums. Parler was created to provide an alternative to Twitter, with an emphasis on political free speech attempting to attract alienated users from other social media in the wake of the political discourse triggered by the 2020 election. To provide insight into the type of users that Parler attracted, and the political information being disseminated in Parler and Twitter, we used political bias classifications of news media outlets to identify the presence of fake news and classify content along the U.S. political spectrum. We then characterized the dynamics of content propagation by users by analyzing user movement behavior. These results, combined with our political categorizations of posts on Twitter and Parler, allowed us to show how stable each type of political bias is measured by consistency with which users continue to propagate the content of their current bias.
On Twitter, we found two consistent and disjoint groups of overlapping users, where liberal-oriented users tended to spread only similarly liberal biased news, while conservative-oriented users spread only similarly conservative biased news, creating two locally largest fractions of the group members. In contrast, Parler had only one distinct group with the locally largest fraction of the group members, lacking any significant patterns of liberal biased news spread. Instead, there were primarily only conservative-oriented users who consistently spread conservative bias and fake news.
Characterizing these patterns, we observed that on Parler the fake news category had the greatest fraction of users migrating to it or choosing to stay in it. This indicated that users on Parler who initially spread fake news had a penchant to continue disseminating them. Furthermore, users with other political biases were more likely to shift and propagate fake news themselves, suggesting the presence of a strong echo chamber. The fake news group on Twitter, on the other hand, did not attract a significant proportion of the members. Instead, Twitter had two bias groups with locally largest fractions of members: one centered around liberal news media categories, and the other centered about conservative news media categories. Subsequently, users with these biases were most inclined to retain them, with similar political biases being likely to migrate to them, causing polarization as users converge on these opposing political biases. We note that the bimodal pattern of Twitter here corroborates observed polarization between the left biased and right biased users reported in [6], which also showed a decreasing overlap in center-biased discourse over time.
The broader impact of the results of this paper is the advancement of our understanding of how human behavior adapts to new ways of interpersonal interactions, and how new technologies can benefit from the patterns that seem to persist across the communication mediums. One example of this persistence are the trends observed here in the Twitter and Parler that expand on the results from [9], which show
Fig. 4: Movement diagram for Twitter and Parler users, visualizing the movement behavior for each bias group. Each box plot shows the interquartile range for the initial bias group distances traveled by each user from this group. The yellow lines in the box plots represent the median distance traveled by the group members and whiskers on either side visualize the maximum extent of distance moved.
that university student groups whose members were mostly of majority opinion holders were had more stable membership and persisted longer than groups whose members held minority opinions. Parler initially gained majority of fake news and extreme right bias and then maintained these biases over time. Meanwhile all liberal biased content was relegated to an insignificant minority. However, on Twitter, users with a broad range of political biases were initially joining, resulting in the formation of two groups of biases. In both cases, the users' behaviors show two tendencies, one for moving toward stable opinions, and another dropping out of the platform. These tendencies drive polarization, as users migrate to stable popular political bias groups, and unpopular outlier biases are deserted, resulting in the formation of isolated echo chambers.
Studies of temporal social networks [14] show quantitatively that people do not communicate randomly in all types of interactions, which causes entropy of the interactions to decrease over time. The same conclusion is reached in the research presented here, as the dynamics of political biases in social media are stabilizing user interactions over time. Within this scope, we can conclude that optimizing social media and socially aware networks implementations for such patterns [10] will be efficient due to the trend of these patterns stabilizing over long periods of time.
## VI Future Directions
The results presented in this paper offer several interesting avenues for future work. Among them, structural graph-based comparisons between Parler and Twitter will likely provide further insights into the differences between these social media platforms. Another interesting direction for research would be to compare the content characteristics and propagation habits of users on both platforms to see if the presence of strong content moderation guidelines on Twitter led to more accountable behavior from its most influential members compared to Parler.
We also plan to study bias dynamics over time periods smaller than three months. Computing biases periodically on a weekly basis will reveal trajectories over the graph shown in Fig. 1. Having them will allow us to measure the forces that (1) attract users to popular bias groups, (2) restrict the length of travel in search of peers, and (3) motivate users to drop out from the current social media platform. The first force is a side-effect of homophily [8], which is the tendency to interact with people with compatible views. It is easy to ensure such compatibility in small groups of face-to-faces interacting people, yet difficult for technology enabled large interacting groups of social media users. Homophily motivates people to change their views to interact comfortably within such groups. The second force, confirmation bias [15], prompts users to choose familiar or similar opinions, constraining the strength of homophily. If the second force prevails, and no close-by stable group exists, the third force, also rooted in homophily, motivates users to leave the social media platform that are incompatible or hostile to the user's views. The second force strengthens with time as long as the biases persist. But the interplay is subtle. When confirmation bias breaks and frees the user to move farther across biases, the user adapts a new bias and confirmation bias switches to it. Thus, after new biases are accepted, they are enforced by confirmation bias and homophily, making new members of a stable group more committed to it than the old ones. We plan to extend this work by adding quantitative analyses of these interesting observations, by using two families of equations that define the utility of membership in groups introduced in [9]. These formulae characterize the side-effect of pursuing the highest utility groups by students. Increasing utility resulted in an unintended polarization of groups.
For developers of socially aware networks systems, the knowledge of the patterns arising in interactions of users of social media interested in broad topics like politics, sports, and movies is important. The relevant patterns include stable and popular groups of users with specific biases and opinions that define effective communities in social networks, echo chambers that define preferred routes of information flows, and patterns of real-time data access which are essential for designing socially aware caching [10]. Hence, they can be used for social-based community detection, routing, and data caching strategies and algorithms in social media and social aware networks.
|
2310.20668 | Universality of random-site percolation thresholds for two-dimensional
complex non-compact neighborhoods | The phenomenon of percolation is one of the core topics in statistical
mechanics. It allows one to study the phase transition known in real physical
systems only in a purely geometrical way. In this paper, we determine
thresholds $p_c$ for random site percolation in triangular and honeycomb
lattices for all available neighborhoods containing sites from the sixth
coordination zone. The results obtained (together with the percolation
thresholds gathered from the literature also for other complex neighborhoods
and also for a square lattice) show the power-law dependence
$p_c\propto(\zeta/K)^{-\gamma}$ with $\gamma=0.526(11)$, $0.5439(63)$ and
$0.5932(47)$, for honeycomb, square, and triangular lattice, respectively, and
$p_c\propto\zeta^{-\gamma}$ with $\gamma=0.5546(67)$ independently on the
underlying lattice. The index $\zeta=\sum_i z_i r_i$ stands for an average
coordination number weighted by distance, that is, depending on the
coordination zone number $i$, the neighborhood coordination number $z_i$ and
the distance $r_i$ to sites in $i$-th coordination zone from the central site.
The number $K$ indicates lattice connectivity, that is, $K=3$, 4 and 6 for the
honeycomb, square and triangular lattice, respectively. | Krzysztof Malarz | 2023-10-31T17:32:05Z | http://arxiv.org/abs/2310.20668v2 | Universality of random-site percolation thresholds for two-dimensional complex non-compact neighborhoods
###### Abstract
The phenomenon of percolation is one of the core topics in statistical mechanics. It allows one to study the phase transition known in real physical systems only in a purely geometrical way. And three things are unavoidable: death, paying taxes, and excepting universal formula for percolation thresholds. Anyway, in this paper, we try to solve the third of the enumerated problems and determine thresholds \(p_{c}\) for random site percolation in triangular and honeycomb lattices for all available neighborhoods containing sites from the sixth coordination zone. The results obtained (together with the percolation thresholds gathered from the literature also for other complex neighborhoods and also for a square lattice) show the power-law dependence \(p_{c}\propto(\zeta/K)^{-\gamma}\) with \(\gamma=0.526(11)\), \(0.5439(63)\) and \(0.5932(47)\), for honeycomb, square, and triangular lattice, respectively, and \(p_{c}\propto\zeta^{-\gamma}\) with \(\gamma=0.5546(67)\) independently on the underlying lattice. The index \(\zeta=\sum_{i}z_{i}r_{i}\) stands for an average coordination number weighted by distance, that is, depending on the coordination zone number \(i\), the neighborhood coordination number \(z_{i}\) and the distance \(r_{i}\) to sites in \(i\)-th coordination zone from the central site. The number \(K\) indicates lattice connectivity, that is, \(K=3\), \(4\) and \(6\) for the honeycomb, square and triangular lattice, respectively. We do not claim that these results are one giant leap for mankind in searching for such a formula, but rather they are one small step of a man in that way.
random site percolation; Archimedean lattices; Newman-Ziff algorithm; complex and extended neighborhoods; analytical formulas for percolation thresholds; Monte Carlo simulation +
Footnote †: preprint:
## I Introduction
The percolation [1; 2; 3; 4] is one of a core topics in statistical physics as it allows for studying phase transitions and their properties in only geometrical fashion, i.e., without heating or cooling anything (except of paying unconsionable invoices for electricity in the computer centers). Although originated from a rheology [5; 6] (and still applied there [7]) the application of percolation theory range from forest fires [8] to disease propagation [9], not omitting problems originated in hard physics (including magnetic [10] and electric [11] properties of solids) but also with implications for: nanoengineering [12]; materials chemistry [13]; agriculture [14]; sociology [15]; terrorism [16]; urbanization [17]; dentistry [18]; information transfer [19]; psychology of motivation [20]; and finances [21] (see References [22] and [23] for the most recent reviews also on fractal networks [24]).
The phase transition mentioned above is first of all characterized by a critical parameter called _percolation threshold_\(p_{c}\) and much effort went into searching for a universal formula that allows for the prediction \(p_{c}\) based solely on the scalar characteristics of a lattice or a network topology, where the percolation phenomenon occurs. Probably, searching for such dependencies is not different much from searching for the alchemic formula for the philosopher's stone--allowing for converting anything (or at least something) into gold. Anyway, such attempts of proposing universal formula for percolation threshold were more or less successfully made earlier.
For instance, Galam and Mauger proposed an universal formula
\[p_{c}=\frac{p_{0}}{\left[(d-1)(z-1)\right]^{a}}\] (1a) depending on the connectivity of the lattice \[z\] and its dimension \[d\]. For a site percolation problem they identified two groups of lattices, i.e., two sets of parameters \[p_{0}\] and \[a\]. Their paper was immediately criticized by van der Marck who indicated two lattices with identical \[z\] and \[d\] but different values of \[p_{c}\] associated with these lattices [26; 27]. For two-dimensional lattices the Galam-Mauger formula reduces to \[p_{c}=\frac{p_{0}}{(z-1)^{a}}, \tag{1b}\]
with \(p_{0}=0.8889\) and \(a=0.3601\) for triangular, square and honeycomb lattices [25]. Their studies were extended to anisotropic lattices without equivalent nearest neighbors, non-Bravais lattices with two atom unit cells, and quasicrystals which required the substitution of \(z\) in Equation (1) by an effective (non integer) value \(z_{\mathrm{eff}}\)[28; 29].
Very recently, Xun _et al._ in extensive numerical simulations showed that all Archimedean lattices (uniform tilings, i.e., lattices built of repeatably sequences of tails of regular polygons able to cover a two-dimensional plane) exhibit a simple relation [30]
\[p_{c}=c_{1}/z,\] (2a) which due to finite size effects should be written as \[p_{c}=c_{2}/z-b. \tag{2b}\]
For example, for the square lattice and extended compact neighborhoods, these constants are \(c_{2}=4.527\) and \(b=3.341\)[31]. In two dimensions, for Archimedean lattices up to the 10-th coordination zone [30], correlations are also seen by plotting
\[z\text{ versus }-1/\ln(1-p_{c}). \tag{3}\]
Yet another investigated by Galam and Mauger formulas [32; 33] included
\[p_{c}=1/\sqrt{z-1} \tag{4}\]
or by Koza _et al._ in References [34; 35]
\[p_{c}=1-\exp(d/z). \tag{5}\]
The formula (2a) works well also for distorted lattices [36; 37], where lattice distortion means random moving of lattice nodes not too far from their regular position in non-distorted lattices. In this case, the number of sites in the neighborhood \(z\) should be replaced by an average site degree \(\bar{z}\)[38].
The studies mentioned above were concentrated in compact neighborhoods. When holes in the neighborhoods are taken into account, there is a strong degeneration of \(p_{c}\) on total \(z\), and Equations (1) to (5)--which depend solely on the lattice dimension \(d\) and connectivity \(z\)--must fail. To avoid this \(p_{c}(z)\) degeneracy in the case of triangular lattice, a weighted square distance
\[\xi=\sum_{i}r_{i}^{2}z_{i}/i \tag{6}\]
was proposed, where \(z_{i}\) is the number of sites in the given neighbourhood in \(i\)-th coordination zone and these sites distance to the central site in neighbourhood is \(r_{i}\)[39]. Unfortunately, the clear dependence
\[p_{c}\propto\xi^{-\gamma} \tag{7}\]
(with \(\gamma_{\text{nc}}^{\xi}\approx 0.710(19)\)) is lost for the honeycomb lattice [40]. Thus, instead, the weighted coordination number
\[\zeta=\sum_{i}z_{i}r_{i} \tag{8}\]
was proposed [40] which gives a nice power law
\[p_{c}\propto\zeta^{-\gamma} \tag{9}\]
with \(\gamma_{\text{nc}}^{\zeta}\approx 0.4981\). As \(\gamma_{\text{nc}}^{\zeta}\) is very close to \(\frac{1}{2}\) also the dependence
\[p_{c}=c_{3}/\sqrt{\zeta} \tag{10}\]
was checked yielding \(c_{3}\approx 1.2251(99)\)[40].
Very recently, we tested formulas (7) and (9) also for the square lattice up to the sixth coordination zone and found that Eq. (9) also holds for a square lattice with \(\gamma_{\text{sq}}^{\zeta}\approx 0.5454(60)\)[41].
Our results show that for all three (square, triangular, and honeycomb) lattice shapes, the power law is recovered in dependence of \(p_{c}(\zeta/K)\), where \(K\) is the connectivity of the network with the nearest-neighbour interaction, that is, with \(K=3\), 4 and 6 for the honeycomb, square, and triangular lattice, respectively. On the other hand, independent of the lattice topology, we see a more or less clear power law \(p_{c}(\zeta)\) for the data obtained on the values of \(p_{c}\) for the three lattices with complex neighbourhoods containing sites up to the sixth coordination zone.
## II Methodology
In this paper--using exactly the same methodology as that used to study percolation in a square lattice with complex neighborhoods that contain sites up to the sixth coordination zone [41]--we extend our previous studies for sites up to the sixth coordination zone for triangular (Figure 1) and honeycomb (Figure 2) lattices. Namely, using the fast Monte Carlo scheme proposed by Newman and Ziff [42] and the finite-size scaling theory [43; 44] we found 64 values of percolation thresholds for complex neighborhoods containing sites from the sixth coordination zone.
In Supplemental Material the mapping of the 6th coordination zone in the honeycomb lattice into the brick-wall-like square lattice (as proposed in Reference 45) is presented in Figure 5 in Appendix A together with Listing 1 (for tr-6 neighborhood) and Listing 2 (for hc-6 neighborhood) showing implementations of boundaries() functions to be replaced in original Newman-Ziff algorithm [42]. The mapping of the 1st to 5th coordination zones in the honeycomb lattice into the brick-wall-like square lattice are presented in Figure 3 in Reference [40].
## III Results
In Figure 3 we present examples of results used to predict the percolation thresholds \(p_{c}\), that is,
* the dependencies of the size of the largest cluster \(\mathcal{S}_{\text{max}}/L^{2}\) normalized to the lattice size vs. number of occupied sites also normalized to the lattice size [Figures 3(a) and 3(c)]
* and the dependencies of the probability that a randomly selected site belongs to the largest cluster, scaled by \(L^{\beta/\nu 1}\) vs. occupation probability \(p\) [Figures 3(b) and 3(d)]
for triangular (Figures 3(a) and 3(b)) and honeycomb (Figures 3(c) and 3(d)) lattice and neighbourhoods containing all considered basic neighbourhoods presented in Figure 1 (for the triangular lattice) and Figure 2 (for the honeycomb lattice). The linear sizes \(L\) of the simulated systems range from 127 to 4096 and the results of these simulations are averaged over \(R=10^{5}\) samples. All dependencies \(\mathcal{P}_{\rm max}\cdot L^{\beta/\nu}\) vs. \(p\) studied here are presented in Figure 6 (for the triangular lattice) and Figure 7 (for the honeycomb lattice) in Appendix C in the Supplemental Material. The common point of the curves \(\mathcal{P}_{\rm max}\cdot L^{\beta/\nu}\) vs. \(p\) for various system sizes \(L\) predicts \(p_{c}\). The computed values of \(p_{c}\), associated with various neighborhoods, together with their uncertainties (also estimated earlier for neighborhoods containing sites up to the sixth coordination zone--for square lattice [41; 46; 47] and the fifth coordination zone--for triangular [39; 48] and honeycomb [40] lattices) are collected in Table 2 in Appendix B in the Supplemental Material.
Figure 4 presents the \(p_{c}\) for neighborhoods containing sites up to the sixth coordination zone on square (\(\square\)), honeycomb (\(\bigcirc\)) and triangular (\(\triangle\)) lattices as dependent on
* total coordination number \(z\) [Figure 4(a)];
* index \(\zeta\) [Figure 4(b)];
* index \(\zeta/K\) [Figure 4(c)].
The crosses (\(\times\)) indicates inflated neighborhoods, that is, non-compact neighborhoods reducible to other complex neighborhoods by shrinking the lattice constants. The detected inflated neighborhoods and their lower index equivalents are presented in Table 1. These values \(p_{c}\) are excluded from the fitting procedure.
As we mentioned in the Introduction, for complex non-compact neighborhoods, strong \(p_{c}(z)\) degeneration is observed [see Figure 4(a)]. On the contrary, introducing the index \(\zeta\) (8) allows a nearly perfect separation of the values of \(p_{c}\). After excluding inflated neighborhoods (presented in Table 1) the linear fit of the data presented in Figure 4(c) with the least-squares method gives in the power law
\[p_{c}\propto(\zeta/K)^{-\gamma} \tag{11}\]
exponents \(\gamma_{\rm fr}=0.5932(47)\), \(\gamma_{\rm so}=0.5439(63)\), \(\gamma_{\rm HC}=0.526(11)\), for triangular, square and honeycomb lattices, respectively. The analogous fit according to Equation (9) of the data presented in Figure 4(b) gives the exponent \(\gamma_{\rm 2p}=0.5546(67)\).
Figure 2: Basic neighborhoods corresponding to subsequent coordination zones \(i=1,\cdots,6\) on the honeycomb lattice. The symbol \(r\) stands for the Euclidean distance of the black sites from the central one, and \(z\) indicates the number of sites in the neighborhood. (a) \({\rm HC}\)-\(1\): \(i=1\), \(r^{2}=1\), \(z=3\). The lattices (b) \({\rm HC}\)-\(2\): \(i=2\), \(r^{2}=3\), \(z=6\), (e) \({\rm HC}\)-\(5\): \(i=5\), \(r^{2}=9\), \(z=6\) and (f) \({\rm HC}\)-\(6\): \(i=6\), \(r^{2}=12\), \(z=6\) are equivalent to a triangular lattice \({\rm tr}\)-\(1\) [Figure 1(a)] with enlarged lattice constants \(\sqrt{3}\), \(3\) and \(2\sqrt{3}\) times, respectively. (d) \({\rm HC}\)-\(4\): \(i=4\), \(r^{2}=7\), \(z=6\). The lattice (c) \({\rm HC}\)-\(3\) (\(i=3\), \(r^{2}=4\), \(z=3\),) is equivalent to \({\rm HC}\)-\(1\) [Figure 2(a)] with a lattice constant twice larger than for \({\rm HC}\)-\(1\)
Figure 1: Basic neighborhoods corresponding to subsequent coordination zones \(i=1,\cdots,6\) in the triangular lattice. The symbol \(r\) stands for the Euclidean distance of the black sites from the central one, and \(z\) indicates the number of sites in the neighborhood. (a) \({\rm tr}\)-\(1\): \(i=1\), \(r^{2}=1\), \(z=6\), (b) \({\rm tr}\)-\(2\): \(i=2\), \(r^{2}=3\), \(z=6\), (c) \({\rm tr}\)-\(3\): \(i=3\), \(r^{2}=4\), \(z=6\), (d) \({\rm tr}\)-\(4\): \(i=4\), \(r^{2}=7\), \(z=12\), (e) \({\rm tr}\)-\(5\): \(i=5\), \(r^{2}=9\), \(z=6\), (f) \({\rm tr}\)-\(6\): \(i=6\), \(r^{2}=12\), \(z=6\).
## IV Discussion
The introduction of the \(\zeta\) index solves the problem of multiple degeneration of the value of \(p_{c}\). Eliminating inflated neighborhoods (including those that occur pairwise between a triangular and a hexagonal lattice) allows fitting \(p_{c}\) to the power laws according to Equations (9) or (11). Without comparing the hexagonal and triangular lattices, it was necessary to introduce the index \(\xi\) to maintain the power law relationship according to Equation (7). The index \(\xi\) turned out to be redundant vs. \(\zeta\) index for the site percolation problem, because previously outlier points turned out to belong to the inflated neighborhoods, but the low-index neighborhoods associated with them were located on a different type of lattice. However, the introduction of the index \(\xi\) turned out to be quite useful for the bond percolation problem, where the relationship (6) is perfectly satisfied with the exponent \(\xi\approx 1\)[49].
Finally, we propose some unification of the nomenclature appearing in the literature, and applying terms:
**basic neighborhoods:**: for those containing sites from a single coordination zone (like sq-1, sq-1, sq-2, sq-3, etc. and those presented in Figures 1 and 2);
\begin{table}
\begin{tabular}{l c c c l} \hline \hline inflated & \(\zeta\) & \(p_{c}\) & \(z\) & equivalent \\ neighborhood & & & & neighborhood \\ \hline sq-2 & 5.6568 & 0.5927 & 4 & sq-1 \\ sq-3 & 8 & 0.5927 & 4 & sq-1 \\ sq-5 & 11.3137 & 0.5927 & 4 & sq-1 \\ sq-6 & 12 & 0.5927 & 4 & sq-1 \\ sq-2,3 & 13.6568 & 0.4073 & 8 & sq-1,2 \\ sq-2,5 & 16.9705 & 0.337 & 8 & sq-1,3 \\ sq-3,5 & 19.3137 & 0.4073 & 8 & sq-1,2 \\ sq-2,3,5 & 24.9705 & 0.288 & 12 & sq-1,2,3 \\ \hline Tr-2 & 10.3923 & 0.5 & 6 & tr-1 \\ tr-3 & 12 & 0.5 & 6 & tr-1 \\ tr-5 & 18 & 0.5 & 6 & tr-1 \\ tr-6 & 20.7846 & 0.5 & 6 & tr-1 \\ tr-2,5 & 28.3923 & 0.29028 12 & tr-1,2 \\ tr-2,6 & 31.1769 & 0.26455 12 & tr-1,3 \\ tr-3,6 & 32.7846 & 0.29030 12 & tr-1,2 \\ tr-5,6 & 38.7846 & 0.23200 12 & tr-2,3 \\ tr-2,5,6 & 49.1769 & 0.21550 18 & tr-1,2,3 \\ \hline nc-2 & 10.3923 & 0.5 & 6 & tr-1 \\ nc-3 & 6 & 0.697 & 3 & nc-1 \\ nc-5 & 15.5884 & 0.5 & 6 & tr-1 \\ nc-6 & 20.7846 & 0.5 & 6 & tr-1 \\ nc-2,5 & 28.3923 & 0.29028 12 & tr-1,2 \\ nc-2,6 & 31.1769 & 0.26453 12 & tr-1,3 \\ nc-3,6 & 26.7846 & 0.36301 9 & 9 & nc-1,2 \\ nc-5,6 & 38.7846 & 0.23202 12 & tr-2,3 \\ nc-2,5,6 & 49.1769 & 0.21547 18 & tr-1,2,3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Detected inflated (together with the associated \(\zeta\) index) and equivalent neighborhoods. The percolation thresholds \(p_{c}\) and total number of sites \(z\) are common for both neighborhoods.
**complex neighborhoods:**: for any combination of the basic ones;
**extended neighborhoods:**: for complex and compact neighborhoods (like sq-1,2, tr-1,2,3, hc-1,2,3,4, etc.) and
**inflated neighbourhoods:**: for complex neighborhoods reducible to other complex neighborhoods but with lower indexes by shrinking the lattice constant (like those presented in Table 1).
In conclusion, in this paper we estimate percolation thresholds for the random site percolation problem on triangular and honeycomb lattices for neighborhoods containing sites from the sixth coordination zone. The obtained values of \(p_{c}\) satisfy the power law: independently of the underlying lattice (according to \(p_{c}\propto\zeta^{-\gamma}\)) or even better for separately considered lattices (according to \(p_{c}\propto(\zeta/K)^{-\gamma}\), where \(K\) is the connectivity of the lattice).
###### Acknowledgements.
The authors thank Hubert Skawina for preparing the figures for Table 2 in Appendix B in the Supplemental Material. We gratefully acknowledge Poland's high-performance computing infrastructure PLGrid (HPC Center: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016295.
|
2309.12961 | On schemes evinced by generalized additive decompositions and their
regularity | We define and explicitly construct schemes evinced by generalized additive
decompositions (GADs) of a given $d$-homogeneous polynomial $F$. We employ GADs
to investigate the regularity of $0$-dimensional schemes apolar to $F$,
focusing on those satisfying some minimality conditions. We show that
irredundant schemes to $F$ need not be $d$-regular, unless they are evinced by
special GADs of $F$. Instead, we prove that tangential decompositions of
minimal length are always $d$-regular, as well as irredundant apolar schemes of
length at most $2d+1$. | Alessandra Bernardi, Alessandro Oneto, Daniele Taufer | 2023-09-22T15:58:27Z | http://arxiv.org/abs/2309.12961v2 | # On schemes evinced by generalized additive decompositions and their regularity
###### Abstract.
We define and explicitly construct schemes evinced by generalized additive decompositions (GADs) of a given \(d\)-homogeneous polynomial \(F\). We employ GADs to investigate the regularity of \(0\)-dimensional schemes apolar to \(F\), focusing on those satisfying some minimality conditions. We show that irredundant schemes to \(F\) need not be \(d\)-regular, unless they are evinced by special GADs of \(F\). Instead, we prove that tangential decompositions of minimal length are always \(d\)-regular, as well as irredundant apolar schemes of length at most \(2d+1\).
Key words and phrases:Generalized additive decompositions, \(0\)-dimensional schemes, Hilbert function, regularity, cactus rank 2020 Mathematics Subject Classification: 14N07, 13D40
## 1. Introduction
Algebraic and geometric properties of \(0\)-dimensional schemes have been largely studied from several perspectives in algebraic geometry, commutative algebra, and computational algebra. Through _apolarity theory_, these studies find applications in the study of _additive decompositions_ of homogeneous polynomials and, more in general, _tensor decompositions_[1, 2, 3].
In this paper, we are interested in \(0\)-dimensional schemes that are _apolar_ to a given \(d\)-homogeneous polynomial \(F\), namely the \(0\)-dimensional schemes defined by ideals annihilating \(F\) by derivation. Understanding the possible Hilbert functions of _minimal_ apolar schemes is a deep and largely open question, which could give useful information on the nature of additive decompositions of polynomials and _secant varieties_, and whose grasp is challenging even in moderately small cases [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 1].
Our work aims to study when these Hilbert functions stabilize, and more specifically at discerning essential conditions for a given \(d\)-homogeneous polynomial to have _minimal_\(0\)-dimensional apolar schemes that are regular in degree \(d\). This subtle problem carries far-reaching implications spanning the domains of classical algebraic geometry and complexity theory. In the context of algebraic geometry, these concepts are part of a longstanding tradition of exploring secant varieties and Waring problems, see [2] for a general overview. From a complexity theory perspective, the knowledge of the regularity of minimal apolar schemes to a given polynomial might improve the efficiency of symbolic algorithms for computing ranks and minimal decomposition of polynomials [2, 3, 10].
### Additive decompositions
As already recalled, the study of apolar schemes is related to notions of _rank_ and _additive decompositions_ associated with homogeneous polynomials. The minimal length of a \(0\)-dimensional scheme apolar to \(F\) is the _cactus rank_ of \(F\)[1, 2]. If we restrict to schemes that are locally contained in \((d+1)\)-fat points, then they correspond to _generalized additive decompositions_ (GADs) of \(F\), namely expressions as
\[F=\sum_{i=1}^{r}L_{i}^{d-k_{i}}G_{i},\]
where the \(L_{i}\)'s are pairwise non-proportional linear forms not dividing the corresponding \(G_{i}\)'s [1, 2]. Special cases of such decompositions include _tangential decompositions_, when \(k_{i}=1\)[3, 1, 12], and _Waring decompositions_, when \(k_{i}=0\)[1, 1].
This algebraic description of ranks and additive decompositions has a geometric interpretation in terms of _Veronese varieties_ and their _secant varieties_[2, 1, 1]. A Waring decomposition corresponds to a set of points on the Veronese variety whose linear span contains the projective point corresponding to the polynomial \(F\). Analogously, tangential decompositions (generalized additive decompositions, respectively) correspond to a set of points on the tangential variety (osculating varieties, respectively) of the Veronese variety whose linear span contains the projective point of \(F\)[2, 1,
CGG, BCGI07, BCGI09, BF03]. In this view, GADs parameterize generic points of a _joint variety_ of osculating varieties to certain Veronese variety.
### Content of the paper and main results
After recalling the standard definition and results in Section2, we define and provide an explicit construction of schemes evinced by GADs in Section3. This construction locally agrees with the natural apolar schemes defined in [1], but is made effective by delving into the computational details. An implementation of this construction routine in Macaulay2 [11] and Magma [1] can be found in [1].
In Section4 we investigate the weaker and more geometric irredundancy condition, i.e. we look at schemes that are minimal by inclusion among the apolar schemes to a given form \(F\) of degree \(d\). With Example4.4 we observe that schemes evinced by GADs might well be redundant, whereas we prove in Proposition4.3 that irredundant schemes are evinced by a GAD of \(F\) precisely when their connected components are contained in \((d+1)\)-fat points. Therefore, all schemes apolar to \(F\) with _short_ components are evinced by certain families of GADs of \(F\). However, Example4.6 shows that schemes with _long_ components may only arise from GADs of higher degree polynomials.
In Section5 we tackle the regularity of minimal apolar schemes. We show that non-redundancy to a degree-\(d\) form is not enough to ensure \(d\)-regularity. Indeed, in Examples5.8 and 5.10 we present degree-\(d\) homogenous polynomials admitting an apolar scheme that is irredundant but not \(d\)-regular. However, we notice that in both cases such schemes are not minimal by length.
In Proposition5.2 we show that the addenda constituting a GAD evincing an irredundant scheme \(Z\) may never appear in its inverse systems. We use this result in Proposition5.3 to guarantee \(d\)-regularity for schemes evinced by GADs such that the \(L_{i}\)'s are linearly independent and the \(k_{i}\)'s are small enough, regardless of the scheme being minimal. However, we point out in Remark5.7 that all the assumptions of Proposition5.3 are sharp.
Drawing from the intuition that schemes with components of low multiplicity usually exhibit low regularity, in Proposition5.9 we prove that minimal tangential decompositions of degree-\(d\) forms always evince \(d\)-regular schemes. Example5.10 shows the condition of having minimal length is essential, while irredundancy is not enough.
Finally, we show in Proposition5.11 that if the cactus rank of a degree-\(d\) form is not greater than \(2d+1\), then non-redundancy is actually enough to guarantee \(d\)-regularity. In particular, all the schemes of minimal length apolar to degree-\(d\) forms with length smaller or equal to \(2d+1\) are \(d\)-regular.
### Acknowledgements
We sincerely thank E. Ballico, W. Buczynska, J. Buczynski, M.V. Catalisano, C. Ciliberto and B. Mourrain for fruitful conversations. DT acknowledges the hospitality of the TensorDec Laboratory during a research stay at the Department of Mathematics at the University of Trento, where part of the present work has been conducted.
**Funding.** AB has been partially supported by GNSAGA of INDAM. DT has been supported by the European Union's H2020 Programme ERC-669891, and by the Research Foundation - Flanders via the FWO postdoctoral fellowship 12ZZC23N and the travel grant V425623N. All the authors have been partially supported by the Thematic Research Programme "Tensors: geometry, complexity and quantum entanglement", University of Warsaw, Excellence Initiative - Research University and the Simons Foundation Award No. 663281.
## 2. Preliminaries
In this paper, \(\Bbbk\) will always be an algebraically closed field of characteristic \(0\). Given \(\alpha=(\alpha_{0},\ldots,\alpha_{n})\) and \(\beta=(\beta_{0},\ldots,\beta_{n})\) in \(\mathbb{N}^{n+1}\), let \(|\alpha|=\sum_{i=0}^{n}\alpha_{i}\) and \(\alpha!=\prod_{i=0}^{n}\alpha_{i}!\). We write \(\alpha\succeq\beta\) if \(\alpha_{i}\geq\beta_{i}\) for every \(0\leq i\leq n\). We use the standard short notation \(X^{\alpha}=X_{0}^{\alpha_{0}}\cdots X_{n}^{\alpha_{n}}\).
### Apolarity
Let \(\mathcal{S}=\Bbbk[X_{0},\ldots,X_{n}]=\bigoplus_{d\in\mathbb{N}}\mathcal{S}_{d}\) and \(\mathcal{R}=\Bbbk[Y_{0},\ldots,Y_{n}]=\bigoplus_{d\in\mathbb{N}}\mathcal{R}_{d}\) be standard graded polynomial rings, where \(\mathcal{S}_{d}\) and \(\mathcal{R}_{d}\) denote the \(\Bbbk\)-vector spaces of degree-\(d\) homogeneous polynomials. We also write \(\mathcal{S}_{\leq d}=\bigoplus_{e\leq d}\mathcal{S}_{e}\) and \(\mathcal{R}_{\leq d}=\bigoplus_{e\leq d}\mathcal{R}_{e}\).
We consider the apolarity action of \(\mathcal{R}\) on \(\mathcal{S}\) given by differentiation, i.e.,
\[Y^{\beta}\circ X^{\alpha}=\begin{cases}\partial_{\beta}(X^{\alpha})=\frac{ \alpha!}{(\alpha-\beta)!}X^{\alpha-\beta}&\text{ if }\alpha\succeq\beta,\\ 0&\text{ otherwise,}\end{cases}\]
extended by \(\Bbbk\)-linearity. Given \(F\in\mathcal{S}\), we consider its annihilator
\[\operatorname{Ann}(F)=\{G\in\mathcal{R}\ :\ G\circ F=0\},\]
which is an ideal of \(\mathcal{R}\).
This action defines a non-degenerate perfect pairing \(\mathcal{R}_{d}\times\mathcal{S}_{d}\to\Bbbk\) for every \(d\in\mathbb{N}\). Given a subspace \(V\subseteq\mathcal{S}_{d}\), we denote by \(V^{\perp}\subseteq\mathcal{R}_{d}\) its orthogonal space with respect to such pairing. If \(V=\langle F\rangle\), we simply denote its orthogonal space by \(F^{\perp}\).
**Remark 2.1**.: A classical result by Macaulay [14] shows that graded Artinian Gorenstein algebras are all, and only, quotient rings of polynomial rings by annihilator ideals of homogeneous polynomials, see [1, Theorem 8.7], [13, Lemma 2.12] or [11, Theorem 21.6].
In the following, we always identify \(\mathcal{R}\) with the coordinate ring of \(\mathbb{P}^{n}=\mathbb{P}(\mathcal{S}_{1})\).
**Definition 2.2**.: _Let \(F\in\mathcal{S}_{d}\). A \(0\)-dimensional scheme \(Z\subset\mathbb{P}^{n}\) is **apolar** to \(F\) if \(I(Z)\subseteq\operatorname{Ann}(F)\)._
A famous characterization of schemes apolar to a given form is provided by the well-known _Apolarity Lemma_, see e.g. [13, Lemma 1.15] in the classical case of reduced schemes, [1, Lemma 1] for non-reduced scheme or [13, Lemma 1.3] into a more general framework.
**Lemma 2.3** (Apolarity Lemma).: _Let \(F\in\mathcal{S}_{d}\). The following are equivalent:_
* \(F\in I(Z)_{d}^{\perp}\)_;_
* \(I(Z)\subset\operatorname{Ann}(F)\)_._
Let \(\mathcal{S}_{\mathrm{dp}}\) be the polynomial ring \(\mathcal{S}\) equipped with a _divided power structure_, i.e. endowed with the divided powers monomial basis \(X^{[\alpha]}=\frac{1}{\alpha!}X^{\alpha}\). We denote by \(F_{\mathrm{dp}}\in\mathcal{S}_{\mathrm{dp}}\) the polynomial \(F\in\mathcal{S}\) expressed in divided powers.
For convenience in our computation throughout the paper, we also consider the action of \(\mathcal{R}\) on \(\mathcal{S}_{\mathrm{dp}}\) by contraction, namely,
\[Y^{\beta}\mathbin{\rightharpoonup}X^{\alpha}=\begin{cases}X^{\alpha-\beta}& \text{ if }\alpha\succeq\beta,\\ 0&\text{ otherwise.}\end{cases}\]
For a given \(F\in\mathcal{S}_{\mathrm{dp}}\), its annihilator with respect to this action will be denoted by
\[\operatorname{Ann}^{\rightharpoonup}(F)=\{G\in\mathcal{R}\ :\ G\mathbin{ \rightharpoonup}F=0\}\,.\]
One can directly verify that \(G\mathbin{\rightharpoonup}F_{\mathrm{dp}}=(G\circ F)_{\mathrm{dp}}\).
### Minimality
In this paper, we consider the \(0\)-dimensional schemes apolar to a given \(F\in\mathcal{S}_{d}\). Among them, we are particularly interested in those that are minimal by inclusion or length.
**Definition 2.4**.: _Let \(Z\subset\mathbb{P}^{n}\) be a \(0\)-dimensional scheme apolar to \(F\in\mathcal{S}_{d}\). We say that \(Z\) is **irredundant** to \(F\) if there is no strict subscheme \(Z^{\prime}\subsetneq Z\) among the schemes apolar to \(F\)._
The minimal length of a \(0\)-dimensional scheme apolar to \(F\) is called both **scheme length** of \(F\)[13] or **cactus rank** of \(F\)[13, 14].
**Definition 2.5**.: _Let \(Z\subset\mathbb{P}^{n}\) be a \(0\)-dimensional scheme apolar to \(F\in\mathcal{S}_{d}\). We say that \(Z\)**evinces the cactus rank**, or **evinces the scheme length** of \(F\), or simply is **minimal apolar** to \(F\), if \(Z\) is of minimal length among the \(0\)-dimensional schemes in \(\mathbb{P}^{n}\) and apolar to \(F\)._
### Regularity
We study when the Hilbert function of minimal apolar schemes stabilizes.
**Definition 2.6**.: _Given a homogeneous ideal \(I\subset\mathcal{R}\), the **Hilbert function** of the quotient \(\mathcal{R}/I\) is the function \(\operatorname{HF}_{\mathcal{R}/I}:\mathbb{N}\to\mathbb{N}\) such that \(\operatorname{HF}_{\mathcal{R}/I}(i)=\dim\mathcal{R}_{i}/I_{i}\), where \(I_{i}=I\cap\mathcal{R}_{i}\). For a scheme \(Z\subset\mathbb{P}^{n}\) we denote the Hilbert function of \(Z\) as \(\operatorname{HF}_{Z}=\operatorname{HF}_{\mathcal{R}/I(Z)}\)._
We simply write \(\operatorname{HF}_{Z}=(a_{0},a_{1},a_{2},\dots)\) to denote \(\operatorname{HF}_{Z}(i)=a_{i}\).
The Hilbert function of a \(0\)-dimensional scheme \(Z\) is always strictly increasing until it reaches its length \(\operatorname{len}(Z)\), and then it remains constant.
**Definition 2.7**.: _Given a \(0\)-dimensional scheme \(Z\subset\mathbb{P}^{n}\), the **regularity** of \(Z\) is_
\[\operatorname{reg}(Z)=\min_{i\in\mathbb{N}}\{\operatorname{HF}_{Z}(i)= \operatorname{HF}_{Z}(i+1)=\operatorname{len}(Z)\}.\]
_We say that \(Z\) is **regular in degree \(d\)**, or \(d\)**-regular**, if \(\operatorname{reg}(Z)\leq d\)._
## 3. Schemes evinced by GADs
We devote the present section to connecting two well-known concepts such as natural apolar schemes [1] and generalized additive decomposition [13]. Their link serves as the cornerstone of our paper, while their explicit construction may be beneficial even for expert readers. A complete implementation in Macaulay2 [12] and Magma [1] of these procedures may be found in [1].
### Natural apolar scheme to \(F\) supported at \(L\)
There is a natural way to associate a local scheme apolar to a given \(F\in\mathcal{S}_{d}\) supported at a prescribed point \([L]\in\mathbb{P}^{n}\)[1, Section 4]. Let \(f_{L}\in\mathcal{S}_{\operatorname{dp}}/(L-1)=\underline{\mathcal{S}}_{ \operatorname{dp}}\) be the dehomogenization of \(F_{\operatorname{dp}}\) by \(L\). We consider the projection \(\mathcal{S}_{\operatorname{dp}}\to\underline{\mathcal{S}}_{\operatorname{dp}}\) and its dual projection \(\mathcal{R}\to\underline{\mathcal{R}}\). We denote the latter projection of an ideal \(J\subset\mathcal{R}\) by \(\underline{J}\subset\underline{\mathcal{R}}\). We will always use lowercase letters for the elements and the variables after these projections, e.g., we identify \(\underline{\mathcal{S}}_{\operatorname{dp}}\simeq\Bbbk[x_{1},\dots,x_{n}]_{ \operatorname{dp}}\) and \(\underline{\mathcal{R}}\simeq\Bbbk[y_{1},\dots,y_{n}]\).
**Definition 3.1**.: _Let \(F\in\mathcal{S}_{d}\) and \(L\in\mathcal{S}_{1}\). We define the **natural apolar scheme to \(F\) supported at \(L\)** the scheme \(Z_{F,L}\subset\mathbb{P}^{n}\) supported at \([L]\in\mathbb{P}^{n}\) and locally defined by \(\underline{I}(Z_{F,L})=\operatorname{Ann}^{-}(f_{L})\subset\underline{ \mathcal{R}}\)._
Note that \(\underline{\mathcal{R}}\) can be regarded as the coordinate ring of the affine chart \(U_{0}=\{[L]\ :\ Y_{0}\circ L\neq 0\}\subset\mathbb{P}^{n}\) and \(Z_{F,L}\) is a local \(0\)-dimensional scheme supported at the origin of \(U_{0}\).
Contraction behaves well with dehomogenization with respect to dual variables. In particular, if \(g\in\underline{\mathcal{R}}\) is the dehomogenization of \(G\in\mathcal{R}\) with respect to \(Y_{0}\), and \(g\operatorname{\text{\textperiodcentered}}f_{X_{0}}=0\), then \(G\operatorname{\textperiodcentered}F_{\operatorname{dp}}=0\)[1, Corollary 3], and the last equality implies that \(G\circ F=0\) as observed in Section 2.1. Hence, the scheme \(Z_{F,L}\) is apolar to \(F\) according to Definition 2.2.
**Lemma 3.2** ([1, Corollary 4]).: _The scheme \(Z_{F,L}\) is apolar to \(F\)._
Here we detail how to concretely construct the ideal defining such a scheme.
Fix \(F\in\mathcal{S}_{d}\) and \(L=\ell_{0}X_{0}+\dots+\ell_{n}X_{n}\in\mathcal{S}_{1}\). Without loss of generalities we may assume \(\ell_{0}=1\). Over \(\mathcal{S}\), we consider the change of variables given by
\[\phi:\mathcal{S}\to\mathcal{S},\qquad\begin{cases}X_{0}\mapsto X_{0}-\sum_{i= 1}^{n}\ell_{i}X_{i},\\ X_{i}\mapsto X_{i},\end{cases}\text{ for }i\in\{1,\dots,n\}. \tag{1}\]
We have \(\phi(L)=X_{0}\) and \(\tilde{F}=\phi(F)\), therefore we can represent \(f_{L}\) as \(\tilde{f}_{X_{0}}=\tilde{F}_{\operatorname{dp}}(1,x_{1},\dots,x_{n})\in \underline{\mathcal{S}}_{\operatorname{dp}}\). Then \(\operatorname{Ann}^{-}(f_{L})\) is the kernel of the infinite-dimensional _Hankel operator_[1, 1]:
\[H(f_{L}):\underline{\mathcal{R}}\to\underline{\mathcal{S}}_{\operatorname{dp}}, \quad g\mapsto g\operatorname{\textperiodcentered}-f_{L}.\]
However, since \(y^{\beta}\operatorname{\textperiodcentered}f_{L}=0\) for every \(|\beta|>\deg(f_{L})\), the annihilator of \(f_{L}\) is generated by the kernel of a truncated Hankel operator. Let \(e=\deg(f_{L})\) and consider the restriction
\[H^{e+1}(f_{L}):\underline{\mathcal{R}}_{\leq e+1}\to\left(\underline{\mathcal{ S}}_{\operatorname{dp}}\right)_{\leq e}.\]
Then, \(\mathrm{Ann}\char 127(f_{L})=\ker H^{e+1}(f_{L})\).
Note that the coefficients of the Hankel matrix can be computed directly from \(\tilde{F}\). Indeed, if we label rows and columns of \(H^{e+1}(f_{L})\) accordingly to the divided powers monomial basis of \(\big{(}\underline{\mathcal{S}}_{\mathrm{dp}}\big{)}_{\leq e}\) and the standard monomial basis of \(\underline{\mathcal{R}}_{\leq e+1}\), respectively, we have
\[[H^{e+1}(f_{L})]_{\alpha,\beta}=\mathrm{eval}_{(0,\ldots,0)}\left(y^{\alpha+ \beta}\rightharpoonup f_{L}\right)=\begin{cases}Y^{(d-(|\alpha|+|\beta|), \alpha_{1}+\beta_{1},\cdots,\alpha_{n}+\beta_{n})}\circ\tilde{F}&\text{ if }|\alpha|+|\beta|\leq d,\\ 0&\text{ otherwise.}\end{cases} \tag{2}\]
**Remark 3.3**.: Let \(g_{\mathrm{dp}}\in\underline{\mathcal{S}}_{\mathrm{dp}}\) be a degree-\(d\) polynomial obtained from \(g\in\underline{\mathcal{S}}\) by passing to divided powers. The ideal \(\mathrm{Ann}\char 127(g_{\mathrm{dp}})=\mathrm{Ann}(g)\) has minimal generators in degree \(d+1\) if and only if \(g\) is a pure \(d\)-th power. When it is the case, we actually need to consider the kernel of \(H^{e+1}(g_{\mathrm{dp}})\) to compute \(\mathrm{Ann}\char 127(g_{\mathrm{dp}})\), see e.g. Example 3.10. However, whenever \(g\) is not a pure power, we may compute its annihilator by restricting its Hankel matrix to \(H^{e}(g_{\mathrm{dp}}):\underline{\mathcal{R}}_{\leq e}\to\big{(}\underline{ \mathcal{S}}_{\mathrm{dp}}\big{)}_{\leq e}\), which makes the kernel computation more efficient, see e.g. Examples 3.8 and 3.9.
The homogenization \(\tilde{I}=I(Z_{\tilde{F},X_{0}})=[\mathrm{Ann}\char 127(f_{L})]^{ \mathrm{hom}}\subset\mathcal{R}\) with respect to \(Y_{0}\) defines a \(0\)-dimensional scheme apolar to \(\tilde{F}\) and supported at \([X_{0}]\in\mathbb{P}^{n}\) as in Definition 3.1. Note that the ideal homogenization is the only step in which non-linear algebra (e.g. Grobner bases) may be required.
Finally, to obtain the ideal defining \(Z_{F,L}\) as in Definition 3.1, we need to support \(\tilde{I}\) on \([L]\in\mathbb{P}^{n}\), hence we perform the change of coordinate in \(\mathcal{R}\) given by the dualization of the inverse of eq. (1):
\[\psi=(\phi^{-1})^{T}:\mathcal{R}\to\mathcal{R},\qquad\begin{cases}Y_{0}\mapsto Y _{0},\\ Y_{i}\mapsto\ell_{i}Y_{0}+Y_{i},\quad\text{ for }i\in\{1,\ldots,n\}.\end{cases} \tag{3}\]
The ideal \(I=\psi(\tilde{I})\subset\mathcal{R}\) defines a \(0\)-dimensional scheme which is supported at \([L]\) and apolar to \(F\). Indeed, the following lemma shows that the action by derivation is preserved under the changes of coordinates given by eqs. (1) and (3).
**Lemma 3.4**.: _Let \(\phi\) and \(\psi\) be changes of coordinates of eqs. (1) and (3). Then we have_
\[\psi(Y^{\beta})\circ\phi^{-1}(X^{\alpha})=Y^{\beta}\circ X^{\alpha}.\]
Proof.: We write
\[\psi(Y^{\beta})\circ\phi^{-1}(X^{\alpha})=\psi(Y_{0}^{\beta_{0}})\circ\left( \psi(Y_{1}^{\beta_{1}})\circ\big{(}\ldots\psi(Y_{n}^{\beta_{n}})\circ\phi^{- 1}(X^{\alpha})\big{)}\right).\]
By the chain rule of derivation, if \(L\circ M=0\) then \(L^{b}\circ M^{a}=0\) for any \(a,b\in\mathbb{N}\). In particular, for every \(j\in\{1,\ldots,n\}\) we have
\[\psi(Y_{j}^{\beta_{j}})\circ\phi^{-1}(X^{\alpha}) =(-\ell_{j}Y_{0}+Y_{j})^{\beta_{j}}\circ\left[(X_{0}+\ell_{1}X_{1 }+\ldots+\ell_{n}X_{n})^{\alpha_{0}}\prod_{i=1}^{n}X_{i}^{\alpha_{i}}\right]\] \[=\begin{bmatrix}(X_{0}+\ell_{1}X_{1}+\ldots+\ell_{n}X_{n})^{ \alpha_{0}}\prod_{\begin{subarray}{c}1\leq i\leq n\\ i\neq j\end{subarray}}X_{i}^{\alpha_{i}}\\ \end{bmatrix}\cdot(Y_{j}^{\beta_{j}}\circ X_{j}^{\alpha_{j}}).\]
Therefore, by repeatedly applying the above equation for every \(j\) we obtain
\[\psi(Y_{1}^{\beta_{1}})\circ\big{(}\ldots\psi(Y_{n}^{\beta_{n}})\circ\phi^{-1} (X^{\alpha})\big{)}=(X_{0}+\ell_{1}X_{1}+\ldots+\ell_{n}X_{n})^{\alpha_{0}} \cdot(Y_{1}^{\beta_{1}}\circ X_{1}^{\alpha_{1}})\cdots(Y_{n}^{\beta_{n}} \circ X_{n}^{\alpha_{n}}).\]
The result follows by acting with \(\psi(Y_{0}^{\beta_{0}})=Y_{0}^{\beta_{0}}\) on the above quantity.
Note that our choice of the change of coordinates in eq. (1) was arbitrary. It would have been enough to consider any set of linear forms \(\{L_{1},\ldots,L_{n}\}\) completing a basis of \(\mathcal{S}_{1}\) together with \(L\). Then, \(\phi\) is the change of coordinates inverse to the one sending \(X_{0}\mapsto L_{0}\) and \(X_{i}\mapsto L_{i}\), for any \(i\in\{1,\ldots,n\}\).
**Algorithm 1** (Natural Apolar Scheme).: Summary of construction of natural apolar schemes.
**Input:** A homogeneous polynomial \(F\in\mathcal{S}_{d}\) and a linear form \(L\in\mathcal{S}_{1}\).
**Output:** The ideal \(I(Z_{F,L})\subseteq\mathcal{R}\).
1. Define \(\tilde{F}\) as the base-change of \(F\) as in eq.1.
2. Compute \(f_{L}\) as \(\tilde{F}_{\mathrm{dp}}(1,x_{1},\ldots,x_{n})\) and set \(e=\deg(f_{L})\).
3. Compute the ideal \(\underline{I}=\ker H^{e+1}(f_{L})\).
4. Compute the homogenization \(I\subset\mathcal{R}\) of \(\underline{I}\subset\underline{\mathcal{R}}\).
5. Return the base-change of the ideal \(I\) as in eq.3.
### Generalized Additive Decompositions (GADs)
We recall the definition of the so-called generalized additive decompositions as introduced in [13], and we associate to them \(0\)-dimensional schemes by employing the notion of natural apolar scheme introduced in Section3.1.
**Definition 3.5**.: _Let \(F\in\mathcal{S}_{d}\) and let \(L_{1},\ldots,L_{s}\in\mathcal{S}_{1}\) be pairwise non-proportional linear forms. A **generalized additive decomposition** (GAD) of \(F\)**supported at \(\{L_{1},\ldots,L_{s}\}\) is an expression_
\[F=\sum_{i=1}^{s}L_{i}^{d-k_{i}}G_{i}, \tag{4}\]
_where for every \(i\in\{1,\ldots,s\}\) we have \(0\leq k_{i}\leq d\) and \(G_{i}\in\mathcal{S}_{k_{i}}\) is not divisible by \(L_{i}\). If \(s=1\), we call this GAD local._
Following [1, 1], we associate a \(0\)-dimensional scheme to any GAD as eq.4.
**Definition 3.6**.: _The **scheme evinced by a GAD** as in eq.4 is the union of the natural apolar schemes to each summand with respect to the corresponding \(L_{i}\), i.e.,_
\[Z=\bigcup_{i=1}^{s}Z_{L_{i}^{d-k_{i}}G_{i},L_{i}}.\]
_The **size** of a GAD as in eq.4 is the length of the evinced scheme \(Z\)._
Note that the same scheme may be evinced by different GADs. Indeed, \(L^{d-k}G\) and \(L^{d-k}G^{\prime}\) evince the same scheme whenever \(\mathrm{Ann}\tilde{\ }(g_{L})=\mathrm{Ann}\tilde{\ }(g_{L}^{\prime})\). However, schemes evinced by GADs of a given \(F\) are always apolar to it.
**Lemma 3.7**.: _Let \(Z\) be the scheme evinced by a GAD of \(F\). Then \(Z\) is apolar to \(F\)._
Proof.: To ease notation, denote \(F_{i}=L_{i}^{d-k_{i}}G_{i}\) in eq.4. Let \(I(Z)_{d}=I(Z)\cap\mathcal{R}_{d}\) and let \(I(Z)_{d}^{\perp}\) be the orthogonal space via the non-degenerate pairing \(\mathcal{R}_{d}\times\mathcal{S}_{d}\to\Bbbk\) induced by derivation. Then,
\[I(Z)_{d}^{\perp}=\left(I\left(Z_{F_{1},L_{1}}\right)_{d}\cap\ldots\cap I\left( Z_{F_{s},L_{s}}\right)_{d}\right)^{\perp}=I\left(Z_{F_{1},L_{1}}\right)_{d}^{ \perp}+\ldots+I\left(Z_{F_{s},L_{s}}\right)_{d}^{\perp},\]
see e.g. [11, Proposition 2.6]. For every \(i\in\{1,\ldots,s\}\) we have \(F_{i}\in I\left(Z_{F_{i},L_{i}}\right)_{d}^{\perp}\) by Lemma3.2. Hence, \(F\in I(Z)_{d}^{\perp}\) and, by the Apolarity Lemma2.3, this implies that \(I(Z)\subseteq\mathrm{Ann}(F)\).
The ideal defining schemes evinced by GADs can be easily computed by intersecting the ideals defining natural apolar schemes to local pieces of the additive decomposition, computed as in Algorithm1.
### Examples
Here we illustrate the above construction with some examples.
**Example 3.8**.: Let \(F=(X_{0}+3X_{1}-2X_{2})(X_{1}+X_{2})X_{2}\in\mathcal{S}_{3}\) and \(L=X_{0}+3X_{1}-2X_{2}\in\mathcal{S}_{1}\). Following Algorithm 1 we obtain \(\tilde{F}=X_{0}X_{1}X_{2}+X_{0}X_{2}^{2}\in\mathcal{S}\) by
\[X_{0}\gets X_{0}-3X_{1}+2X_{2}.\]
In divided powers it becomes \(\tilde{F}_{\mathrm{dp}}=X_{0}X_{1}X_{2}+2X_{0}X_{2}^{[2]}\), whose de-homogenization by \(X_{0}=1\) is equal to \(f_{L}=X_{1}X_{2}+2X_{2}^{[2]}\in\underline{\mathcal{S}}_{\mathrm{dp}}\). Since \(X_{1}(X_{2}+X_{2})\) is not a pure power, by Remark 3.3 we consider the truncation of the Hankel matrix in degree \(2=\deg(f_{L})\), i.e.,
\[H^{2}(f_{L})=\begin{array}{c}1\quad y_{1}\quad y_{2}\quad y_{1}^{2}\quad y _{1}y_{2}\quad y_{2}^{2}\\ y_{1}\quad\begin{pmatrix}0&0&0&0&1&2\\ 0&0&1&0&0&0\\ y_{2}&0&1&2&0&0&0\\ y_{1}^{2}&0&0&0&0&0&0\\ y_{1}y_{2}&1&0&0&0&0&0\\ y_{2}^{2}&0&0&0&0&0&0\end{pmatrix},\end{array}\]
whose kernel defines the ideal \(\mathrm{Ann}^{\neg}(f_{L})=\left(y_{2}(2y_{1}-y_{2}),y_{1}^{2}\right)\subset \underline{\mathcal{R}}\). Its homogenization in \(\mathcal{R}\) is the ideal \(\left(Y_{2}(2Y_{1}-Y_{2}),Y_{1}^{2}\right)\), which we need to base-change as in eq. (3), i.e.
\[Y_{1}\gets-3Y_{0}+Y_{1},\quad Y_{2}\gets 2Y_{0}+Y_{2}.\]
This way we obtain the ideal \(I=\left((2Y_{0}+Y_{2})(8Y_{0}-2Y_{1}+Y_{2}),(3Y_{0}-Y_{1})^{2}\right)\subset \mathcal{R}\). Its radical ideal is \((2Y_{1}+3Y_{2},2Y_{0}+Y_{2})\), i.e. it defines a \(0\)-dimensional scheme supported at \([L]=[X_{1}+3X_{2}-2X_{3}]\in\mathbb{P}^{n}\). One can directly verify that this scheme has length \(4\) and it is apolar to \(F\). Indeed, it is immediate to check that
\[(2Y_{0}+Y_{2})(8Y_{0}-2Y_{1}+Y_{2})\circ F =(-16Y_{0}^{2}+4Y_{0}Y_{1}-10Y_{0}Y_{2}+2Y_{1}Y_{2}-Y_{2}^{2}) \circ F=0,\] \[(3Y_{0}-Y_{1})^{2}\circ F =(9Y_{0}^{2}-6Y_{0}Y_{1}+Y_{1}^{2})\circ F=0.\]
Hence, \(I\) is the ideal defining \(Z_{F,L}\).
**Example 3.9**.: Let \(F=(X_{0}+3X_{1}-2X_{2})(X_{1}+X_{2})X_{2}\in\mathcal{S}_{3}\) the same polynomial of Example 3.8 and consider \(L=X_{0}\in\mathcal{S}_{1}\). As the support is \(X_{0}\), we do not need to change coordinates, so we directly de-homogenize \(F_{\mathrm{dp}}\) with respect to \(L\), obtaining \(f_{L}=y_{1}y_{2}+2y_{2}^{[2]}+6y_{1}^{[2]}y_{2}+2y_{1}y_{2}^{[2]}-12y_{2}^{[3]}\). Since \(F\) is not a pure cube, we consider the truncation of the Hankel matrix in degree \(3=\deg(f_{L})\), namely
\[H^{3}(f_{L})=\begin{array}{c}1\quad y_{1}\quad y_{2}\quad y_{1}^{2}\quad y _{1}y_{2}\quad y_{2}^{2}\quad y_{1}^{3}\quad y_{1}^{2}y_{2}\quad y_{1}y_{2}^{ 2}\quad y_{3}^{3}\\ y_{1}\quad\begin{pmatrix}0&0&0&0&1&2&0&6&2&-12\\ y_{2}&0&1&2&6&2&-12&0&0&0&0\\ y_{1}^{2}&0&0&6&0&0&0&0&0&0\\ y_{1}y_{2}&1&6&2&0&0&0&0&0&0&0\\ y_{2}^{2}&2&-12&0&0&0&0&0&0&0&0\\ y_{1}^{3}&0&0&0&0&0&0&0&0&0&0\\ y_{1}^{2}y_{2}&6&0&0&0&0&0&0&0&0&0\\ y_{1}y_{2}^{2}&2&0&0&0&0&0&0&0&0&0\\ y_{2}^{2}&0&0&0&0&0&0&0&0&0\\ \end{pmatrix}.\]
Its kernel is given by the ideal
\[\mathrm{Ann}^{\neg}(f_{L})=(5y_{2}^{3}+76y_{1}^{2}-12y_{1}y_{2}+36y_{2}^{2},2 y_{1}^{2}y_{2}+y_{2}^{3},3_{1}^{3},6y_{1}y_{2}^{2}+y_{2}^{3})\subset\underline{ \mathcal{R}}.\]
To homogenize it, we compute a Grobner basis with respect to the graded lexicographic ordering:
\[\mathrm{Ann}^{\neg}(f_{L})=(y_{1}^{3},5y_{1}^{2}y_{2}-38y_{1}^{2}+6y_{1}y_{2}- 18y_{2}^{2},15y_{1}y_{2}^{2}-38y_{1}^{2}+6y_{1}y_{2}-18y_{2}^{2},5y_{2}^{3}+76 y_{1}^{2}-12y_{1}y_{2}+36y_{2}^{2}).\]
Hence the natural apolar scheme is defined by the ideal
\[\begin{pmatrix}Y_{1}^{3},5Y_{1}^{2}Y_{2}-38Y_{0}Y_{1}^{2}+6Y_{0}Y_{1}Y_{2}-18Y_{ 0}Y_{2}^{2},\\ 15Y_{1}Y_{2}^{2}-38Y_{0}Y_{1}^{2}+6Y_{0}Y_{1}Y_{2}-18Y_{0}Y_{2}^{2},5Y_{2}^{3}+76Y _{0}Y_{1}^{2}-12Y_{0}Y_{1}Y_{2}+36Y_{0}Y_{2}^{2}\end{pmatrix}\subset\mathcal{R}.\]
One can easily verify that this ideal indeed defines a \(0\)-dimensional scheme apolar to \(F\) and supported at \([X_{0}]\), whose length is \(6\).
**Example 3.10**.: Let \(F=(X_{0}+3X_{1}-2X_{2})(X_{1}+X_{2})X_{2}\in\mathcal{S}_{3}\) be the polynomial of Example 3.8. From the equality \((X_{1}+X_{2})X_{2}=(\frac{X_{1}}{2}+X_{2})^{2}-(\frac{X_{1}}{2})^{2}\) we immediately get another non-local GAD of \(F\), namely
\[F=\left(\frac{X_{1}}{2}+X_{2}\right)^{2}(X_{0}+3X_{1}-2X_{2})-\left(\frac{X_{1 }}{2}\right)^{2}(X_{0}+3X_{1}-2X_{2}). \tag{5}\]
We compute the scheme \(Z\) evinced by the above GAD, supported at \([X_{1}+2X_{2}]\) and \([X_{1}]\).
We begin with the first addendum \(F_{1}=\frac{1}{4}(X_{1}+2X_{2})^{2}(X_{0}+3X_{1}-2X_{2})\) and \(L_{1}=X_{1}+2X_{2}\). We can neglect the constant factor \(\frac{1}{4}\), and since \(L_{1}\) has no \(X_{0}\) terms, we simply switch the roles of \(X_{0}\) and \(X_{1}\). In order to de-homogenize with respect to \(L_{1}\), we perform the substitution
\[X_{1}\gets X_{1}-2X_{2},\]
and we get \((f_{1})_{L_{1}}=x_{0}+3-8x_{2}\). Since \(X_{0}+3X_{1}-8X_{2}\) is a pure power, we need to consider the truncation of the Hankel matrix in degree \(2=\deg\left((f_{1})_{L_{1}}\right)+1\), i.e.
\[\mathbb{H}^{2}\big{(}(f_{1})_{L_{1}}\big{)}=\begin{array}{cccccc}1&y_{0}&y_ {2}&y_{0}^{2}&y_{0}y_{2}&y_{2}^{2}\\ 1&8&2&-16&0&0&0\\ y_{0}\begin{pmatrix}2&0&0&0&0&0\\ -16&0&0&0&0\end{pmatrix}\!\!,\end{array}\]
whose kernel defines the ideal \((8y_{0}+y_{2},y_{2}^{2})\subset\underline{\mathcal{R}}\). After the homogenization and the base-change
\[Y_{2}\leftarrow-2Y_{1}+Y_{2},\]
we obtain the ideal \(\left(8Y_{0}-2Y_{1}+Y_{2},(2Y_{1}-Y_{2})^{2}\right)\subset\mathcal{R}\) defining the scheme \(Z_{1}=Z_{F_{1},X_{1}+2X_{2}}\), which is \(0\)-dimensional, of length \(2\) and supported at the point \([X_{1}+2X_{2}]\in\mathbb{P}^{n}\).
We proceed with the second addendum \(F_{2}=\frac{1}{4}X_{1}^{2}(X_{0}+3X_{1}-2X_{2})\) and \(L_{2}=X_{1}\). As above, \(X_{1}\) plays the role of \(X_{0}\). Since \((f_{2})_{L_{2}}=x_{0}+3-2x_{2}\), we again consider the truncation of the Hankel matrix in degree \(2\):
\[\mathbb{H}^{2}\big{(}(f_{2})_{L_{2}}\big{)}=\begin{array}{cccccc}1&y_{0}&y _{2}&y_{0}^{2}&y_{0}y_{2}&y_{2}^{2}\\ 1&2&-4&0&0&0\\ y_{0}\begin{pmatrix}2&0&0&0&0&0\\ -4&0&0&0&0&0\end{pmatrix}\!\!,\end{array}\]
whose kernel defines the ideal \((2y_{0}+y_{2},y_{2}^{2})\subset\underline{\mathcal{R}}\). Hence, the scheme \(Z_{2}=Z_{F_{2},X_{1}}\) is defined by the ideal \((2Y_{0}+Y_{2},Y_{2}^{2})\subset\mathcal{R}\), it is \(0\)-dimensional of length \(2\) and is supported at the point \([X_{1}]\in\mathbb{P}^{n}\). In conclusion, the GAD of eq. (5) evinces the scheme \(Z=Z_{1}\cup Z_{2}\) defined by
\[I(Z)=\left(8Y_{0}-2Y_{1}+Y_{2},(2Y_{1}-Y_{2})^{2}\right)\cap(2Y_{0}+Y_{2},Y_{ 2}^{2})=(4Y_{0}Y_{1}-10Y_{0}Y_{2}+2Y_{1}Y_{2}-Y_{2}^{2},Y_{0}^{2}).\]
One can directly check that \(Z\) has length \(4\), it is supported at the points \([X_{1}]\) and \([X_{1}+2X_{2}]\), and it is apolar to \(F\).
## 4. GADs and Irredundant schemes
In this section, we investigate irredundant schemes evinced by GADs by employing ideas on natural apolar schemes from [1].
**Remark 4.1**.: Let \([L]\in\mathbb{P}^{n}\) be a simple point defined by the ideal \(\wp_{L}\subset\mathcal{R}\). Recall that the _\(j\)-fat point supported at \([L]\)_ is the \(0\)-dimensional scheme defined by the ideal \(\wp_{L}^{j}\) For any \(k\leq d\), the natural apolar scheme of \(F=L^{d-k}G\in\mathcal{S}_{d}\) supported at \([L]\) is contained in the \((k+1)\)-fat point supported at \([L]\), since the localization \(f_{L}\) has degree at most \(k\). Thus, \(Z_{F,L}\) is \(k\)-regular, as a \((k+1)\)-fat point is always \(k\)-regular and the containment preserves the regularity [1, 1]. Finally, if \(F\) is _concise_ in \(n+1\) variables, i.e., \(\operatorname{HF}_{Z}(1)=n+1\), then \(Z_{F,L}\) is regular in degree \(k-n\) since its Hilbert function starts with \(\operatorname{HF}_{Z}=(1,n+1,\dots)\) and is strictly increasing until it stabilizes.
**Remark 4.2**.: By [1, Lemma 3], given a local scheme \(Z\subset\mathbb{P}^{n}\) apolar to \(F\in\mathcal{S}_{d}\) and supported at \([L]\), there exists \(G\in\mathcal{S}_{D}\) (\(D\geq d\)) such that \(Z_{G,L}\subseteq Z\) and \(F=H\circ G\) for some \(H\in\mathcal{R}_{D-d}\). Furthermore, in [1, Proposition 1] it is shown that, under minimality assumption, the localizations of \(F_{\mathrm{dp}}\) and \(G_{\mathrm{dp}}\) with respect to \(L\) are equal up to degree \(d\). In that result, the minimality requirement is in terms of minimal length among the schemes supported at \([L]\) and apolar to \(F\). However, we observe that in that proof irredundancy is actually enough. For the sake of completeness, we report here the proof of the following statement, which may be seen as a non-local version of [1, Proposition 1].
**Proposition 4.3**.: _Let \(Z\) be a \(0\)-dimensional scheme apolar and irredundant to \(F\in\mathcal{S}_{d}\). Then \(Z\) is evinced by a GAD of \(F\) if and only if there are \(L_{1},\dots,L_{s}\in\mathcal{S}_{1}\) such that \(I(Z)\supseteq\bigcap_{i=1}^{s}\wp_{L_{i}}^{d+1}\)._
Proof.: Let \(Z=Z_{1}\cup\dots\cup Z_{s}\) be the irreducible decomposition of \(Z\).
If \(Z\) is evinced by a GAD as in eq.4, then each \(Z_{i}=Z_{L_{i}^{d-k_{i}}G_{i}}\) is contained in a \((k_{i}+1)\)-fat point by Remark4.1, hence \(I(Z_{i})\supseteq\wp_{L_{i}}^{k_{i}+1}\supseteq\wp_{L_{i}}^{d+1}\). Note that this implication does not need irredundancy.
Conversely, since \(I(Z)\subseteq\mathrm{Ann}(F)\), then we have
\[F\in I(Z)_{d}^{\perp}=I(Z_{1})_{d}^{\perp}+\dots+I(Z_{s})_{d}^{\perp}.\]
Therefore, we have an additive decomposition \(F=\sum_{i=1}^{s}F_{i}\) with \(F_{i}\in I(Z_{i})_{d}^{\perp}\). By Remark4.2 there are \(G_{i}\in\mathcal{S}_{D_{i}}\) and \(H_{i}\in\mathcal{R}_{D_{i}-d}\) such that \(Z_{G_{i},L_{i}}\subseteq Z_{i}\) and \(H_{i}\circ G_{i}=F_{i}\). By [1, Lemma 3] we know that \(h_{i}\mathbin{\rightharpoonup}(g_{i})_{L_{i}}\) and \((f_{i})_{L_{i}}\) are equal up to degree \(d\), but since \(I(Z_{i})\supseteq\wp_{L_{i}}^{d+1}\), the degree of the local generator \(h_{i}\mathbin{\rightharpoonup}(g_{i})_{L_{i}}\) is bounded by \(d\), so it equals \((f_{i})_{L_{i}}\). Hence, we have
\[\mathrm{Ann}^{\neg}\big{(}(f_{i})_{L_{i}}\big{)}=\mathrm{Ann}^{\neg}\big{(} h_{i}\mathbin{\rightharpoonup}(g_{i})_{L_{i}}\big{)}\supseteq\mathrm{Ann}^{ \neg}\big{(}(g_{i})_{L_{i}}\big{)},\]
therefore the natural apolar scheme \(Z_{F_{i},L_{i}}\) is contained in \(Z_{G_{i},L_{i}}\) and is apolar to \(F_{i}\). However, the scheme \(Z_{i}\) needs to be irredundant to \(F_{i}\), hence we conclude that \(Z_{F_{i},L_{i}}=Z_{G_{i},L_{i}}=Z_{i}\). Therefore \(Z\) is the scheme evinced by the additive decomposition \(\sum_{i=1}^{s}F_{i}\) supported at \(L_{1},\dots,L_{s}\).
In the following example, we observe that even if \(Z\) is evinced by a GAD of \(F\in\mathcal{S}_{d}\) and its components are contained in \((d+1)\)-fat points, \(Z\) may still be redundant to \(F\).
**Example 4.4**.: Consider the GAD \(F=X_{0}G_{1}+X_{1}^{2}G_{2}\in\mathcal{S}_{3}\), where
\[G_{1}=4X_{0}^{2}+2X_{0}X_{1}-4X_{1}^{2},\quad G_{2}=-3X_{0}-5X_{1}.\]
The scheme \(Z\) evinced by such GAD is given by the ideal
\[I(Z)=(Y_{0}^{2}Y_{1}^{3})=\wp_{X_{0}}^{3}\cap\wp_{X_{1}}^{2}\subset\mathcal{R}.\]
Its Hilbert function is \(\mathrm{HF}_{Z}=(1,2,3,4,5,5,\dots)\), hence it is not \(3\)-regular. We move the addendum in \(G_{1}\) containing \(X_{1}^{2}\) to \(G_{2}\), obtaining a different GAD supported at the same points: \(F=X_{0}^{2}\tilde{G}_{1}+X_{1}^{2}\tilde{G}_{2}\), where
\[\tilde{G}_{1}=4X_{0}+2X_{1},\quad\tilde{G}_{2}=-7X_{0}-5X_{1}.\]
The scheme \(\tilde{Z}\) evinced by the last GAD is by construction apolar to \(F\), and it is defined by
\[I(\tilde{Z})=(Y_{0}^{2}Y_{1}^{2})=\wp_{X_{0}}^{2}\cap\wp_{X_{1}}^{2}\subset \mathcal{R}.\]
The Hilbert function of \(\tilde{Z}\neq Z\) is \(\mathrm{HF}_{\tilde{Z}}=(1,2,3,4,4,\dots)\), and clearly \(I(Z)\subseteq I(\tilde{Z})\subseteq\mathrm{Ann}(F)\). Hence, \(Z\) is redundant.
It can be directly verified that \(\tilde{Z}\) is irredundant to \(F\) (e.g. as in Example5.10), but not minimal, since
\[I_{W}=(79Y_{0}^{2}-166Y_{0}Y_{1}+88Y_{2}^{2})\subset\mathcal{R}\]
is evinced by the (unique) Waring decomposition of \(F\), defining a scheme of length \(2\) apolar to \(F\).
**Corollary 4.5**.: _Let \(L_{1},\dots,L_{s}\in\mathcal{S}_{1}\) and \(Z=Z_{1}\cup\dots\cup Z_{s}\) be a \(0\)-dimensional scheme apolar to \(F\in\mathcal{S}_{d}\) such that for every \(i\in\{1,\dots,s\}\) we have \(I(Z_{i})\supset\wp_{L_{i}}^{\tilde{k}_{i}+1}\) with \(\tilde{k}_{i}\leq d\). Then \(Z\) contains a scheme evinced by a GAD of \(F\) as in eq.4, with \(k_{i}\leq\tilde{k}_{i}\)._
Proof.: Let \(Y=Y_{1}\cup\ldots\cup Y_{s}\subseteq Z\) be non-redundant and apolar to \(F\), with \(Y_{i}\subseteq Z_{i}\). Then, it is enough to apply the proof of Proposition 4.3 to \(Y\), since
\[I(Y_{i})_{d}^{\perp}\subseteq I(Z_{i})_{d}^{\perp}\subseteq(\wp_{i}^{\tilde{k}_ {i}+1})_{d}^{\perp}=\langle L_{i}^{d-\tilde{k}_{i}}Q\ :\ Q\in\mathcal{S}_{\tilde{k}_{i}}\rangle,\]
where the last equality is a classical result, see e.g. [1, Theorem 3.2]. We conclude that \(I(Y_{i})\) is evinced by \(F_{i}=L_{i}^{d-\tilde{k}_{i}}Q_{i}\), which becomes a valid (local) GAD after collecting all the factors \(L_{i}\) in \(Q_{i}\). Thus, \(Y\) is evinced by the GAD \(F=\sum_{i=1}^{s}F_{i}\) supported at \(L_{1},\ldots,L_{s}\).
In the following example, we show that the degree \(D\) of the polynomial \(G\) from Remark 4.2 may well exceed \(d=\deg(F)\). We thank J. Buczynski for pointing it out.
**Example 4.6**.: Consider the following polynomial:
\[F =24\,X_{0}^{3} +70\,X_{0}^{2}X_{1}+75\,X_{0}^{2}X_{2}+70\,X_{0}^{2}X_{3}+180\,X _{0}^{2}X_{4}+10\,X_{0}^{2}X_{5}+10\,X_{0}X_{1}^{2}\] \[+70\,X_{0}X_{2}^{2}+360\,X_{0}X_{2}X_{3}+120\,X_{0}X_{2}X_{4}+60\, X_{0}X_{3}^{2}+60\,X_{2}^{2}X_{3}\in\mathcal{S}_{3},\]
and let \(Z\) be the scheme defined by the ideal
\[I(Z)=(-Y_{0}Y_{3}+Y_{2}^{2},\,-Y_{1}Y_{4}+Y_{2}Y_{3},\,-Y_{1}Y_{ 5}+Y_{1}^{2},\,-6Y_{1}Y_{5}+Y_{2}Y_{4},\,-6Y_{1}Y_{5}+Y_{3}^{2},\] \[Y_{1}Y_{2},\,Y_{1}Y_{3},\,Y_{1}Y_{4},\,Y_{1}Y_{5},\,Y_{2}Y_{5},\, Y_{3}Y_{4},\,Y_{3}Y_{5},\,Y_{4}^{2},\,Y_{4}Y_{5},\,Y_{5}^{2})\subset\mathcal{R}.\]
One can computationally check that \(Z\) is a local \(0\)-dimensional scheme apolar to \(F\), of minimal length \(6\) and supported at \([X_{0}]\in\mathbb{P}^{n}\). One can also verify that it is the unique scheme of minimal length apolar to such \(F\), by explicitly computing minimal apolar schemes [1], or by observing that \(I(Z)=\operatorname{Ann}(F)\cap\mathcal{R}_{\leq 2}\) and the Hilbert function of \(\mathcal{R}/\operatorname{Ann}(F)\) is \((1,6,6,1)\). In particular, \(Z\) is non-redundant. Since \(I(Z)\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\vbox{\hbox{$\sim$}} \hrule height 0.0pt width 100 \kern-1.0pt\hbox{$\geq$}}}\limits}\phi_{X_{0}}^{4}\), by Proposition 4.3 there is no GAD of \(F\) that evinces this apolar scheme. However, as recalled in Remark 4.2, since \(I(Z)\supseteq\wp_{X_{0}}^{5}\) then \(Z\) is evinced by a GAD of a degree-\(4\) polynomial \(G\) having \(F\) among its partials. Indeed, let us consider the polynomial
\[G =6X_{0}^{4}+\frac{70}{3}X_{0}^{3}X_{1}+25X_{0}^{3}X_{2}+\frac{70}{ 3}X_{0}^{3}X_{3}+60X_{0}^{3}X_{4}+\frac{10}{3}X_{0}^{3}X_{5}+5X_{0}^{2}X_{1}^{2 }+35X_{0}^{2}X_{2}^{2}\] \[+180X_{0}^{2}X_{2}X_{3}+60X_{0}^{2}X_{2}X_{4}+30X_{0}^{2}X_{3}^{2} +60X_{0}X_{2}^{3}+60X_{0}X_{2}^{2}X_{3}+5X_{2}^{4}\in\mathcal{S}_{4}.\]
Note that \(Y_{0}\circ G=F\). Moreover, \(Z=Z_{G,X_{0}}\), i.e., it is evinced by the trivial GAD of \(G\) given by \(G=X_{0}^{0}G\). This example shows why the containment in \((d+1)\)-fat points is crucial for Proposition 4.3 and Corollary 4.5. In particular, we have that
\[g_{X_{0}} =120x_{2}^{4}+f_{X_{0}}=\] \[=120x_{2}^{4}+360x_{2}^{3}+120x_{2}^{2}x_{3}+20x_{1}^{2}+140x_{2} ^{2}+360x_{2}x_{3}+120x_{2}x_{4}+120x_{3}^{2}+140x_{1}+150x_{2}\] \[+140x_{3}+360x_{4}+20x_{5}+144.\]
We observe that \(g_{X_{0}}\) and \(f_{X_{0}}\) are equal up to degree \(3\), but since
\[(y_{2}^{2}-y_{3})\mathbin{\mathop{\kern 0.0pt\hbox to 0.0pt{\vbox{\hbox{$\sim$}} \hrule height 0.0pt width 100 \kern-1.0pt\hbox{$\sim$}}}\limits}f_{X_{0}}=-120x_{2}^{2}\neq 0,\]
then \(\operatorname{Ann}^{\neg}(g_{X_{0}})\not\subseteq\operatorname{Ann}^{\neg}(f_ {X_{0}})\).
## 5. Regularity of schemes evicing GADs
### Apolar schemes with low multiplicities and independent supports
For a given \(L\in\mathcal{S}_{1}\), let \(D_{L}=L^{\perp}\cap\mathcal{R}_{1}\) and \(D_{L}^{\varepsilon}\subset\operatorname{Sym}^{\varepsilon}\!\mathcal{R}_{1}\) be its \(e\)-th symmetric power. We also define the k-vector spaces
\[\mathcal{D}_{L}^{\varepsilon}(F)=\langle H\circ F\ :\ H\in D_{L}^{\varepsilon} \rangle\subseteq\mathcal{S}_{d-e},\]
and given a vector space \(V\subseteq\mathcal{S}_{m}\) and \(H\in\mathcal{S}_{l}\), we write
\[H\cdot V=\{HF\ :\ F\in V\}\subseteq\mathcal{S}_{l+m}.\]
With the notation of the previous sections, as in [1, Remark 3], we have
\[I(Z_{F,L})_{d}^{\perp}=\mathbb{P}\left(\sum_{e=0}^{d}L^{\varepsilon}\cdot \mathcal{D}_{L}^{\varepsilon}(F)\right)\subset\mathbb{P}(\mathcal{S}_{d}).\]
When \(F=L^{d-k}G\), from the above equality and the chain rule of derivation we get
\[I(Z_{L^{d-k}G,L})_{d}^{\perp}=\mathbb{P}\left(\sum_{e=0}^{k}L^{d-k+e}\cdot \mathcal{D}_{L}^{e}(G)\right)\subset\mathbb{P}(\mathcal{S}_{d}).\]
**Remark 5.1**.: Let \(Z=\cup_{i=1}^{s}Z_{i}\subset\mathbb{P}^{n}\) be the irreducible decomposition of a \(0\)-dimensional scheme. Then \(Z\) is \(h\)-regular precisely when \(\dim I(Z)_{h}^{\perp}=\deg(Z)=\sum_{i=1}^{s}\deg(Z_{i})\), therefore there cannot be \(\Bbbk\)-linear relations involving generators of \(I(Z_{i})_{h}^{\perp}\) for different \(i\)'s.
If there is a relation between different \(I(Z_{i})_{d}^{\perp}\) as in Remark 5.1, the scheme \(Z\) is not \(d\)-regular. However, the following proposition shows that if such \(Z\) is evinced by a GAD of \(F\in\mathcal{S}_{d}\) and is irredundant to it, such a relation cannot involve addenda appearing in that GAD.
**Proposition 5.2**.: _Let \(Z\) be the scheme evinced by the GAD \(F=\sum_{i=1}^{s}L_{i}^{d-k_{i}}G_{i}\in\mathcal{S}_{d}\). If, for some \(i\in\{1,\ldots,s\}\), we have_
\[L_{i}^{d-k_{i}}G_{i}\in\sum_{1\leq e_{i}\leq k_{i}}L_{i}^{d-k_{i}+e_{i}}\cdot \mathcal{D}_{L_{i}}^{e_{i}}(G_{i})+\sum_{\begin{subarray}{c}1\leq j\leq s\\ j\neq i\end{subarray}}\sum_{0\leq e_{j}\leq k_{j}}L_{j}^{d-k_{j}+e_{j}}\cdot \mathcal{D}_{L_{j}}^{e_{j}}(G_{j}), \tag{6}\]
_then \(Z\) is redundant to \(F\). It is intended that the first sum in eq. (6) is empty if \(k_{i}=0\)._
Proof.: Without loss of generality, we may assume that in eq. (6) we have \(i=1\). We define a scheme \(Z^{\prime}\) apolar to \(F\) as follows.
\(\bullet\) If \(k_{1}=0\), by eq. (6), we simplify the GAD as \(F=\sum_{j=2}^{s}L_{j}^{d-k_{i}}G_{j}^{\prime}\) with \(G_{j}^{\prime}\in\sum_{e=0}^{k_{i}}L_{j}^{e}D_{L_{j}}^{e}(G_{j})\). We call \(Z^{\prime}\) the scheme evinced by this GAD of \(F\).
\(\bullet\) If \(k_{1}>0\), we replace \(L_{1}^{d-k_{1}}G_{1}\) in the GAD of \(F\) with the linear combination deduced from eq. (6). In particular, there are elements \(H_{j,e_{j}}\in\mathcal{D}_{L_{j}}^{e_{j}}\) and integers \(m_{j}\in\mathbb{N}\) such that we can write
\[F=\sum_{j=1}^{s}L_{j}^{d-k_{j}+m_{j}}\left(\sum_{m_{j}\leq e_{j}\leq k_{j}}L_{ j}^{e_{j}-m_{j}}\left(H_{j,e_{j}}\circ G_{j}\right)\right). \tag{7}\]
Since \(k_{1}>0\), then we have \(m_{1}\geq 1\) in eq. (7). The last equation is a GAD of \(F\) up to deleting vanishing addenda and, for all the others, choosing \(m_{j}\) such that \(H_{j,m_{j}}\neq 0\). Let \(Z^{\prime}\) be the scheme evinced by the new GAD in eq. (7).
By construction, \(Z^{\prime}\) is apolar to \(F\), so it is sufficient to show that \(Z^{\prime}\subsetneq Z\).
Following the notation introduced in Section 3.1, let \(g_{j}=(g_{j})_{L_{j}}\in\underline{\mathcal{S}}\) be the de-homogenization of \((G_{j})_{\mathrm{dp}}\) with respect to \(L_{j}\), and let \(h_{j,e_{j}}\in\underline{\mathcal{R}}\) be the dehomogenization of \(H_{j,e_{j}}\) with respect to the dual linear form \(L_{j}^{*}\) of \(L_{j}\). Since \(H_{j,e_{j}}\in D_{L_{j}}^{e_{j}}\subset\mathrm{Sym}^{e_{j}}L_{j}^{\perp}\), then \(H_{j,e_{j}}\) does not involve \(L_{j}^{*}\), so its dehomogenization \(h_{j,e_{j}}\) is equal to \(H_{j,e_{j}}\). Thus, the de-homogenization of \((H_{j,e_{j}}\circ G_{j})_{\mathrm{dp}}=H_{j,e_{j}}\lrcorner(G_{j})_{\mathrm{dp}}\) with respect to \(L_{j}\) coincides with \(h_{j,e_{j}}\lrcorner g_{j}\). In particular, the \(j\)-th component of \(Z^{\prime}\) is defined by \(\mathrm{Ann}\lnot\left(\sum_{m_{j}\leq e_{j}\leq k_{j}}h_{j,e_{j}}\lrcorner g_ {j}\right)\). Since
\[\mathrm{Ann}\lnot\left(\sum_{m_{j}\leq e_{j}\leq k_{j}}h_{j,e_{j}}\lrcorner g_ {j}\right)\supseteq\mathrm{Ann}\lnot(g_{j}),\]
we deduce that \(Z^{\prime}\subseteq Z\). We now show that this containment is proper.
In the case \(k_{1}=0\), then the containment is strict because \(Z^{\prime}\) has no support on \([L_{1}]\). In the case \(k_{1}>0\), since \(m_{1}\geq 1\), \(\deg(\sum_{m_{i}\leq e_{i}\leq k_{i}}h_{1,e_{i}}-g_{1})<\deg(g_{1})\) so the socle degree of the first component of \(I(Z^{\prime})\) and \(I(Z)\) are different, so again they must be different.
**Proposition 5.3**.: _Let \(s>1\) and \(L_{1},\ldots,L_{s}\in\mathcal{S}_{1}\) be \(\Bbbk\)-linearly independent forms and \(Z\) be the scheme evinced by a GAD of \(F\in\mathcal{S}_{d}\) as in eq. (4). If either_
1. \(d>\max_{i\neq j}\{k_{i}+k_{j}\}\)_, or_
2. \(d>\max_{i\neq j}\{k_{i}+k_{j}-2\}\) _and_ \(Z\) _is irredundant,_
_then \(Z\) is \(d\)-regular._
Proof.: For every \(1\leq i\leq s\) we let \(Z_{i}\) be the natural apolar scheme to \(F_{i}=L_{i}^{d-k_{i}}G_{i}\) supported at \(L_{i}\), so \(Z=\cup_{i=1}^{s}Z_{i}\). By Remark4.2 each \(Z_{i}\) is \(d\)-regular, therefore \(\dim I(Z_{i})_{d}^{\perp}=\deg(Z_{i})\). By Remark5.1, we only need to show that there are no \(\Bbbk\)-linear relations involving generators of \(I(Z_{i})_{d}^{\perp}\) for different \(i\)'s. If there was such a relation, there should exist \(Q_{i}\in\sum_{e=0}^{k_{i}}L_{i}^{e}\cdot D_{L_{i}}^{e}(G_{i})\), for \(i=1,\ldots,s\), such that
\[L_{i}^{d-k_{i}}Q_{i}=\sum_{i\neq j}L_{j}^{d-k_{j}}Q_{j}. \tag{8}\]
Since the \(L_{i}\)'s are linearly independent, up to a change of coordinates we can write the above as
\[X_{i}^{d-k_{i}}\tilde{Q}_{i}=\sum_{i\neq j}X_{j}^{d-k_{j}}\tilde{Q}_{j}.\]
In case 1, the hypothesis \(d-k_{i}>k_{j}=\deg(Q_{j})=\deg(\tilde{Q}_{j})\) prevents form factoring \(X_{i}^{d-k_{i}}\) out of the right-hand side of the above equation. Thus, no such relation may hold.
In case 2, since \(Z\) is irredundant, by Proposition5.2 we may assume that any relation between the \(I(Z_{i})_{d}^{\perp}\)'s does not involve any of the terms \(L_{i}^{d-k_{i}}G_{i}\). Thus, eq.8 actually leads to a relation of the form
\[X_{i}^{d-k_{i}+1}\tilde{Q}_{i}=\sum_{i\neq j}X_{j}^{d-k_{j}+1}\tilde{Q}_{j}.\]
As in the previous case, the factor \(X_{i}^{d-k_{i}+1}\) cannot appear on the right-hand side of the above sum due to \(d>\max_{i\neq j}\{k_{i}+k_{j}\}-2\).
In conclusion, in both cases, the \(I(Z_{i})_{d}^{\perp}\)'s cannot intersect, so \(Z\) is \(d\)-regular.
**Remark 5.4**.: We note that requiring \(s>1\) in Proposition5.3 is not restrictive, as in the local case (\(s=1\)) Remark4.2 already contains a stronger result.
An immediate corollary of Proposition5.3 is the following.
**Corollary 5.5**.: _Let \(Z\) be the scheme evinced by the GAD \(F=\sum_{i=1}^{s}L_{i}^{d-k_{i}}G_{i}\in\mathcal{S}_{d}\), such that \(L_{1},\ldots,L_{s}\) are \(\Bbbk\)-linearly independent and \(k_{i}<\frac{d}{2}\) for every \(i\in\{1,\ldots,s\}\). Then \(Z\) is \(d\)-regular._
**Corollary 5.6**.: _Let \(Z=\bigcup_{i=1}^{s}Z_{i}\) be a \(0\)-dimensional scheme apolar and irredundant to \(F\in\mathcal{S}_{d}\), such that \(I(Z_{i})\supset\wp_{L_{i}}^{\lceil\frac{d}{2}\rceil+1}\) and the \(L_{i}\) are \(\Bbbk\)-linearly independent. Then \(Z\) is \(d\)-regular._
Proof.: It follows by Corollary4.5 together with Corollary5.5.
**Remark 5.7**.: We notice that every requirement of Proposition5.3 is sharp. In fact, Example4.4 shows that the inequality in 1 cannot be improved: if \(d=\max_{i\neq j}\{k_{i}+k_{j}\}\) the scheme \(Z\) may be not \(d\)-regular. Similarly, the following Example5.8 shows that the inequality in 2 is also sharp. Finally, Example5.10 will show that the \(\Bbbk\)-linear independence of the supports is also needed.
The following example shows that schemes that are irredundant to \(F\in\mathcal{S}_{d}\) may be not \(d\)-regular.
**Example 5.8**.: Let us consider the scheme \(Z\) evinced by the GAD \(F=X_{0}G_{1}+X_{1}G_{2}\in\mathcal{S}_{4}\), where
\[G_{1} =10X_{0}^{3}-4X_{0}^{2}X_{1}+4X_{0}^{2}X_{2}-4X_{0}X_{1}^{2}-8X_{ 0}X_{1}X_{2}-3X_{0}X_{2}^{2}-8X_{1}^{3}-4X_{2}^{3}\in\mathcal{S}_{3},\] \[G_{2} =5X_{0}^{3}+9X_{0}X_{1}^{2}-5X_{1}^{3}-7X_{1}^{2}X_{2}+6X_{1}X_{ 2}^{2}-X_{2}^{3}\in\mathcal{S}_{3}.\]
Its defining ideal is
\[I(Z)=(Y_{0}^{3}Y_{1}^{3}-2Y_{0}^{3}Y_{2}^{3}+5Y_{1}^{3}Y_{2}^{3},3Y_{0}^{2}Y_{ 1}Y_{2}-2Y_{0}Y_{2}^{3},Y_{0}Y_{1}^{2}Y_{2},Y_{0}Y_{1}Y_{2}^{2},Y_{4}^{4}),\]
whose minimal primary decomposition is \(I(Z)=I_{1}\cap I_{2}\), where
\[I_{1}=(-3Y_{0}Y_{1}Y_{2}+Y_{1}^{3},Y_{1}^{2}Y_{2},Y_{1}Y_{2}^{2},Y_{1}Y_{2}^{3} -2Y_{2}^{3}),\quad I_{2}=(Y_{2}^{4},Y_{0}^{3}+5Y_{2}^{3},Y_{0}Y_{2}).\]
Its Hilbert function is \(\operatorname{HF}_{Z}=(1,3,6,10,11,12,12,\ldots)\), hence \(Z\) is not regular in degree \(4=\deg(F)\).
**Claim**.: \(Z\) _is irredundant to \(F\)._
Proof of Claim.: The connected components of \(Z\) are both contained in \(4\)-fat points, i.e. \(I_{i}\supset\wp_{X_{i-1}}^{4}\), hence by Corollary4.5 it is sufficient to show that the unique scheme \(Y\subseteq Z\) evinced by a GAD of \(F\) of type \(F=X_{0}^{a_{0}}Q_{1}+X_{1}^{a_{1}}Q_{2}\) with \(a_{0},a_{1}\geq 1\) is \(Z\) itself. Since in the expression of \(F\) appear the monomials \(-4X_{0}X_{2}^{3}\) and \(-X_{1}X_{2}^{3}\), then it is easy to see that there is no such a GAD of \(F\) for \(a_{0}>1\) or \(a_{1}>1\), therefore we assume \(a_{0}=a_{1}=1\).
Since this new additive decomposition is still equal to \(F\), we have
\[X_{0}(Q_{1}-G_{1})+X_{1}(Q_{2}-G_{2})=0,\]
hence there is \(T\in\mathcal{S}_{2}\) such that
\[X_{1}T=Q_{1}-G_{1},\quad X_{2}T=-Q_{2}+G_{2}.\]
This means that \(Y\) is evinced by a GAD of \(F\) of type
\[F=X_{0}(G_{1}+X_{1}T)+X_{1}(G_{2}-X_{0}T),\]
for some
\[T=\lambda_{1}X_{0}^{2}+\lambda_{2}X_{0}X_{1}+\lambda_{3}X_{0}X_{2}+\lambda_{4 }X_{1}^{2}+\lambda_{5}X_{1}X_{2}+\lambda_{6}X_{2}^{2}\in\mathcal{S}_{2}.\]
If \(Y=Y_{1}\cup Y_{2}\subseteq Z\), then we have
\[I_{1}\subseteq I(Y_{1})=I(Z_{X_{0}(G_{1}+X_{1}T),X_{0}})\subseteq\operatorname{ Ann}\bigl{(}X_{0}(G_{1}+X_{1}T)\bigr{)},\]
which implies
\[\begin{cases}0=(-3Y_{0}Y_{1}Y_{2}+Y_{1}^{3})\circ\bigl{(}X_{0}(G_{1}+X_{1}T) \bigr{)}=6(-\lambda_{3}+\lambda_{4})X_{0}-6\lambda_{5}X_{1}-6\lambda_{6}X_{2},\\ 0=(Y_{2}^{3}Y_{2})\circ\bigl{(}X_{0}(G_{1}+X_{1}T)\bigr{)}=2\lambda_{5}X_{0}, \\ 0=(Y_{1}Y_{2}^{2})\circ\bigl{(}X_{0}(G_{1}+X_{1}T)\bigr{)}=2\lambda_{6}X_{0}, \\ 0=(Y_{1}^{3}-2Y_{2}^{3})\circ\bigl{(}X_{0}(G_{1}+X_{1}T)\bigr{)}=6\lambda_{4}X _{0}.\end{cases}\]
Similarly, from \(I_{2}\subseteq\operatorname{Ann}\bigl{(}X_{1}(G_{2}+X_{0}T)\bigr{)}\) we obtain
\[\begin{cases}0=(Y_{2}^{4})\circ\bigl{(}X_{1}(G_{2}+X_{0}T)\bigr{)}=0,\\ 0=(Y_{0}^{3}+5Y_{3}^{3})\circ\bigl{(}X_{1}(G_{2}+X_{0}T)\bigr{)}=-6\lambda_{1} X_{1},\\ 0=(Y_{0}Y_{2})\circ\bigl{(}X_{1}(G_{2}+X_{0}T)\bigr{)}=-2\lambda_{3}X_{0}X_{1} -\lambda_{5}X_{1}^{2}-2\lambda_{6}X_{1}X_{2}.\end{cases}\]
The above systems imply
\[\lambda_{1}=\lambda_{3}=\lambda_{4}=\lambda_{5}=\lambda_{6}=0,\]
thus we conclude that \(T=\lambda_{2}X_{0}X_{1}\). We computationally verify that the scheme evinced by the GAD
\[X_{0}(G_{1}+\lambda_{2}X_{0}X_{1}^{2})+X_{1}(G_{2}-\lambda_{2}X_{0}^{2}X_{1})\]
is independent on \(\lambda_{2}\in\Bbbk\), and it is always equal to \(I(Z)\). Therefore, we conclude that \(Y=Z\), i.e. \(Z\) is irredundant.
The proof of the above claim shows an effective way for establishing irredundancy to \(F\) by symbolically testing its GADs.
### Tangential decompositions
In this section, we prove that if a minimal apolar scheme to \(F\in\mathcal{S}_{d}\) is a union of simple points and \(2\)_-jets_ (i.e. local \(0\)-dimensional schemes of length \(2\)), then it is \(d\)-regular. Such schemes are evinced by GADs as in eq.9, which are called _tangential decompositions_ due to their relation with secant varieties of tangential varieties of Veronese varieties [1, 1].
**Proposition 5.9**.: _Let \(Z=Z_{1}\cup\ldots\cup Z_{r}\) such that \(\operatorname{len}(Z_{i})\leq 2\) for every \(i\in\{1,\ldots,r\}\). If \(Z\) is of minimal length among the apolar schemes to \(F\in\mathcal{S}_{d}\), then \(Z\) is \(d\)-regular._
Proof.: By Corollary4.5, \(Z\) is evinced by a GAD of \(F\) of type
\[F=\sum_{i=1}^{s}L_{i}^{d-1}G_{i}+\sum_{i=s+1}^{r}L_{i}^{d} \tag{9}\]
for some \(0\leq s\leq r\) and \(L_{i},G_{i}\in\mathcal{S}_{1}\). Moreover, we have
\[I(Z)_{d}^{\perp}=\langle L_{i}^{d},L_{j}^{d-1}G_{j}\rangle_{\begin{subarray}{c}1 \leq i\leq r\\ 1\leq j\leq s\end{subarray}}.\]
Since \(\operatorname{len}(Z)\) is \(r+s\), which is also equal to the number of generators of \(I(Z)_{d}^{\perp}\), in order to prove that \(Z\) is \(d\)-regular it is sufficient to show that all those generators are \(\Bbbk\)-linearly independent. We prove that if there is a linear relation between the \(L_{i}^{d}\)'s and the \(L_{j}^{d-1}G_{j}\)'s as above, then we can explicitly produce an apolar scheme that has smaller length than \(Z\), contradicting its minimality.
When such a relation involves an addenda appearing in the above GAD of \(F\), then \(Z\) is redundant by Proposition5.2, contradicting the minimality. Thus, we only need to show that \(L_{1}^{d},\ldots,L_{s}^{d}\) are linearly independent. We will prove a stronger fact, namely that \(L_{1}^{d-1},\ldots,L_{s}^{d-1}\) are linearly independent. Suppose by contradiction that \(L_{1}^{d-1}=\sum_{i=2}^{s}\lambda_{i}L_{i}^{d-1}\) for some \(\lambda_{i}\in\Bbbk\). By substituting this relation in the above GAD, we get
\[F=\sum_{i=2}^{s}L_{i}^{d-1}(G_{i}+\lambda_{i}G_{1})+\sum_{i=s+1}^{r}L_{i}^{d}.\]
The scheme \(Z^{\prime}\) evinced by this new GAD of \(F\) has length at most \(s+r-2=\operatorname{len}(Z)-2<\operatorname{len}(Z)\).
Notice that in the proof of Proposition5.9 we have employed the length-minimality of the scheme \(Z\) apolar to \(F\). Indeed, the irredundancy of an apolar scheme of 2-jets is not sufficient to guarantee the regularity in the degree of \(F\), as shown in the following example.
**Example 5.10**.: Let \(Z\) be the scheme evinced by the GAD
\[F=X_{0}^{2}X_{2}+X_{1}^{2}X_{3}+(X_{0}+X_{1})^{2}X_{4}+(X_{0}-X_{1})^{2}(X_{2 }-3X_{3}-2X_{4})+(X_{0}+2X_{1})^{2}(X_{2}+X_{3}+X_{4})\in\mathcal{S}_{3}.\]
It is easy to check that \(F\) is written in essential variables [14, 15], and that \(Z\) is the union of five 2-jets \(Z_{1},\ldots,Z_{5}\) supported on points \([L_{1}],\ldots,[L_{5}]\in\mathbb{P}^{n}\) of the rational normal cubic. Its Hilbert function is \(\operatorname{HF}_{Z}=(1,5,8,9,10,10,\ldots)\), therefore \(Z\) is not regular in degree \(3=\deg(F)\).
However, \(Z\) is irredundant: any proper subscheme of \(Z=\cup_{i=1}^{5}Z_{i}\) has to be contained in one of the following, for \(i\in\{1,\ldots,5\}\):
\[Y_{i}=[L_{i}]\cup\bigcup_{j\neq i}Z_{j}.\]
We computationally verify that for every \(i\) we have \(I(Y_{i})\subsetneq\operatorname{Ann}(F)\), therefore no proper subscheme of \(Z\) is apolar to \(F\).
We now verify that the strategy of Proposition5.9 produces an apolar scheme that is shorter than \(Z\), but not contained in it. Substituting the relation
\[(X_{0}-X_{1})^{2}=2X_{0}^{2}+2X_{1}^{2}-(X_{0}+X_{1})^{2}\]
we obtain the new GAD of \(F\):
\[X_{0}^{2}(3X_{2}-6X_{3}-4X_{4})+X_{1}^{2}(2X_{2}-5X_{3}-4X_{4})+(X_{0}+X_{1})^ {2}(-X_{2}+3X_{3}+3X_{4})+(X_{0}+2X_{1})^{2}(X_{2}+X_{3}+X_{4}).\]
The scheme evinced by this GAD has length \(8\) but is not contained in \(Z\). We can repeat the procedure with the relation
\[(X_{0}+2X_{1})^{2}=2(X_{0}+X_{1})^{2}-X_{0}^{2}+2X_{1}^{2},\]
which leads us to another GAD
\[F=X_{0}^{2}(2X_{2}-7X_{3}-5X_{4})+X_{1}^{2}(4X_{2}-3X_{3}-2X_{4})+(X_{0}+X_{1}) ^{2}(X_{2}+5X_{3}+5X_{4}).\]
The scheme evinced the last GAD is minimal among the apolar schemes to \(F\): as it has length \(6\) and, up to a change of variables, \(F\) is the Perazzo cubic [11] which has cactus rank \(6\) (see eg. [1, Example 2.8], [1, Section 4]). This can also be directly verified with [1, Algorithm 3].
### Apolar schemes with low length
**Proposition 5.11**.: _Let \(Z\subset\mathbb{P}^{n}\) be a \(0\)-dimensional scheme apolar and irredundant to \(F\in\mathcal{S}_{d}\). If \(\operatorname{len}(Z)\leq 2d+1\), then \(Z\) is \(d\)-regular._
Proof.: By contradiction, let us assume that \(Z\) is not \(d\)-regular. Then, by [1, Lemma 34], there exists a line \(L\) such that \(\operatorname{len}(Z\cap L)\geq d+2\). Let \(\operatorname{Res}_{L}(Z)\) be the residual scheme of \(Z\) with respect to \(L\) defined by the colon ideal \(\big{(}I(Z):(L)\big{)}\). Since
\[\operatorname{len}(Z\cap L)+\operatorname{len}\bigl{(}\operatorname{Res}_{L}(Z )\bigr{)}=\operatorname{len}(Z)\leq 2d+1,\]
then given the irreducible decomposition \(Z=Z_{1}+\dots+Z_{s}\), there exists a component \(Z_{i}\) such that the schematic intersection \(Z_{i}\cap L\) satisfies \(\operatorname{len}(Z_{i}\cap L)>\operatorname{len}(\operatorname{Res}_{L}(Z_{ i}))\). Without loss of generality, we may assume that \(i=1\), \(I(Z_{1})\subseteq\wp_{X_{0}}\) and \(I(L)=(X_{1},\dots,X_{n})\). Let \(H\) be the orthogonal hyperplane to \(X_{0}\), i.e. \(I(H)=(X_{0})\), and let \(m=\operatorname{len}(Z_{1}\cap L)\). We consider the scheme \(Z^{\prime}\) defined by
\[I(Z^{\prime})=I\left(Z_{1}\cap(m-1)H\right)\cap I(Z_{2})\cap\dots\cap I(Z_{s}).\]
It is clear that \(Z^{\prime}\subsetneq Z\), hence to get the desired contradiction it is sufficient to show that \(Z^{\prime}\) is apolar to \(F\), which follows directly from the following fact by the Apolarity Lemma (Lemma 2.3).
**Claim 5.1**.: \(I(Z)_{d}^{\perp}=I(Z^{\prime})_{d}^{\perp}\)_._
Proof of Claim 5.1.: Since \(m>\operatorname{len}\bigl{(}\operatorname{Res}(Z_{1})\bigr{)}\) we have \((X_{0}^{m-1})\cap(X_{1},\dots,X_{n})\subseteq I(Z_{1})\), hence
\[I(Z_{1})=\big{(}I(Z_{1})+(X_{0}^{m-1})\big{)}\cap\big{(}I(Z_{1})+(X_{1},\dots, X_{m})\big{)}.\]
\(\bullet\) We prove that \(I(Z_{1})+(X_{0}^{m-1})\) equals the saturated ideal \(I\big{(}Z_{1}\cap(m-1)H\big{)}\).
There are obvious ideal inclusions:
\[I(Z_{1})\subseteq I(Z_{1})+(X_{0}^{m-1})\subseteq I\big{(}Z_{1}\cap(m-1)H \big{)}. \tag{10}\]
It is enough to show that the last two ideals have the same Hilbert function. Since \(Z_{1}\cap(m-1)H\) has colength \(1\) inside \(Z_{1}\) and their homogeneous defining ideals agree up to degree \(m-2\), we deduce that
\[\operatorname{HF}_{Z_{1}\cap(m-1)H}(i)=\begin{cases}\operatorname{HF}_{Z_{1}} (i)&\text{for }i\leq m-2,\\ \operatorname{HF}_{Z_{1}}(i)-1&\text{for }i\geq m-1.\end{cases}\]
By eq. (10) the Hilbert function \(\operatorname{HF}_{*}\) of \(\mathcal{S}\big{/}\big{(}I(Z_{1})+(X_{0}^{m-1})\big{)}\) is squeezed: \(\operatorname{HF}_{Z_{1}\cap(m-1)H}\leq\operatorname{HF}_{*}\leq\operatorname {HF}_{Z_{1}}\). However, for every \(k\geq m-1\) we have \(X_{0}^{k}\in\big{(}I(Z_{1})+(X_{0}^{m-1})\big{)}\setminus I(Z_{1})\), thus \(\operatorname{HF}_{*}(k)<\operatorname{HF}_{Z_{1}}(k)\) for every \(k\geq m-1\). This implies that \(\operatorname{HF}_{*}\) completely agrees with \(\operatorname{HF}_{Z_{1}\cap(m-1)H}\).
\(\bullet\) For every \(i\in\{2,\dots,s\}\), we trivially have
\[I(Z_{i})=I(Z_{i})\cap\big{(}I(Z_{i})+(X_{1},\dots,X_{n})\big{)}.\]
Hence, we can write:
\[I(Z) =I\big{(}Z_{1}\cap(m-1)H\big{)}\cap\big{(}I(Z_{1})+(X_{1},\dots,X_ {m})\big{)}\cap\bigcap_{i=2}^{s}\big{(}I(Z_{i})+(X_{1},\dots,X_{n})\big{)}\cap I (Z_{i})\] \[=I(Z^{\prime})\cap\left(\bigcap_{i=1}^{s}I(Z_{i})+(X_{1},\dots,X_ {n})\right)=I(Z^{\prime})\cap I(Z\cap L).\]
\(\bullet\) From the non-degeneracy of the apolar action we get
\[I(Z)_{d}^{\perp}=[I(Z^{\prime})\cap I(Z\cap L)]_{d}^{\perp}=I(Z^{\prime})_{d}^ {\perp}+I(Z\cap L)_{d}^{\perp}\]
but \(I(Z\cap L)_{d}=I(Z^{\prime}\cap L)_{d}\) because they define schemes of length \(d+1\) on the same normal curve \(\nu_{d}(L)\subset\mathbb{P}^{d}\). Thus, we conclude
\[I(Z)_{d}^{\perp}=I(Z^{\prime})_{d}^{\perp}+I(Z^{\prime}\cap L)_{d}^{\perp}=I(Z^ {\prime})_{d}^{\perp},\]
which proves the claim and then concludes the proof.
We notice that Proposition 5.11 provides a good criterion for proving that the minimal apolar schemes to a _given_\(F\in\mathcal{S}_{d}\) is \(d\)-regular, by exhibiting at least one scheme \(Z\) apolar to \(F\) and of length not bigger than \(2d+1\).
**Example 5.12**.: Let \(F\in\mathcal{S}_{4}\) be the polynomial considered in Example 5.8. We consider another GAD \(F=X_{0}\tilde{G}_{1}+X_{1}\tilde{G}_{2}\), where
\[\tilde{G}_{1} =10X_{0}^{3}+X_{0}^{2}X_{1}+4X_{0}^{2}X_{2}-4X_{0}X_{1}^{2}-8X_{0} X_{1}X_{2}-3X_{0}X_{2}^{2}-4X_{2}^{3}\in\mathcal{S}_{3},\] \[\tilde{G}_{2} =X_{0}X_{1}^{2}-5X_{1}^{3}-7X_{1}^{2}X_{2}+6X_{1}X_{2}^{2}-X_{2}^{ 3}\in\mathcal{S}_{3}.\]
This GAD evinces the scheme \(\tilde{Z}\) defined by
\[I(\tilde{Z})=\left(Y_{0}^{2}Y_{1}Y_{2}-\frac{2}{3}Y_{0}Y_{2}^{3},Y_{0}Y_{1}Y_{ 2}^{2},Y_{2}^{4},Y_{0}Y_{1}^{2}-\frac{5}{2}Y_{0}Y_{1}Y_{2}+Y_{2}^{3}\right).\]
Its Hilbert function is \(\mathrm{HF}_{Z}=(1,3,6,9,9,\dots)\). Since \(\mathrm{len}(Z)=9\leq 2\cdot 4+1\), by Proposition 5.11 we can guarantee that minimal schemes apolar to such a \(F\) are \(4\)-regular, even without computing them.
## 6. Conclusion
In the present work, we investigated the \(d\)-regularity of certain families of schemes apolar to \(F\in\mathcal{S}_{d}\). In all the examples we presented, the schemes of minimal lengths were \(d\)-regular, so it is natural to ask whether this is always the case.
**Question 1**.: _Let \(F\in\mathcal{S}_{d}\) and \(Z\) be a \(0\)-dimensional scheme evincing its cactus rank. Is \(Z\)\(d\)-regular?_
Actually, a careful reader would have noticed that none of the examples we considered really required to reach degree \(d\) for regularity, hence we may state an even more compelling question.
**Question 2**.: _Let \(F\in\mathcal{S}_{d}\) and \(Z\) be a \(0\)-dimensional scheme evincing its cactus rank. Is \(Z\)\((d-1)\)-regular?_
To the best of our knowledge, we do not know the answer to Question 1 and Question 2. We believe that our results and examples could be useful in either direction. Our positive results restrict the identifkit of a possible example providing a negative answer to Question 1 to have some component of high multiplicity. On the other side, when trying to prove a positive answer to Question 1 or Question 2, Example 4.4 shows that we really need to use the _global_ assumption of minimality in terms of the cactus rank, and that cannot be relaxed with the _local_ condition of minimality by inclusion.
|
2303.17842 | Shepherding Slots to Objects: Towards Stable and Robust Object-Centric
Learning | Object-centric learning (OCL) aspires general and compositional understanding
of scenes by representing a scene as a collection of object-centric
representations. OCL has also been extended to multi-view image and video
datasets to apply various data-driven inductive biases by utilizing geometric
or temporal information in the multi-image data. Single-view images carry less
information about how to disentangle a given scene than videos or multi-view
images do. Hence, owing to the difficulty of applying inductive biases, OCL for
single-view images remains challenging, resulting in inconsistent learning of
object-centric representation. To this end, we introduce a novel OCL framework
for single-view images, SLot Attention via SHepherding (SLASH), which consists
of two simple-yet-effective modules on top of Slot Attention. The new modules,
Attention Refining Kernel (ARK) and Intermediate Point Predictor and Encoder
(IPPE), respectively, prevent slots from being distracted by the background
noise and indicate locations for slots to focus on to facilitate learning of
object-centric representation. We also propose a weak semi-supervision approach
for OCL, whilst our proposed framework can be used without any assistant
annotation during the inference. Experiments show that our proposed method
enables consistent learning of object-centric representation and achieves
strong performance across four datasets. Code is available at
\url{https://github.com/object-understanding/SLASH}. | Jinwoo Kim, Janghyuk Choi, Ho-Jin Choi, Seon Joo Kim | 2023-03-31T07:07:29Z | http://arxiv.org/abs/2303.17842v1 | # Shepherding Slots to Objects:
###### Abstract
Object-centric learning (OCL) aspires general and compositional understanding of scenes by representing a scene as a collection of object-centric representations. OCL has also been extended to multi-view image and video datasets to apply various data-driven inductive biases by utilizing geometric or temporal information in the multi-image data. Single-view images carry less information about how to disentangle a given scene than videos or multi-view images do. Hence, owing to the difficulty of applying inductive biases, OCL for single-view images remains challenging, resulting in inconsistent learning of object-centric representation. To this end, we introduce a novel OCL framework for single-view images, SLot Attention via SHepherding (SLASH), which consists of two simple-yet-effective modules on top of Slot Attention. The new modules, Attention Refining Kernel (ARK) and Intermediate Point Predictor and Encoder (IPPE), respectively, prevent slots from being distracted by the background noise and indicate locations for slots to focus on to facilitate learning of object-centric representation. We also propose a weak semi-supervision approach for OCL, whilst our proposed framework can be used without any assistant annotation during the inference. Experiments show that our proposed method enables consistent learning of object-centric representation and achieves strong performance across four datasets. Code is available at [https://github.com/object-understanding/SLASH](https://github.com/object-understanding/SLASH).
## 1 Introduction
_Object-centric learning_ (OCL) decomposes an image into a set of vectors corresponding to each distinct object to acquire object-wise representations [16]. Learning object-centric representation enables machines to perceive the visual world in a manner similar to humans. We recognize the world as a composition of _objects_[27] and extend the object-related knowledge to various environments [48]. Therefore, OCL enables a compositional understanding of an image and generalization for downstream tasks, such as visual reasoning [36] and object localization [6].
Mainstream OCL has adopted an autoencoding-based compositional generative model [10, 15, 35]. Slot Attention [35] is the most prominent technique for OCL, which
Figure 1: Results of training Slot Attention [35] with different seeds, which show inconsistent learning results. In the first trial, object-centric representations fail to grasp each distinct object due to the background noise. In the second, the model succeeds in distinguishing each different object from the background.
uses _slots_ as the intermediate representation bottlenecks. In the Slot Attention, randomly initialized slots compete with each other to occupy their attention regions in terms of pixels. Eventually, each slot attains object-centric representation by aggregating visual features according to the attention map between the slot and pixels.
Recently, OCL has been extended to multi-view images [42, 4] and videos [30, 46, 9]. Multi-view image [43] or video [13, 51, 14] datasets allow models to learn spatial geometry or temporal dynamics of objects through supplementary objective tasks such as novel view synthesis [42] and optical flow inference [30]. Consequently, these datasets provide additional information that enables the adoption of data-driven inductive biases, facilitating the learning of better object-centric representations.
In contrast, it is challenging to obtain data-driven inductive biases, such as geometric or temporal information, for single-view images. To address this problem, novel architectures, such as auto-regressive generative models [3, 10, 11, 15] and Transformer [53] for encoders [44] and decoders [45], have been proposed. However, owing to the absence of additional inductive biases, OCL for complex single-view images suffers from unstable training results.
This stability issue implies inconsistent learning of object-centric representation, that is, not all trials of training a model with the same architecture consistently succeed in distinguishing objects from the background (Fig. 1). The attention-leaking problem, or bleeding issue, can mislead a model to yield object-centric representations based on distorted attention maps. The bleeding issue is fatal for OCL because it is difficult to predict the behavior of a model, that is, whether a slot will seize a distinct object or an object entangled with a background.
To solve this bleeding issue, we propose a novel OCL framework, SLASH (**SL**ot **A**ttention via **S**H**epherding). SLASH resolves the bleeding by guiding the randomly initialized slots to successfully grasp objects 1) without being distracted by the background and 2) by keeping informed of the destination. These are accomplished by adding two simple-yet-effective modules, Attention Refining Kernel (ARK) and Intermediate Point Predictor and Encoder (IPPE), to the Slot Attention framework.
ARK is a single-channel single-layer convolutional kernel, designed to prevent slots from focusing on a noisy background. We adopt the Weights-Normalized Convolutional (WNConv) kernel, a learnable low-pass filter, as the kernel for ARK. This simple kernel refines the attention map between slots and pixels by reducing noise and solidifying object-like patterns.
IPPE serves as an indicator to nudge a slot to focus on the proper location. Thus, the slots can consistently update their representations without being confused by the background. IPPE consists of two submodules with simple MLPs. The first submodule predicts the position of an object in two-dimensional coordinates, and the second encodes the predicted coordinates into a high-dimensional vector.
Since IPPE needs to be trained to provide locational cues to slots, it is necessary to introduce positional labels. However, using fully annotated ground-truths is costly, particularly for densely-annotated labels such as object masks. Hence, we adopt a weak semi-supervision approach in which only a small subset of the dataset includes weak annotations, such as the centers of bounding boxes. We show that IPPE can be successfully trained with weakly semi-supervised learning and can be deployed under circumstances where no assistant ground-truth exists.
For a comprehensive study, we validate our method on numerous datasets, including CLEVR, CLEVRTEX, PTR, and MOVi. Moreover, we conduct 10 trials of training for each method, including the baselines and ours, to thoroughly evaluate the results. We estimate the performance of the models using three metrics: mean Intersection over Union (mIoU), Adjusted Rand Index (ARI), and foreground-ARI (fg-ARI) In particular, mIoU and ARI investigate whether the bleeding issue occurs by considering the background separation. A model is defined as being stable over the metrics when deviations are lower, and as being robust when averages are higher across all datasets. Experimental results demonstrate that our method achieves stable and robust OCL that prevents the bleeding issue.
Our main contributions are as follows:
* We observe OCL for single-view images suffers from the stability issue with inconsistent training results. To resolve this issue, we propose a novel framework, SLASH (**SL**ot **A**ttention via **S**H**epherding) consisting of two simple-yet-strong modules: ARK and IPPE.
* ARK is a learnable low-pass filter designed to prevent the bleeding issue where the attention of a slot leaks into a background.
* IPPE is introduced to inform slots of the regions to be focused. By leveraging weak semi-supervision, IPPE can inject positional information into a slot.
* We empirically prove SLASH achieves stable and robust OCL against four distinctive datasets. SLASH shows the best stability while outperforming the previous methods for all datasets over multiple metrics.
## 2 Related Works
### Object-Centric Representation Learning
A line of works for OCL adopts the scene reconstruction, where a model learns to decompose an image into several components without using any human-annotated ground-truths [7, 12, 16, 17, 23, 34, 50]. MONet [3] and IODINE
[15] proposed unsupervised auto-regressive approaches to sequentially disentangle object-centric representations from a scene. GENESIS [10, 11] improved object-centric learning by enabling interactions between slots while using an auto-regressive approach. Slot Attention [35] introduced an attention-based mechanism between slots and pixels, where slots parallelly and iteratively compete with each other to occupy their own territory in the pixel space. Slot Attention improved training speed and memory efficiency by enabling the parallel update of slots. Recently, SLATE [45] and DINOSAUR [44] adopted Transformer [53] as an encoder and decoder for Slot Attention, respectively, to learn object-centric representations over real-world images.
Several studies have adapted _novel view synthesis_ (NVS) to OCL [4, 42, 49, 20]. ROTS [4] proposed an approach to infer 3D disentangled object representation using 3D-to-2D perspective projection [19] with multi-view images. Other studies [42, 49] directly applied Slot Attention for multi-view images and demonstrated that using multi-view images with NVS significantly improves OCL performance.
OCL for videos has been actively studied [26, 23, 30, 46, 52]. SAVi [30], SAVi++ [9] and STEVE [46] extended Slot Attention to videos, in which a model iteratively infers object-centric representations across a sequence of images. With a sequence of images, models can learn to distinguish objects from backgrounds by referring to the temporal consistency and dynamics of the objects. In this study, we focus on a more challenging case, OCL for single images, where less information about an object and its background is provided than in OCL for multi-view images and videos.
### Weakly Supervised OCL
In weakly supervised learning, training is conducted with human-annotated labels that provide insufficient or indirect information but are pertinent to obtaining the target outputs. In OCL, GFS-Net [40] viewed the learning of object representations as a combination of _what and where_ problem. GFS-Net is first trained with images containing only a single object and then fine-tuned with images containing multiple objects to resolve the what and where problem. PriSMONet [8] used shape priors in weakly supervised learning for multi-object 3D scene decomposition over RGB-D images. Furthermore, in OCL for videos, SAVi [30] and SAVi++ [9] used position information, such as the center of a mass, bounding box, or object mask, for each object in the first frame of the given video to deal with the video-level OCL. In this work, motivated by [9, 30], we utilize the point information of objects in a scene to iron out the subjectiveness problem in the image-level OCL.
### Semi-Supervised Learning
In deep learning studies, any type of human-annotated labels, even with coarse or sparse information, can help enhance the performance of models. However, it is unfeasible to place ground-truths everywhere during the training and testing phases. Several semi-supervision studies have investigated how to leverage the lack of labels to solve image classification [2, 33, 38, 18], object detection [5, 22, 47, 56], and semantic/instance segmentation [1, 29, 32, 39] problems.
In this study, we adopt a novel approach for OCL, where models can only use weak supervision labels for a fraction of a given dataset. The most comparable study for different tasks is Point DETR [5]. Point DETR focused on weakly semi-supervised object detection using a dataset consisting of a few fully-annotated images with bounding boxes and object category labels, and a rich amount of weakly-annotated images with center points and object category labels. However, instead of fully-annotated images with semantic labels, our method only uses a few amounts of weakly-annotated images with point-level labels and a significant amount of non-annotated images.
## 3 Method
### Preliminary: Slot Attention
In Slot Attention [35], the object-centric representation is implemented with the concept of \(\texttt{slots}\in\mathbb{R}^{K\times D_{slot}}\), which is a set of \(K\) vectors of dimension \(D_{slot}\). The slots are initialized by a Gaussian distribution with a learnable mean \(\mu\) and sigma \(\sigma\), and updated over \(T\) iterations by the Slot Attention module. The slots are then decoded into the target reconstruction image.
We first describe the overall procedure of how Slot Attention is trained for the completeness of this study. Given an image, a convolutional neural network (CNN) encoder produces a visual feature map of dimension \(HW\times D_{enc}\), where \(H\) and \(W\) are the height and width of an input image. The Slot Attention module takes \(\texttt{slots}\) and the visual feature map, called \(\texttt{inputs}\), then projects them to dimension \(D\) by a linear transformation \(k\) for slots and \(q\), \(v\) for \(\texttt{inputs}\in\mathbb{R}^{HW\times D_{enc}}\). Dot-product attention is applied to generate an attention map, \(\texttt{attn}\), with query-wise normalized coefficients where slots compete with each other to occupy the more relevant pixels of the visual feature map (Eq. (1) [35]).
\[\begin{split}&\texttt{attn}_{i,j}:=\frac{\exp(M_{i,j})}{\Sigma_{l} \exp(M_{i,l})},\quad where\\ & M:=\frac{1}{\sqrt{D}}k(\texttt{inputs})\cdot q(\texttt{slots}) ^{T}\in\mathbb{R}^{HW\times K}.\end{split} \tag{1}\]
The projected visual feature map weighted by the attention map coefficients (Eq. (2) [35]) is aggregated to produce the updated slots, \(\texttt{updates}\). As the Slot Attention module runs iteratively, \(\texttt{slots}\) can gradually update their representation. Each updated slot is then decoded into an RGB
A image using a spatial broadcast decoder [55], where the weights are shared across slots. The decoded images are blended into a single image using alpha masks to yield the final reconstructed image. The mean squared error (MSE) between the original input image and the predicted reconstruction image is chosen for the objective function so that the overall training follows unsupervised learning.
\[\mathtt{updates}\coloneqq W^{T}\cdot v(\mathtt{inputs})\in\mathbb{R}^{K \times D}, \tag{2}\] \[where\quad W_{i,j}:=\frac{\mathtt{attn}_{i,j}}{\Sigma_{l=1}^{N} \mathtt{attn}_{l,j}}.\]
### SLot Attention via SHepherding (SLASH)
In this work, our goal is to achieve stable and robust OCL in single-view images by preventing the bleeding issue incurred when slots are distracted by background noise. To achieve this goal, the model needs to provide guidance to the slots about where to focus or not. To this end, we propose a novel OCL framework, SLASH (**SL**ot **A**ttention via **SH**epherding), which steers slots to correctly seize objects using two newly introduced modules: Attention Refining Kernel (ARK) and Intermediate Point Predictor and Encoder (IPPE). ARK guards and stabilizes slots against background noise by reducing the noise and solidifying object-like patterns in the attention map between the slots and pixels. IPPE guides slots towards the area where an object is likely to exist by providing positional indications to the slots. Using these two simple-yet-effective modules that shepherd slots to the desired region, SLASH accomplishes stable and robust OCL. The overall architecture of SLASH is shown in Fig. 2.
#### 3.2.1 Attention Refining Kernel
Attention Refining Kernel (ARK) is designed to prevent slots from being distracted by background noise by refining the attention map between slots and visual features. As depicted in the upper part of Fig. 3, we can observe that Slot Attention [35] generates attention maps with salt-and-pepper-like noise around the objects. Noisy attention maps are likely to provoke unstable learning of object-centric representations. We address this issue by introducing an inductive bias for _local density_ of objects. The bias for local density assumes that the density of the attention values should be higher near an object and lower outside the object. Thus, the inductive bias is materialized using the Weights-Normalized Convolutional (WNConv) kernel which aims to refine an attention map by reducing noise and solidifying object-like patterns around objects. WNConv kernel is a single-channel single-layer convolutional network trained under the constraints that the sum of the kernel weights equals \(1\) while maintaining every weight greater than or equal to \(0\). With these constraints, the WNConv kernel serves as a low-pass filter, smoothing the attention map as shown in the lower part of Fig. 3. As depicted in Fig. 2 (b), ARK is applied to the logit values of the attention map.
#### 3.2.2 Intermediate Point Predictor and Encoder
Intermediate Point Predictor and Encoder (IPPE) expedites learning "where objects exist". In order for IPPE to understand the location of objects, it is necessary to introduce external supervision related to the object positions. To train our model practically, we utilize a weak semi-supervision approach. We use the low-cost information, center points
Figure 2: (a) Overall architecture of the proposed framework. Upon Slot Attention (modules without color fillings), we add Attention Refining Kernel (ARK, filled with orange color) and Intermediate Point Predictor and Encoder (IPPE, filled with yellow color). (b) Within the Slot Attention module, we insert ARK before the softmax function. (c) IPPE predicts 2D point coordinates and encode the coordinates into vectors of dimension \(D_{slots}\). The point-encoded vectors are then added to the slots so that the slots can incorporate position information.
of bounding boxes, as the weak supervision among the possible positional cues. Furthermore, instead of using a fully annotated dataset, we assume that only a fraction (10%) of the dataset and not all objects in a given image (75%) have labels. The following describes how IPPE leverages weak semi-supervision.
IPPE consists of two modules, a point predictor and a point encoder, as shown in Fig. 2 (c). The point predictor is a 3-layer MLP that predicts 2D point coordinates of objects from slots. The point encoder, also a 3-layer MLP, encodes the point coordinates into \(D_{slot}\) dimensional vectors, which are added to the original slots. The updated slots can now contain information about the location of objects and become less likely to wander around the background.
The point predictor is trained by weak semi-supervision with an auxiliary point loss which is MSE between the predicted and ground-truth coordinates. Hungarian algorithm [31] is used to match the predictions and ground-truths. Fig. 4 shows the results of the point predictor, where the predictions get closer to objects through slot updates.
The point encoder is trained using the image reconstruction loss in a self-supervised manner [35]. As the reconstruction loss is shared with the Slot Attention module, the point encoder generates position-encoded vectors that are well-aligned with the Slot Attention module. It is worth noting that the point encoder can take either ground-truths as inputs, if available, or the predicted coordinates from the point predictor, otherwise.
Our method differs from the previous weakly supervised OCL method for videos [30] in that we use weak semi-annotations as the ground-truth labels for the module (Point Predictor) as well as the input for the module (Point Encoder). Conversely, SAVi exploits weak supervision to initialize slots using an MLP with the ground-truth position information as its input. That weakly supervised slot initialization shows outstanding performance for video OCL; however, it is limited in the sense that the model requires labels for all samples, even during inference time. This limitation arises from the lack of preparation for the cases where the ground-truths are not provided or are partially provided in both the training and inference phases. By virtue of the design of IPPE, our method can be trained in a weakly semi-supervised manner and can be deployed under circumstances where no ground-truth exists. In the following section, we validate the proposed method against a SAVi-like OCL method for images.
## 4 Experiments
### Experimental Setup
**Task & Metrics** To validate the effectiveness of our method, we conduct experiments on the object discovery task following the previous OCL works [9, 10, 11, 30, 40]. In the object discovery task, a model is required to cluster pixels into object segments. Though the task seems similar to the instance segmentation, the object discovery differs from the image segmentation in that it does not require semantic classes or captions for each segmentation.
To evaluate the models, we use mean Intersection over Union (mIoU) and Adjusted Rand Index (ARI) [41]. Similar to [37, 10], we avoid focusing on foreground-ARI (fg-ARI) where the annotation for the background is excluded
Figure 4: Visualization of point predictions by Intermediate Point Predictor and Encoder (IPPE). The leftmost column shows the input images, and the right three columns show the prediction results by IPPE for each slot. Each number stands for the order of iteration \(T\). Best viewed in color.
Figure 3: Visualization of the results by Attention Refining Kernel (ARK). Each colored box shows an attention map between a slot and pixels. The upper part of a rectangular box is a visualization of the attention map on the input image. The lower part represents the attention map in grayscale. The top and bottom row, split by the dotted line, corresponds to the attention map before and after applying ARK, respectively. One can observe that ARK refines the scattered attention around objects so that slots can escape from the background noise.
from the evaluation. Fig. 5 demonstrates that fg-ARI cannot describe the stability issue, such as the bleeding. On the other hand, the stability issue can be demonstrated using mIoU and ARI since the annotation of backgrounds is concerned with those metrics.
**Baselines** We compare SLASH with Slot Attention (SA) [35], GENESIS-v2 (GenV2) [11], and weakly supervised Slot Attention (WS-SA). GenV2 is a recent study on OCL in single-view images, derived from GENESIS [10]. The official GenV21 is used in our datasets. Additionally, we compare SLASH with WS-SA, a simple variant of SA, equipped with a weakly supervised slot initializer. The WS-SA initializes each slot using an MLP, which takes the point coordinates of an object as input by following SAVi [30]. Unlike SAVi, we assume the datasets do not contain labels for the precise number of objects in an image. Therefore, we initialize the surplus slots with randomly sampled values from the Gaussian distribution as opposed to SAVi which initializes surplus slots that receive no ground-truth point coordinates with \((-1,-1)\) to deactivate the slots.
Footnote 1: [https://github.com/applied-ai-lab/genesis](https://github.com/applied-ai-lab/genesis)
**Datasets** The experiments cover four multi-object datasets: CLEVR6 [24], CLEVRTEX [28], PTR [21] and MOVi-C [14]. CLEVR was designed to assess the models' comprehension of compositional elements, such as visual reasoning. CLEVR6 contains 35K train and 7.5K validation samples consisting of scenes with three to six objects [25, 35]. CLEVRTEX is a variant of CLEVR, having complicated shapes, textures, materials, and backgrounds. CLEVRTEX contains 50K samples, which we split into 40K train and 10K validation set. PTR, which contains 52K train and 9K validation samples, is a visual reasoning dataset in which objects have part-whole hierarchies. MOVi-C is a synthetic video dataset comprising realistic and textured daily objects and backgrounds. We collected the first frames of the randomly rendered videos that have scenes with at most six objects. Our MOVi-C dataset contains 39K train and 9K validation samples. The supplementary material contains details of the data collection process.
**Training** All models are trained by the MSE reconstruction loss in an autoencoding fashion. The training environments for WS-SA and SLASH are the same as those of SA [35], while those of GenV2 follow the official paper [11]. The number of slots, \(K\), is set to 7 for CLEVR6, PTR, and MOVi-C, and 11 for CLEVRTEX.
### Object Discovery
The quantitative results on the object discovery task are summarized in Tab. 1. The bleeding case causes significant degradation of mIoU and ARI, that is, the metrics have higher deviations and lower averages. We argue that
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & mIoU & ARI & fg-ARI \\ \hline \hline \multicolumn{5}{c}{CLEVR6} \\ \hline SA [35] & 49.5 \(\pm\) 20.5 & 63.0 \(\pm\) 42.1 & 97.1 \(\pm\) 1.6 \\ SA\(\dagger\) & 46.4 \(\pm\) 24.6 & 59.2 \(\pm\) 48.2 & 98.3 \(\pm\) 0.9 \\ WS-SA\({}^{*}\) & 61.8 \(\pm\) 4.3 & 90.7 \(\pm\) 2.0 & 93.3 \(\pm\) 1.6 \\
**SLASH\({}^{*}\)** & 63.6 \(\pm\) 4.3 & 90.3 \(\pm\) 4.3 & 94.2 \(\pm\) 1.3 \\ \hline \multicolumn{5}{c}{CLEVRTEX} \\ \hline SA & 22.2 \(\pm\) 4.3 & 38.1 \(\pm\) 12.5 & 52.1 \(\pm\) 5.9 \\ MONet [3]\(\ddagger\) & 19.8 \(\pm\) 1.0 & — & 36.7 \(\pm\) 0.9 \\ IODINE [15]\(\ddagger\) & 29.2 \(\pm\) 0.8 & — & 59.2 \(\pm\) 2.2 \\ GenV2 [11]\(\ddagger\) & 7.9 \(\pm\) 1.5 & — & 31.2 \(\pm\) 12.4 \\ WS-SA\({}^{*}\) & 22.4 \(\pm\) 4.5 & 36.0 \(\pm\) 7.2 & 52.3 \(\pm\) 7.3 \\
**SLASH\({}^{*}\)** & 34.7 \(\pm\) 5.3 & 59.4 \(\pm\) 11.5 & 61.9 \(\pm\) 6.4 \\ \hline \multicolumn{5}{c}{PTR} \\ \hline SA & 17.6 \(\pm\) 14.7 & 19.6 \(\pm\) 29.8 & 44.5 \(\pm\) 18.8 \\ GenV2 & 28.5 \(\pm\) 11.3 & 41.0 \(\pm\) 25.4 & 56.8 \(\pm\) 13.0 \\ WS-SA\({}^{*}\) & 23.8 \(\pm\) 15.5 & 21.4 \(\pm\) 33.6 & 52.9 \(\pm\) 11.6 \\
**SLASH\({}^{*}\)** & 44.1 \(\pm\) 9.6 & 67.9 \(\pm\) 22.6 & 59.0 \(\pm\) 3.2 \\ \hline \multicolumn{5}{c}{MOVi} \\ \hline SA & 23.0 \(\pm\) 9.8 & 25.9 \(\pm\) 20.3 & 48.7 \(\pm\) 7.0 \\ GenV2 & 10.8 \(\pm\) 1.1 & 3.6 \(\pm\) 0.2 & 47.1 \(\pm\) 5.8 \\ WS-SA\({}^{*}\) & 21.6 \(\pm\) 11.5 & 22.8 \(\pm\) 21.3 & 46.2 \(\pm\) 8.5 \\
**SLASH\({}^{*}\)** & 27.7 \(\pm\) 5.9 & 34.6 \(\pm\) 13.5 & 51.9 \(\pm\) 4.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results over the object discovery task (mean \(\pm\) std for 10 trials, reported in \(\%\)). * indicates that the model is trained by weakly semi-supervised learning. All models performed inference with no assistant label. \(\dagger\) is for the results by [35] which uses a center crop. \(\ddagger\) is for the results from [28] which conducted three trials of training for each method with a center crop.
Figure 5: Demonstration of the bleeding issue. The top row of each case shows decoded images by a model [35] and the bottom row shows segmentation masks corresponding to the decoded images. At the bottom of each case, evaluation results by three different metrics are reported in \(\%\). One can observe that fg-ARI cannot represent the bleeding case in contrast to ARI and mIoU.
a model is stable when it has lower deviations and robust when it has higher averages across all datasets. Thus it is crucial to prevent the bleeding case for stable and robust OCL. SLASH records the highest average value of mIoU and ARI for almost all datasets except for ARI on CLEVR6 with a minimal difference. In addition, SLASH demonstrates lower standard deviation values of mIoU and ARI across overall datasets. To sum up, SLASH scores the highest and the most consistent performance across all datasets, achieving stable and robust OCL. We provide abundant qualitative results in the supplementary material due to spatial constraints.
### Ablation Studies
#### 4.3.1 ARK and IPPE
To prove the effectiveness of ARK and IPPE, we conduct an ablation study on those modules by training SA [35] with or without each module. Tab. 2 demonstrates that SLASH benefits from both ARK and IPPE.
We observe that ARK apparently stabilizes the model, resulting in low standard deviation values for overall datasets. IPPE boosts the performance of both SA and '+ARK'. Although the standard deviation values tend to be high due to the absence of ARK, we find that IPPE aids in learning against a complicated dataset, i.e. MOVi, where slots struggle to grasp the visual patterns of objects. We argue that the positional information given by IPPE can aid the slots in binding appropriate objects more effectively.
#### 4.3.2 Kernels in ARK
In our method, ARK is applied as a low-pass filter with a WNConv network to eliminate the noise and strengthen the object-like patterns in the attention maps. In this study, we look into the alternatives for the WNConv. Firstly, we compare our WNConv with the global smoothing scheme where we increase the temperature \(\tau\) The logit values of an attention map are divided by \(\tau\) so that the larger \(\tau\) yield a more smoothed attention map. Secondly, we apply a representative smoothing technique, Gaussian filter [54]. Lastly, we conduct experiments on a Convolutional (Conv) kernel without any constraints like weight normalization.
Tab. 3 demonstrates the results of the possible kernels for ARK. We observe that the SA models with high temperature and the Gaussian smoothing outperform the original SA. This result implies that simple global and local smoothing can help a model boost its performance by erasing noises in the attention maps. For the Conv kernel, we observe that the overall performance is worse than the other kernels. We argue that the poor performance of the Conv kernel is incurred due to the high degree of freedom for the single-channel single-layer network. In contrast, owing to the reduced degree of freedom, WNConv consistently performs well by recording high average values than the alternatives.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & mIoU & ARI & fg-ARI \\ \hline \hline \multicolumn{6}{c}{CLEVR6} \\ \hline SA [35] & 49.5 \(\pm\) 20.5 & 63.0 \(\pm\) 42.1 & 97.1 \(\pm\) & 1.6 \\ + ARK & 64.1 \(\pm\) 3.1 & 89.9 \(\pm\) 2.2 & 95.3 \(\pm\) & 1.3 \\ + IPPE & 57.8 \(\pm\) 14.6 & 84.9 \(\pm\) 27.9 & 95.7 \(\pm\) & 1.0 \\ + ARK + IPPE & 63.6 \(\pm\) 4.3 & 90.3 \(\pm\) 4.3 & 94.2 \(\pm\) & 1.3 \\ \hline \multicolumn{6}{c}{CLEVR7EX} \\ \hline SA & 22.2 \(\pm\) 4.3 & 38.1 \(\pm\) 12.5 & 52.1 \(\pm\) 5.9 \\ + ARK & 31.4 \(\pm\) 6.6 & 55.6 \(\pm\) 13.2 & 57.8 \(\pm\) 7.7 \\ + IPPE & 25.1 \(\pm\) 7.4 & 40.4 \(\pm\) 15.6 & 54.9 \(\pm\) 7.3 \\ + ARK + IPPE & 34.7 \(\pm\) 5.3 & 59.4 \(\pm\) 11.5 & 61.9 \(\pm\) 6.4 \\ \hline \multicolumn{6}{c}{PTR} \\ \hline SA & 17.6 \(\pm\) 14.7 & 19.6 \(\pm\) 29.8 & 44.5 \(\pm\) 18.8 \\ + ARK & 43.8 \(\pm\) 3.0 & 62.3 \(\pm\) 19.4 & 60.4 \(\pm\) 3.2 \\ + IPPE & 38.4 \(\pm\) 12.8 & 58.4 \(\pm\) 31.3 & 58.5 \(\pm\) 3.1 \\ + ARK + IPPE & 44.1 \(\pm\) 9.6 & 67.9 \(\pm\) 22.6 & 59.0 \(\pm\) 3.2 \\ \hline \multicolumn{6}{c}{MOVi} \\ \hline SA & 23.0 \(\pm\) 9.8 & 25.9 \(\pm\) 20.3 & 48.7 \(\pm\) 7.0 \\ + ARK & 26.2 \(\pm\) 6.1 & 33.2 \(\pm\) 13.7 & 51.0 \(\pm\) 3.7 \\ + IPPE & 27.2 \(\pm\) 7.9 & 36.2 \(\pm\) 16.8 & 50.8 \(\pm\) 5.7 \\ + ARK + IPPE & 27.7 \(\pm\) 5.9 & 34.6 \(\pm\) 13.5 & 51.9 \(\pm\) 4.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the ablation studies on the modules of SLASH (mean \(\pm\) std for 10 trials, reported in \(\%\)).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & mIoU & ARI & fg-ARI \\ \hline \hline \multicolumn{6}{c}{CLEVR6} \\ \hline SA (\(\tau=1\)) [35] & 49.5 \(\pm\) 20.5 & 63.0 \(\pm\) 42.1 & 97.1 \(\pm\) 1.6 \\ SA (\(\tau=2\)) & 52.7 \(\pm\) 24.6 & 66.4 \(\pm\) 40.6 & 95.8 \(\pm\) 2.8 \\ Gaussian & 50.1 \(\pm\) 14.9 & 75.6 \(\pm\) 29.4 & 93.2 \(\pm\) 2.4 \\ Conv & 52.7 \(\pm\) 19.6 & 76.8 \(\pm\) 32.0 & 92.5 \(\pm\) 2.2 \\ WNConv & 64.1 \(\pm\) 3.1 & 89.9 \(\pm\) 2.2 & 95.3 \(\pm\) 1.6 \\ \hline \multicolumn{6}{c}{CLEVRTEX} \\ \hline SA (\(\tau=1\)) & 22.2 \(\pm\) 4.3 & 38.1 \(\pm\) 12.5 & 52.1 \(\pm\) 5.9 \\ SA (\(\tau=2\)) & 25.6 \(\pm\) 2.0 & 39.6 \(\pm\) 3.7 & 54.9 \(\pm\) 2.7 \\ Gaussian & 26.0 \(\pm\) 8.5 & 43.5 \(\pm\) 14.6 & 55.9 \(\pm\) 11.0 \\ Conv & 24.8 \(\pm\) 6.0 & 42.5 \(\pm\) 9.7 & 54.3 \(\pm\) 11.1 \\ WNConv & 31.4 \(\pm\) 6.6 & 55.6 \(\pm\) 13.2 & 57.8 \(\pm\) 7.7 \\ \hline \multicolumn{6}{c}{PTR} \\ \hline SA (\(\tau=1\)) & 17.6 \(\pm\) 14.7 & 19.6 \(\pm\) 29.8 & 44.5 \(\pm\) 18.8 \\ SA (\(\tau=2\)) & 34.3 \(\pm\) 12.0 & 56.6 \(\pm\) 26.5 & 50.0 \(\pm\) 7.9 \\ Gaussian & 20.6 \(\pm\) 15.1 & 20.0 \(\pm\) 29.9 & 53.8 \(\pm\) 10.2 \\ Conv & 12.4 \(\pm\) 9.7 & 11.6 \(\pm\) 13.1 & 32.1 \(\pm\) 26.0 \\ WNConv & 43.8 \(\pm\) 3.0 & 62.3 \(\pm\) 19.4 & 60.4 \(\pm\) 3.2 \\ \hline \multicolumn{6}{c}{MOVi} \\ \hline SA (\(\tau=1\)) & 23.0 \(\pm\) 9.8 & 25.9 \(\pm\) 20.3 & 48.7 \(\pm\) 7.0 \\ SA (\(\tau=2\)) & 27.1 \(\pm\) 5.5 & 28.7 \(\pm\) 12.5 & 54.6 \(\pm\) 1.8 \\ Gaussian & 25.5 \(\pm\) 10.8 & 33.7 \(\pm\) 21.5 & 48.5 \(\pm\) 7.1 \\ Conv & 25.5 \(\pm\) 8.8 & 28.2 \(\pm\) 18.0 & 53.0 \(\pm\) 2.4 \\ WNConv & 27.2 \(\pm\) 6.1 & 33.2 \(\pm\) 13.7 & 57.0 \(\pm\) 3.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of ablation studies on the alternatives of the kernel for ARK (mean \(\pm\) std for 10 trials, reported in \(\%\)). SA stands for the baseline Slot Attention [35]. \(\tau\) is a temperature coefficient in the attention mechanism.
### Analysis of Bleeding Issues
In this section, we investigate the cases where the baseline, Slot Attention [35], fails to prevent the bleeding issue and SLASH succeeds in that. Fig. 6 shows the results by the baseline and SLASH. Here, we present our analysis of the failure cases for each dataset.
As depicted in Fig. 6 (a), for CLEVR6, SA encounters the bleeding issue due to the simplicity of the background. As shown in the top-left image, CLEVR6 only contains simple white backgrounds without any complicated pattern. Since the background has almost no information, a model is likely to get into the trivial solution that every slot binds to the piece of the background.
As shown in Fig. 6 (b), stripping frequently occurs for PTR. The striping issue is a phenomenon where each slot is trapped into a simple and meaningless stripe pattern of an image. We assume the striping issue occurs as a model tends to focus on positional embedding rather than object-related patterns that are difficult for the model to figure out.
CLEVRTEX contains a variety of complex backgrounds as shown in Fig. 6 (c). In SA, slots tend to be attracted by the explicit eye-catching patterns on backgrounds. We argue that this phenomenon is attributed to the design of SA which focuses on the versatility towards domain- and task-agnostic models. This design principle results in the lack of inductive biases and locational information for discovering objects rather than backgrounds.
As SLASH is designed to not only eliminate background noise and solidify object-like patterns in the attention map, but also encode the positional information into the slots, we observe that SLASH is robust against the aforementioned failure cases in various datasets.
## 5 Conclusion and Limitation
In this paper, we observed that OCL for single-view images has a stability issue that some training trials end up with having the bleeding issue. We attributed this problem to the lack of inductive bias about the appearance of objects and additional cues like positional information. We presented a new OCL framework for single-view images, called SLASH, acting as a shepherd, guiding the slots to the correct destination without being distracted by the background noise. To accomplish this, we proposed two simple modules: ARK for smoothing the noise in the attention and IPPE for inducing positional information through a weak semi-supervision. Experimental results show the effectiveness of our method, which achieved strong and consistent results for stable and robust OCL.
Although our model shows impressive results on various challenging synthetic datasets, extending our method to real-world image datasets remains a problem and promising path for future work. There are several potential challenges to achieving real-world OCL: understanding backgrounds, controlling a large number of objects, handling the representation of an object having intricate shape and texture, and designing an efficient model that can process high-resolution images. We expound on additional limitations of our study in the supplementary material.
## 6 Acknowledgement
This research was supported and funded by Artificial Intelligence Graduate School Program under Grant 2020-0-01361, Artificial Intelligence Innovation Hub under Grant 2021-0-02068, and the Korean National Police Agency. [Project Name: XR Counter-Terrorism Education and Training Test Bed Establishment/Project Number: PR08-04-000-21].
Figure 6: Qualitative evaluation of SLASH compared to the baseline, Slot Attention (SA) [35]. The images in the top left of each dataset section are randomly selected inputs fed to the models. The bottom row shows the segmentation masks generated from attention maps of slots. The numbers in the top right of each section are for the qualitative results of each model over each dataset (reported in \(\%\)). |
2305.19539 | Few-shot Class-incremental Audio Classification Using Dynamically
Expanded Classifier with Self-attention Modified Prototypes | Most existing methods for audio classification assume that the vocabulary of
audio classes to be classified is fixed. When novel (unseen) audio classes
appear, audio classification systems need to be retrained with abundant labeled
samples of all audio classes for recognizing base (initial) and novel audio
classes. If novel audio classes continue to appear, the existing methods for
audio classification will be inefficient and even infeasible. In this work, we
propose a method for few-shot class-incremental audio classification, which can
continually recognize novel audio classes without forgetting old ones. The
framework of our method mainly consists of two parts: an embedding extractor
and a classifier, and their constructions are decoupled. The embedding
extractor is the backbone of a ResNet based network, which is frozen after
construction by a training strategy using only samples of base audio classes.
However, the classifier consisting of prototypes is expanded by a prototype
adaptation network with few samples of novel audio classes in incremental
sessions. Labeled support samples and unlabeled query samples are used to train
the prototype adaptation network and update the classifier, since they are
informative for audio classification. Three audio datasets, named NSynth-100,
FSC-89 and LS-100 are built by choosing samples from audio corpora of NSynth,
FSD-MIX-CLIP and LibriSpeech, respectively. Results show that our method
exceeds baseline methods in average accuracy and performance dropping rate. In
addition, it is competitive compared to baseline methods in computational
complexity and memory requirement. The code for our method is given at
https://github.com/vinceasvp/FCAC. | Yanxiong Li, Wenchang Cao, Wei Xie, Jialong Li, Emmanouil Benetos | 2023-05-31T03:59:47Z | http://arxiv.org/abs/2305.19539v1 | Few-shot Class-incremental Audio Classification Using Dynamically Expanded Classifier with Self-attention Modified Prototypes
###### Abstract
Most existing methods for audio classification assume that the vocabulary of audio classes to be classified is fixed. When novel (unseen) audio classes appear, audio classification systems need to be retrained with abundant labeled samples of all audio classes for recognizing base (initial) and novel audio classes. If novel audio classes continue to appear, the existing methods for audio classification will be inefficient and even infeasible. In this work, we propose a method for few-shot class-incremental audio classification, which can continually recognize novel audio classes without forgetting old ones. The framework of our method mainly consists of two parts: an embedding extractor and a classifier, and their constructions are decoupled. The embedding extractor is the backbone of a ResNet based network, which is frozen after construction by a training strategy using only samples of base audio classes. However, the classifier consisting of prototypes is expanded by a prototype adaptation network with few samples of novel audio classes in incremental sessions. Labeled support samples and unlabeled query samples are used to train the prototype adaptation network and update the classifier, since they are informative for audio classification. Three audio datasets, named NSynth-100, FSC-89 and LS-100 are built by choosing samples from audio corpora of NSynth, FSD-MIX-CLIP and LibriSpeech, respectively. Results show that our method exceeds baseline methods in average accuracy and performance dropping rate. In addition, it is competitive compared to baseline methods in computational complexity and memory requirement. The code for our method is given at [https://github.com/vinceasvp/FCAC](https://github.com/vinceasvp/FCAC).
Few-shot learning, incremental learning, audio classification, self-attention mechanism, modified prototype
## I Introduction
UDIO classification is a task to recognize different sounds in the environment. Audio classification has been an active research topic with a wide range of applications, including content analysis and retrieval of multimedia [1, 2], audio captioning [3], traffic surveillance [4], bio-acoustic monitoring [5, 6], automatic assisted driving [7], and smart home [8].
### _Related Works_
Many efforts were made on audio classification with the focus on designing discriminative features (e.g., embeddings) or training effective classifiers (e.g., deep neural networks) [9]-[13]. The assumption of most works on audio classification is that the number and type of audio classes to be classified are known in advance. That is, the vocabulary of audio classes is pregiven and fixed. Although these methods are satisfactory in accuracy, they still have shortcomings. For example, the trained classification systems can only recognize the audio classes that are contained in the predefined vocabulary. To recognize novel audio classes, the classification systems have to be retrained using lots of labeled samples of the base and novel audio classes. Retraining the classification systems is laborious and time-consuming for end-users. If the samples of base audio classes are no long available, finetuning the classification system with samples of novel audio classes will make the classification system quickly forget the knowledge of base audio classes, i.e., the problem of catastrophic forgetting [14]. However, the vocabulary of audio classes dynamically changes or expands in many application scenarios, since end-users need to customize the vocabulary according to their preferences. For instance, the end-users often add novel audio classes to audio classification system, such as abnormal sound events, rare musical instruments, audio wake-up words, or animal sounds.
To reduce the demand for the amount of training samples during the construction of classification system, some works were done on few-shot audio classification [15]-[18] and sound event detection [19, 20]. In these methods, the classification system can recognize novel audio classes from a few training samples. Metric-based methods [21] and optimization-based methods [22] are two main lines for few-shot learning. It has been shown that the metric-based method with prototypical networks [21] obtains better results for audio [17]-[19]. Besides the methods based on few-shot learning, Xie et al. [23] investigated zero-shot learning for audio classification using semantic embeddings that are learned from textual labels and sentence descriptions of audio classes. They aimed to recognize the audio classes with only semantic side information and without training samples. Although these methods above can recognize novel audio classes with only few samples, they do not maintain the audio class vocabulary of training samples. As a result, these methods cannot remember the knowledge of old audio classes when they recognize novel ones.
To continually recognize novel classes without forgetting base classes, some researchers proposed the methods based on incremental learning (continual learning, lifelong learning) [24, 25]. There are two main streams in recent works, namely the multi-class incremental learning [25]-[27] and the multi-task incremental learning [28, 29]. The incremental learning has
been applied in sound event detection [30] and classification [31]-[34] to recognize new sound events without forgetting old ones. Although the methods based on incremental learning can recognize both novel classes and base classes, they still have drawbacks. For example, they typically require to retrain (or update) the classification system with large amounts of labeled samples of novel classes for recognizing novel classes. The requirement for a large number of training samples of novel audio classes is obviously impractical and even infeasible for end-users when the training samples of novel audio classes are few or the computing resources are limited.
As a newly-emerging learning paradigm inspired by cognitive learning, Few-Shot Class-Incremental Learning (FSCIL) is recently proposed [35, 36]. The FSCIL-based methods aim to dynamically expand the capability of the classification system with few training samples in incremental sessions. Although they can combine the strongpoints of the methods of few-shot learning and incremental learning, they confront two challenges that are beyond previous learning paradigms. First, finetuning the classification system with training samples of novel classes leads to catastrophically forgetting the knowledge of base classes. Second, updating with few training samples of novel classes makes the classification system overfit the novel classes. To tackle these two problems above, a decoupled learning scheme is proposed, where a well-trained initial system consists of a feature extractor and a classifier [35, 36, 37, 38, 39, 40, 41]. For example, Tao et al. [35] designed a neural gas network to preserve the feature topology in the base and novel classes. Gidaris et al. [37] proposed a method of Dynamical Few-Shot Learning (DFSL) by an attention-based weight generator and a cosine-similarity based classifier. Mazumder et al. [38] reduced the complexity of network and alleviated the overfitting problem by squeezing parameters of the neural network. The techniques of Continual Evolution of Classifier (CEC) [39] and knowledge distillation [40] make the classification system memorize base classes and generalize to novel classes. Yang et al. [41] designed a dynamic support network to regularize feature space for overcoming the problems of forgetting and overfitting. Wang et al. applied the DFSL [42] to audio classification. Xie et al. proposed an audio classification method via Adaptively Refined Prototypes (ARP), where a random episodic training strategy and a dynamic relation projection module are used to produce prototypes [43].
Although these efforts above promote the development of the FSCIL technique and benefit to Few-shot Class-incremental Audio Classification (FCAC), we argue that three critical aspects for further performance improvement are largely ignored. First, the unlabeled query samples are not explicitly considered in training and updating the classification system in incremental sessions. Like the labeled support samples, the unlabeled query samples are also informative for updating the classification system. Second, existing works have not paid enough attention to the training of the initial classification system. Current training strategy needs to be optimized for improving the generalization ability of the initial classification system. Third, the knowledge of classifier prototypes in prior incremental sessions is not effectively utilized to update the classifier prototypes in current incremental session.
### _Our Contributions_
Based on the descriptions above, it is known that many works concerning the FSCIL have been done in the field of computer vision and there are still areas for improvement in these existing works (e.g., the three aspects given in the last paragraph of Section I.1). In addition, the FCAC work has not been carried out so far, which motivates us to address the FCAC problem in this paper. The FCAC task here and the FSCIL task in computer vision have some similarities. For example, they aim to obtain a classification system that can continuously recognize new classes without forgetting the old ones. However, there are differences between them. For instance, the implementation details of these two tasks are different. Specifically, the input features, training strategies and architectures of the classification systems, and performance metrics used in these two tasks are different.
We propose a method for FCAC, which can recognize novel audio classes using few training samples per novel audio class in incremental sessions without forgetting the knowledge of old ones. In the proposed method, we utilize a decoupled training scheme to construct an Embedding Extractor (EE) and then to train a classifier in the base session. We design a Prototype Adaptation Network (PAN) which is used for classifier update and is trained using the samples of base audio classes in the episodic way (i.e., the few-shot learning paradigm). The EE is frozen after construction in base session, whereas the classifier is continually expanded and updated by the PAN in incremental sessions. Because the information learned from both labeled support samples and unlabeled query samples is beneficial for audio classification, both of them, rather than only labeled support samples, are used to train PAN and update classifier. In addition, the classifier prototypes in prior incremental sessions are adopted for updating the counterparts in current incremental session. As a result, the distances among the updated prototypes can be enlarged, and thus the confusions among different audio classes are expected to be reduced.
Three audio datasets, named NSynth-100, FSC-89 and LS-100 are generated by selecting samples from three public audio corpora of the NSynth, FSD-MIX-CLIP and LibriSpeech, respectively. To reproduce our experiments, the generation details (including metadata, explanations) of these three audio datasets are described at [https://github.com/vinceasvp/FCAC](https://github.com/vinceasvp/FCAC). Results indicate that our method outperforms baseline methods in terms of Average Accuracy (AA) and Performance Dropping rate (PD), and has advantages over most baseline methods in terms of memory requirement and computational complexity.
In short, the main contributions of the work in this paper are summarized as follows:
1. To continually recognize novel audio classes and remember old audio classes in each incremental session, we design a dynamically expanded classifier with self-attention modified prototypes. The classifier is updated by the PAN. The PAN is a self-attention neural network which can effectively make use of prototypes of prior sessions and unlabeled query samples of current session to update all prototypes of the classifier. Although the basic module of the PAN is similar to the self-attention module used in prior works, on the whole, it is a new network with novel architecture and is specifically designed for updating the classifier. In addition, the update of the classifier using both prior prototypes and unlabeled query samples are not considered in prior works.
2. We propose a Strategy of Training Data Usage (STDU) in base session for training the EE and PAN. The STDU is
specifically designed for our FCAC method, which is also not used in previous works.
3. We propose a pseudo incremental learning strategy for training the PAN in episodic way. The proposed strategy can effectively use the large-scale training dataset of base session to train the PAN which is expected to have strong generalization capability in incremental sessions.
4. We propose a method for solving the problem of FCAC. We comprehensively evaluate the effectiveness of the proposed method, and compare it with baseline methods on three audio datasets from different aspects. Experimental results show that the proposed method has advantages over the baseline methods in terms of both AA and PD.
In summary, the four contributions above make the proposed FCAC method different from the existing FSCIL methods in computer vision, even if these two kinds of methods basically have the same framework (i.e., front-end feature extractor plus back-end classifier). The rest of this paper is organized as follows. Section II introduces the details of the proposed method. Section III presents the experiments and discussions, and the conclusions are finally drawn in Section IV.
## II Method
This section introduces our method, including descriptions of problem definition, whole framework, EE, PAN, and classifier.
### _Problem Definition_
In this work, we focus on the problem of FCAC, which aims to continually recognize novel audio classes from few training samples of novel audio classes without forgetting old ones. The incremental sessions of FCAC come in sequence. Once the update of the classification system enters next session, the training samples in all prior sessions are no longer available. However, the evaluation of the classification system in each session involves audio classes in current and all prior sessions.
The training and evaluation (testing) datasets of different sessions are denoted as \(\{\mathbf{D}^{t}_{0},\mathbf{D}^{t}_{1},\...,\mathbf{D}^{t}_{l},\...,\mathbf{D}^{t}_{l-1}\}\) and \(\{\mathbf{D}^{e}_{0},\ \mathbf{D}^{t}_{1},\...,\mathbf{D}^{t}_{l},\...,\mathbf{D}^{t}_{l-1}\}\), respectively, where \(t\), \(e\) and \(l\) stand for training, evaluation and total number of sessions, respectively. \(\mathbf{D}^{t}_{i}\) and \(\mathbf{D}^{e}_{l}\) have the same label set which is denoted by \(\mathbf{L}_{i}\). The datasets in different sessions do not have the same kind of audio classes, i.e., \(\forall i,j\) and \(i\neq j,\mathbf{L}_{i}\cap\mathbf{L}_{j}=\emptyset\). In the \(i\)th session, only \(\mathbf{D}^{t}_{i}\) can be used to train the classification system, and the trained classification system is evaluated on the evaluation datasets of both current and all prior sessions, namely \(\mathbf{D}^{e}_{0}\cup\mathbf{D}^{e}_{1}\)... \(\cup\mathbf{D}^{e}_{l}\). Session 0 is called base (initial) session, in which the audio classes, dataset \(\mathbf{D}^{t}_{0}\) and classifier are called base audio classes, base training dataset and base classifier, respectively. The dataset \(\mathbf{D}^{t}_{0}\) is a relatively large-scale dataset in which abundant samples per audio class are available to train the classification system. In converse, the datasets \(\mathbf{D}^{t}_{1}\) to \(\mathbf{D}^{t}_{l-1}\) in incremental sessions are small-scale datasets, and each of them consists of \(N\) audio classes, and each audio class has \(K\) training samples. That is, the training dataset \(\mathbf{D}^{t}_{i}\) in the \(i\)th (1\(\leq\)\(i\)(\(l\)-1)) incremental session is constructed as a \(N\)-way \(K\)-shot training dataset. For example, in the FSC-89 dataset, there are 59 audio classes in the base session and each audio class has 800 training samples, whereas 5 audio classes and 5 training samples per audio class are generally available in each incremental session.
### _Whole Framework_
As shown in Fig. 1, the proposed framework includes two kinds of sessions: base and incremental sessions. There are four sequential steps in the base session, namely pre-training EE, training PAN, training EE, and constructing base classifier. In each incremental session, the prototypes of the classifier are updated to recognize novel and old audio classes.
To prevent the audio classification system from forgetting the knowledge of old audio classes in incremental sessions, we decouple the learning of embeddings and the training of classifier. First, we divide \(\mathbf{D}^{t}_{0}\) into two parts with different kinds of audio classes: \(\mathbf{D}^{t}_{0,1}\) and \(\mathbf{D}^{t}_{0,2}\) (see Tables II to IV for their proportions in \(\mathbf{D}^{t}_{0}\) of three audio datasets). Then, we pre-train an EE in a typically supervised way with \(\mathbf{D}^{t}_{0,1}\) where adequate samples per audio class are available. The pre-trained EE can learn discriminative embeddings from Log Mel-spectra of samples for training the PAN. Then, \(\mathbf{D}^{t}_{0,1}\) and \(\mathbf{D}^{t}_{0,2}\) are used as pseudo base audio class and pseudo novel audio class, respectively, and independently split into many batches. Each batch consists of one support set and one query set. The PAN is episodically trained on each batch. After training on all batches, the PAN is frozen and used to update prototypes of classifier in incremental sessions. Afterwards, the EE is trained on \(\mathbf{D}^{t}_{0}\) in typically supervised way. After training, it is adopted to learn embeddings from Log Mel-spectra of audio samples. Finally, the mean vectors of embeddings of the same kind of audio class are computed and adopted as the prototypes of base classifier.
In base session, \(\mathbf{D}^{t}_{0}\) is split into \(\mathbf{D}^{t}_{0,1}\) and \(\mathbf{D}^{t}_{0,2}\). Label sets of \(\mathbf{D}^{t}_{0,1}\) and \(\mathbf{D}^{t}_{0,2}\) are denoted as \(\mathbf{L}_{0,1}\) and \(\mathbf{L}_{0,2}\), respectively, and \(\mathbf{L}_{0,1}\cap\mathbf{L}_{0,2}=\emptyset\), \(\mathbf{L}_{0,1}\cup\mathbf{L}_{0,2}=\mathbf{L}_{0}\). The motivation for splitting \(\mathbf{D}^{t}_{0}\) into two parts is based on three considerations. First, to make the PAN have strong generalization ability in incremental sessions, adequate samples of novel audio classes are needed to train the PAN under the incremental learning scenario, namely in the way of \(N\)-way \(K\)-shot learning. However, training data of only one session can be used in each incremental session where the amount of training data is quite limited. Second, abundant data is also required to train the EE for making it have strong stability and generalization ability in incremental sessions. Third, in our experiment, it is found that poor results for audio classification are obtained if the EE and PAN are trained with the same kinds of audio classes in \(\mathbf{D}^{t}_{0}\). The reason is probably that the EE pre-trained on the audio classes in \(\mathbf{D}^{t}_{0}\) can represent the audio classes very well and the PAN training on the same kind of audio classes will not acquire useful information for improving the generalization ability of the PAN. The above splitting of \(\mathbf{D}^{t}_{0}\) for pre-training EE and training PAN in base session is called the STDU. The reason why the STDU is designed as described above is that abundant samples in \(\mathbf{D}^{t}_{0}\) can be effectively used to train the EE and PAN, and make them have strong generalization ability in the incremental sessions.
After training the EE, PAN and base classifier in the base session, the framework can be used for incremental learning. In each incremental session, support and query embeddings are learned by the EE from Log Mel-spectra of samples in support and query sets, respectively. Next, embeddings of current session and prototypes of last session are all fed to the PAN for classifier update. Finally, all updated prototypes are used as the dynamically expanded classifier for evaluation.
### _Embedding Extractor_
The ResNet [44] based neural network has strong capability to learn discriminative embeddings, and has been successfully applied to many tasks related to audio and video processing [45, 46, 47]. Inspired by its success for embedding learning in these tasks, the EE adopted in this work is the backbone of a convolutional neural network based on the ResNet.
The architecture of the EE is shown in Fig. 2, which consists of one convolutional layer, eight ResNet blocks with four types of parameters, one average pooling layer, and one Fully Connected (FC) layer. The Softmax layer in the network of Fig. 2 is utilized for training the EE, and is then removed after finishing the EE training. Each ResNet block consists of two convolutional layers followed by the operations of ReLU (Rectified Linear Unit) and element-wise summation. The parameters of each layer and block are presented in Fig. 2. The EE is trained using the Adam optimizer [48] with cross-entropy loss, namely in typically supervised training way. In evaluation stage, the embeddings learned from Log Mel-spectrum of samples are output from the FC layer of the trained EE.
The input of the EE is Log Mel-spectrum which is widely used as input feature of neural network for embedding learning [49]-[51]. Its extraction procedure is briefly introduced as follows. First, each sample is split into overlapping audio frames with fixed length and the audio frames are windowed by a Hamming window. Next, the fast Fourier transformation is conducted on the windowed audio frames to generate power spectrum which is then smoothed by a set of Mel-scale filters. Finally, the logarithm operation is performed on the output of Mel-scale filters to produce the Log Mel-spectrum.
### _Prototype Adaptation Network_
To make the classifier have discriminative decision boundaries over all audio classes, we design a PAN to update prototypes of the classifier in incremental sessions. A prototype is initialized as the mean vector of all support (or training) embeddings of one audio class, and then updated by the PAN. The PAN is a self-attention network, which mainly consists of two modules, namely Attentive Prototype Generation Module (APGM) and Prototype Query-embedding Adaptation Module (PQAM). The APGM is used to generate prototypes of current session from support embeddings. The PQAM is used to update prototypes
Fig. 1: The proposed framework for FCAC includes two kinds of sessions: base session and incremental session. The embedding extractor, prototype adaptation network, and base classifier are sequentially trained in base session, while prototypes of classifier are updated in each incremental session. EE: embedding extractor: PAN: prototype adaptation network; \(\boldsymbol{P}\): prototypes; \(\boldsymbol{E}_{i}\)- query embeddings; \(\boldsymbol{E}_{i}\): support embeddings.
Fig. 2: The architecture of the embedding extractor.
and query embeddings, whose input is the concatenation of prototypes of current session, query embeddings of current session, and prototypes of last session. The architecture of the PAN is depicted in Fig. 3.
The motivation for designing the PAN as described above is based on the consideration that it is key for the PAN design to effectively use few accessible data to generate representative prototypes of novel classes and adjust the prototypes of all classes in current session. We design the APGM and PQAM mainly using self-attention mechanism [52] since it can acquire the mutual relationship among all input vectors (i.e., support embeddings of novel audio classes for the APGM, prototypes of all audio classes and query embeddings of current session for the PQAM). Hence, the information contained in few accessible samples can be effectively utilized. The generated prototypes of novel audio classes are expected to be representative, and the updated prototypes of all audio classes are hopefully to be separated from each other in the prototype space.
Abundant data of novel audio classes is needed to train the PAN for guaranteeing strong generalization capability of the PAN in incremental sessions. In addition, the base training dataset \(\mathbf{D}_{0}^{t}\) has adequate samples, whereas the incremental training dataset \(\mathbf{D}_{1}^{t}\) has few samples. Hence, we propose a pseudo incremental learning strategy for training the PAN in episodic way on \(\mathbf{D}_{0}^{t}\) to imitate the real evaluation scenario. The proposed strategy is actually a meta-learning-based algorithm [53], where a support set and a query set are a training subset and an evaluation subset, respectively.
The proposed strategy is summarized as Algorithm I, as shown in Table I. We first construct support set \(\mathbf{S}_{b}\) and query set \(\mathbf{Q}_{b}\) of pseudo base audio classes by randomly choosing samples from \(\mathbf{D}_{0,1}^{t}\). Similarly, we construct support set \(\mathbf{S}_{n}\) and query set \(\mathbf{Q}_{n}\) of pseudo novel audio classes by randomly selecting samples from \(\mathbf{D}_{0,2}^{t}\). Both \(\mathbf{S}_{b}\) and \(\mathbf{S}_{n}\) are merged to form support set \(\mathbf{S}\), while both \(\mathbf{Q}_{b}\) and \(\mathbf{Q}_{n}\) are merged to form query set \(\mathbf{Q}\). Next, embeddings of samples in \(\mathbf{S}\) and \(\mathbf{Q}\) are learned using the pre-trained EE. Prototypes of current session are generated by the APGM from support embeddings of \(\mathbf{S}\). All prototypes and query embeddings are updated using the PQAM whose input is the concatenation of three elements: prototypes of current session, query embeddings of current session, and prototypes of last session. After predicting labels \(\mathbf{\tilde{L}}_{q}\) for \(\mathbf{Q}\) by the PQAM based on the updated prototypes and updated query embeddings, cross-entropy loss of \(\mathcal{L}(\mathbf{L}_{\varphi},\mathbf{\tilde{L}}_{q})\) is computed for optimizing the PAN by the algorithm of stochastic gradient descent [54]. The steps above are repeatedly conducted by feeding various batches of support and query sets into the PAN until all samples in \(\mathbf{D}_{0}^{t}\) are selected once. After training, the PAN is frozen and will be utilized to update prototypes of the classifier in real incremental sessions.
```
Initialization: \(\mathbf{L}_{y}\) and \(\mathbf{\tilde{L}}_{y}\) are the ground-truth and predicted labels, respectively. \(\mathcal{L}(\cdot)\) is the cross-entropy loss. \(\mathbf{D}_{0,1}^{t}\) and \(\mathbf{D}_{1,2}^{t}\) are the first and rest parts of \(\mathbf{D}_{0}^{t}\), respectively. The EE is pre-trained on \(\mathbf{D}_{0,1}^{t}\). The PAN is randomly initialized on \(\mathbf{D}_{0}^{t}\), including the APGM and PQAM. While not done do: Construct support set \(\mathbf{S}_{a}\) and query set \(\mathbf{Q}_{b}\) for pseudo base audio classes by randomly selecting samples from \(\mathbf{D}_{0,1}^{t}\). Construct support set \(\mathbf{S}_{a}\) and query set \(\mathbf{Q}_{a}\) for pseudo novel audio classes by randomly selecting samples from \(\mathbf{D}_{0,2}^{t}\). Construct support set \(\mathbf{S}\) by merging \(\mathbf{S}_{a}\) and \(\mathbf{S}_{a}\); and construct query set \(\mathbf{Q}\) by merging \(\mathbf{Q}_{a}\) and \(\mathbf{Q}_{a}\); Learn embeddings for samples in \(\mathbf{S}\) and \(\mathbf{Q}\) by the pre-trained EE. Generate prototypes of current session by the APGM from support embeddings of \(\mathbf{S}\). Update all prototypes and query-embeddings by the PQAM whose input is the concatenation of prototypes of current session, query-embeddings of current session, and prototypes of last session. Predict labels \(\mathbf{\tilde{L}}_{q}\) for \(\mathbf{Q}\) by the PQAM based on the updated prototypes and query-embeddings. Compute cross-entropy loss of \(\mathcal{L}(\mathbf{L}_{\varphi},\mathbf{\tilde{L}}_{q})\). Optimize the PAN using the algorithm of stochastic gradient descent. End while Output: A trained PAN, including APGM and PQAM.
```
**Algorithm 1**Pseudo Incremental Learning Strategy for Training the PAN in episodic way on \(\mathbf{D}_{0}^{t}\).
#### Iii-B1 Attentive Prototype Generation Module
The architecture of the APGM is depicted in Fig. 4, whose main part is a self-attention module. The architecture design of the APGM is mainly inspired from the Transformer [52]. The self-attention module in the APGM can acquire the mutual relationship among all support embeddings of novel audio classes. Hence, all generated prototypes of audio classes are expected to be representative, which benefits for generating a classifier with strong discriminative ability.
Support embeddings \(\mathbf{E}_{s}\) that are learned from \(N_{nov}\times K\) support samples of current session are fed to the APGM, where \(N_{nov}\) and \(K\) denote number of novel audio classes and number of samples per novel audio class, respectively. Dimension of each embedding is \(D\). Three variables of \(\mathbf{X}_{1}\), \(\mathbf{X}_{2}\) and \(\mathbf{X}_{3}\) are obtained by conducting linear transformation on \(\mathbf{E}_{s}\), namely \(\mathbf{X}_{1}\)=\(\mathcal{V}_{1}(\mathbf{E}_{s})\), \(\mathbf{\tilde{X}}_{2}\)=\(\mathcal{V}_{2}(\mathbf{E}_{s})\) and \(\mathbf{X}_{3}\)=\(\mathcal{V}_{3}(\mathbf{E}_{s})\). Then, they are further processed by sequentially conducting the operations of matrix multiplication, scale, Softmax, matrix multiplication and linear transformation. That is, the output of the self-attention module in the APGM is \(\mathbf{X}^{\prime\prime}\), and is defined by
\[\mathbf{X}^{\prime\prime}=\mathcal{V}_{4}\Big{(}\text{softmax}\left(\frac{\mathbf{x}_{ 1}\mathbf{x}^{T}_{3}}{\sqrt{D}}\right)\mathbf{X}_{3}\Big{)}, \tag{1}\]
where T denotes transpose operation of a matrix; and the scale operation is defined as dividing by a coefficient \(\sqrt{D}\), namely \(\mathbf{Y}^{\prime}=\frac{\mathbf{Y}}{\sqrt{D}}=\frac{\mathbf{x}_{1}\mathbf{x}^{T}_{2}}{\sqrt{D}}\). Then, \(\mathbf{X}^{\prime\prime}\) and \(\mathbf{X}\) are element-wisely summed, and their summation is processed by an operation of layer normalization [55] for producing \(\mathbf{X}^{\prime\prime}\). Finally, the prototype of each audio class is generated by computing average of \(K\) vectors of \(\mathbf{X}^{\prime\prime\prime}\), and the \(K\) vectors belong to the same kind of audio class. After computing the average for all novel audio classes, \(N_{nov}\) prototypes of current session, \(\mathbf{P}_{nov}\), are obtained.
Fig. 3: The architecture of prototype adaptation network, which consists of an attentive prototype generation module (detailed in Fig. 4) and a prototype query-embedding adaptation module (detailed in Fig. 5).
The parameters of four Linear layers (linear transformations from \(\Psi_{1}\) to \(\Psi_{4}\)) are the tunable parameters of the APGM. Their parameters are iteratively tuned during the procedure of PAN training using the pseudo incremental learning strategy.
_2) Prototype Query-embedding Adaptation Module_
With the increase of prototypes of audio class, the prototype space will become crowded, which is unfavorable for audio classification. When audio classes come with groups in the incremental sessions, the prototypes generated in each session may only identify audio classes of current session satisfactorily. When audio classes of all prior sessions and current session are involved in evaluation, direct concatenation of prototypes cannot produce a classifier with strong discrimination ability and thus classification results will be unsatisfactory. To obtain discriminative decision boundaries over all novel and old audio classes, it is important for classifier update to acquire global location information of all prototypes in the prototype space. In addition, query embeddings of unlabeled query samples are also informative and should be used for classifier update. To reach these targets above, we design a PQAM to update prototypes of both old and novel audio classes, and to compute scores (cosine similarities) between query embeddings and prototypes. The architecture of the PQAM is depicted in Fig. 5.
The PQAM is similar to the APGM with minor differences. It also mainly consists of a self-attention module which can acquire the mutual relationship among prototypes of novel audio classes, prototypes of old audio classes and query embeddings. Hence, all updated prototypes of both novel and old audio classes are expected to be far apart in the prototype space, and the updated classifier will provide discriminative decision boundaries over all novel and old audio classes. To efficiently update the prototypes and compute the scores, \(K_{q}\) query embeddings \(\mathbf{E}_{q}\)=\(\{\mathbf{E}_{q,k}\}\), 1\(\leq\)\(k\)\(\leq\)\(K_{q}\), are simultaneously processed by the PQAM in practice. As illustrated in Fig. 5, the input variable of the PQAM is \(\mathbf{X}\)\(\in\)\(\mathbb{R}^{K_{q}\times(N_{out}+1)\times D}\) which is composed of \(K_{q}\) copies of the concatenation of \(N_{old}\) prototypes of \(\mathbf{P}_{old}\), \(N_{new}\) prototypes of \(\mathbf{P}_{new}\) and one embedding of \(\mathbf{E}_{q,k}\). The output of the self-attention module in the PQAM is \(\mathbf{X}^{\prime\prime}\) which can be also defined by Eq. (1). Afterwards, \(\mathbf{X}^{\prime\prime}\) and \(\mathbf{X}\) are element-wisely summed, followed by the operation of layer normalization [55] for obtaining \(\mathbf{X}^{\prime\prime\prime}\). Finally, the \(\mathbf{X}^{\prime\prime\prime}\) is split into two parts: updated query embeddings \(\mathbf{E}^{\prime}_{q}\) and updated prototypes \(\mathbf{P}^{\prime}\). The cosine similarities (scores) between each updated query embedding \(\mathbf{E}^{\prime}_{q,k}\) and \(\mathbf{P}^{\prime}\) is used to make decision in evaluation stage for each query sample.
Together with the parameters of four Linear layers of the APGM, the parameters of four Linear layers of the PQAM are iteratively tuned during the procedure of PAN training using the pseudo incremental learning strategy.
### _Dynamically Expanded Classifier_
The classifier consists of prototypes and each prototype stands for one audio class (i.e., one prototype per audio class). As depicted in Fig. 1, each prototype of base classifier is obtained by computing the mean vector of embeddings of the same kind
Fig. 4: The architecture of the APGM. \(\mathbf{E}_{i}\): support embeddings; \(N_{new}\): number of novel audio classes; \(K\): number of support samples per audio class; \(D\): dimension of each embedding(prototype); \(\mathbf{P}_{new}\): prototypesof novel audioclases.
Fig. 5: The architecture of the PQAM. \(\mathcal{K}_{i}\): number of query samples; \(\mathbf{P}_{old}\): prototypes of old audio classes; \(\mathbf{P}_{new}\): prototypes of novel audio classes; \(\mathbf{E}_{q}\): query embeddings; \(\mathbf{E}_{q,k}\): embedding of the \(k\)th query sample; \(N_{new}\): number of old and novel audio classes; \(N_{old}\): number of old audio classes; \(N_{new}\): number of novel audio classes; \(D\): dimension of each embedding (prototype).
of audio class. Each base audio class has abundant training samples, and thus the mean vector of embeddings of one base audio class can represent the audio class well. However, the number of training samples of each novel audio class is few, and thus simply using the mean vector of embeddings of few samples cannot effectively represent the differences between different kinds of audio classes. Hence, the classifier will produce unsatisfactory results in incremental sessions.
Here, we design a dynamically expanded classifier whose prototypes are expanded and updated by the PAN in each incremental session. The updating process of prototypes in each incremental session (real incremental learning) is the same as the training process of the PAN in base session (pseudo incremental learning), except for the samples they use. The samples that are used for updating the prototypes in incremental sessions and for training the PAN in base session are from the \(\mathbf{D}_{i}^{t}\) and the \(\mathbf{D}_{0}^{t}\), respectively. After updating all prototypes, the classifier can provide discriminative decision boundaries over all audio classes, and will perform well for audio classification.
## III Experiments And Discussions
In this section, experimental data and setups are first introduced. Then, we present ablation experiments and comparisons of different methods. Finally, we discuss generalization across audio datasets and conduct extended analyses for our method.
### _Experimental Datasets_
Experiments are performed on the datasets selected from three audio corpora, including FSD-MIX-CLIPS [56], NSynth [57], and LibriSpeech [58]. These three audio corpora are publicly available for research purposes and have been commonly used in previous works for audio classification.
The FSD-MIX-CLIPS is a programmatically mixed audio corpus, where the polyphony and signal-noise-ratio properties are controlled. In this audio corpus, there are 89 sound events (audio classes) and 614 K audio clips (samples). The length of each audio clip is 1 second. The set of audio classes covers a diverse range of real-world sounds, from human and animal sounds to natural, musical or miscellaneous sounds.
The NSynth is a large-scale audio corpus of musical notes. It includes 306,043 musical notes, and each of these notes is with a unique pitch, timbre, and envelope. There are 306,043 audio snippets in the NSynth, and each audio snippet is of four seconds for representing one type of instruments. There are 1,006 instruments (audio classes) in total in this audio corpus.
The LibriSpeech is a speech corpus of approximately 1,000 hours of audiobooks that are spoken by 2,484 speakers (audio classes). Training data is divided into 3 parts of 100 hours, 360 hours, and 500 hours. Development data and testing data are divided into the _clean_ and _other_ classes, respectively.
Audio datasets that are built from the FSD-MIX-CLIPS, NSynth and LibriSpeech, are denoted as FSC-89, NSynth-100 and LS-100, respectively. They are independently divided into two parts without overlaps of audio classes, namely base dataset \(\mathbf{D}_{0}\) and incremental dataset \(\mathbf{D}_{i}\) (1\(\leq\)\(i\)\(\leq\)(\(I\)-1)). The base dataset \(\mathbf{D}_{0}\) consists of base training dataset \(\mathbf{D}_{0}^{t}\) and base evaluation dataset \(\mathbf{D}_{0}^{t}\), while the incremental dataset \(\mathbf{D}_{i}\) is composed of incremental training dataset \(\mathbf{D}_{i}^{t}\) and incremental evaluation dataset \(\mathbf{D}_{0}^{t}\). \(\mathbf{D}_{0}^{t}\) is adopted to train the EE and the PAN in base session, while \(\mathbf{D}_{i}^{t}\) is used to update prototypes of the classifier for evaluation. When \(\mathbf{D}_{0}^{t}\) is used to train the EE in typically supervised way, its samples are fed to the EE as a whole instead of being divided into many small batches. When \(\mathbf{D}_{0}^{t}\) is used to train the PAN in episodic way, its samples are split into two parts: \(\mathbf{D}_{0,1}^{t}\) (pseudo base audio classes) and \(\mathbf{D}_{0,2}^{t}\) (pseudo novel audio classes). \(\mathbf{D}_{0,1}^{t}\) and \(\mathbf{D}_{0,2}^{t}\) are independently divided into many batches and each batch is composed of a support set and a query set. When \(\mathbf{D}_{i}^{t}\) is used to update prototypes in episodic way, its samples are split into many batches and each batch consists of a support set and a query set.
In each episodic training stage, \(N\)_K_ samples are randomly chosen from \(N\) audio classes (\(K\) samples per audio class) in the training dataset (\(\mathbf{D}_{0}^{t}\) or \(\mathbf{D}_{i}^{t}\)) to construct the support set, and then \(N\)_K_\(q\) different samples of the same \(N\) audio classes (\(K_{q}\) samples per audio class) are also randomly selected from the training dataset to generate the query set. The selections of both audio classes and samples per audio class are repeated until all audio classes and their samples in the training dataset are chosen once. The selected samples in different batches are different to each other. In evaluation stage, all evaluation datasets are fed to the EE and updated classifier as a whole rather than being divided into many small batches. Tables II, III and IV present the detailed information of the FSC-89, NSynth-100 and LS-100, respectively.
### _Experimental Setup_
All experiments are carried out on a machine whose main configurations are as follows: two CPUs of Intel Xeon 8124M with 3.5 GHz, a RAM of 128 GB, and three GPUs of RTX3090. The metric of accuracy is defined as the number of correctly classified samples divided by the total number of samples involved in classification, which is adopted to evaluate the performance of different methods in each session. The metrics of average accuracy AA and performance dropping rate PD are used to measure the overall performance of different methods. They are defined by
\[AA=\frac{1}{r}\sum_{i=0}^{i-1}A_{i}, \tag{2}\]
\[\left\{\begin{aligned} PD=A_{0}-A_{I-1},&\text{for the classes of Base and Both}\\ PD=A_{1}-A_{I-1},&\text{for the classes of Novel}\end{aligned}\right. \tag{3}\]
where \(A_{i}\) denotes the accuracy in session \(i\). The higher the AA or the lower the PD, the better the performance of the methods. Besides, computational complexity and memory requirement of different methods in incremental sessions are measured by the metrics of Average Training Time (ATT) and Storage Space (SS), respectively. The ATT is defined as the average time required to train the classification system in all incremental sessions. The SS is defined as the memory space used to store samples (or embeddings) and parameters of the classification system. The lower the ATT and the SS, the lower computational complexity and the memory requirement of the methods, respectively.
The framework for FCAC mainly includes the EE, PAN and classifier. Its main parameters are given in Table V.
### _Ablation Experiments_
In this subsection, we conduct ablation analyses to assess the effectiveness of main components of the proposed method. The NSynth-100 is used as experimental dataset in this experiment for simplicity.
As described in section II.\(B\), the datasets for pre-training EE and training PAN are \(\mathbf{D}_{0,1}^{t}\) and \(\mathbf{D}_{0}^{t}\), respectively. \(\mathbf{D}_{0}^{t}\) is further divided into \(\mathbf{D}_{0,1}^{t}\) and \(\mathbf{D}_{0,2}^{t}\) to generate pseudo base classes and pseudo novel classes, respectively. That is, the datasets used for pre-training EE and training PAN are different to each other. The method of data construction for pre-training EE and training PAN is called the STDU. In addition, APGM and PQAM are proposed to generate prototypes of novel audio classes and to update prototypes of all audio classes, respectively. We discuss the impacts of STDU, APGM and PQAM on the performance of the proposed method. In this experiment, the value of (\(N\), \(K\)) is set to (5, 5) without losing generality.
The results obtained by our method on NSynth-100 with different combinations of STDU, APGM and PQAM are listed in Table VI. When all modules of STDU, APGM and PQAM are used in the proposed framework, our method achieves the best performance. The highest AA score of 93.31% and the lowest PD score of 12.90% are obtained for the Both (both Base and Novel). In addition, by comparing the case of 1 to the cases of 2, 3 and 5 in Table VI, it can be known that each one of these three modules above has contribution to the performance improvement of our method in AA and PD.
### _Comparison of Different Methods_
In this subsection, we compare our method with five baseline methods for FCAC in AA and PD. The baseline methods are denoted as Finetune [59], iCaRL [60], DFSL [42], ARP [43], and CEC [39], and are widely used in previous related works.
The baseline methods are briefly introduced as follows. In the Finetune method, the classification system can quickly adapt to novel audio classes after finetuning using training samples of novel audio classes in incremental sessions. As a result, the classification system tends to overfit the novel audio classes and forget the old ones. In the iCaRL method, both strong classifiers and a feature representation are learned using the strategies of data retention and knowledge distillation. When novel audio classes appear continually, the classification system tends to gradually forget the old audio classes. In the DFSL method, an attention-based weight generator and a cosine-similarity based classifier are designed for realizing FCAC. In the ARP method, the prototypes are adaptively refined by a dynamic relation projection module. In the CEC method, continually evolved classifiers are designed for recognizing novel audio classes and a graph model is used to propagate the context information between classifiers for prototype adaptation. Based on the introductions above, main technical differences of different methods are presented in Table VII.
All baseline methods are implemented with open-source codes by the authors, whose main parameters are set according to the suggestions in the corresponding references and optimally tuned on the training data. Different methods are compared on three audio datasets. In this experiment, the value of (_N_, _K_) is set to (5, 5). Under the same experimental conditions, the scores of accuracies, AA and PD that are obtained by different methods on the audio datasets of FSC-89, NSynth-100 and LS-100 are presented in Tables VIII, IX and X, respectively.
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{8}{c}{Accuracy in various sessions (\%)} \\ \cline{2-11} & Session & 0 (0-58) & 1 (59-63) & 2 (64-68) & 3 (69-73) & 4 (74-78) & 5 (79-83) & 6 (84-88) & AA (\%) & PD (\%) \\ \hline \multirow{4}{*}{Finetune} & Base & 42.28 & 31.84 & 29.15 & 24.98 & 20.54 & 19.63 & 16.19 & 26.37 & 26.19 \\ & Novel & - & 29.70 & 28.85 & 23.23 & 25.28 & 21.44 & 17.72 & 24.37 & 11.98 \\ & Both & 42.28 & 31.67 & 29.11 & 24.63 & 21.74 & 20.17 & 16.70 & 26.61 & 25.58 \\ \cline{2-11} & Base & 42.48 & 33.02 & 32.14 & 28.24 & 24.78 & 19.98 & 21.65 & 28.90 & 20.83 \\ \cline{2-11} & Novel & - & 25.10 & 19.05 & 17.10 & 19.43 & 19.30 & 15.08 & 19.18 & 10.02 \\ & Both & 42.48 & 32.40 & 30.25 & 25.99 & 23.42 & 19.78 & 19.44 & 27.68 & 23.04 \\ \cline{2-11} & Base & 42.36 & 36.58 & 36.23 & 35.97 & 35.76 & 35.66 & 35.55 & 36.87 & 6.81 \\ \cline{2-11} & Novel & - & 19.40 & 12.00 & 12.50 & 12.15 & 11.62 & 11.38 & 13.17 & 8.02 \\ \cline{2-11} & Both & 42.36 & 35.23 & 32.72 & 31.21 & 29.79 & 28.51 & 27.40 & 32.46 & 14.96 \\ \cline{2-11} & Base & 42.04 & **41.36** & **39.52** & 38.40 & 37.37 & 36.67 & 36.05 & 38.77 & 5.99 \\ \cline{2-11} & Novel & - & 23.35 & 22.19 & 20.05 & 19.99 & 19.14 & 18.36 & 20.51 & **4.99** \\ \cline{2-11} & Both & 42.04 & 39.95 & 37.01 & 34.68 & 32.97 & 31.45 & 30.09 & 35.46 & 11.95 \\ \cline{2-11} & Base & 42.16 & 40.56 & 39.39 & **38.83** & **38.28** & **37.87** & **37.57** & **39.24** & **4.59** \\ \cline{2-11} & Novel & - & 31.31 & 22.08 & 23.03 & 25.09 & 25.40 & 23.90 & 25.13 & 7.41 \\ \cline{2-11} & Both & 42.16 & 39.84 & 36.88 & 35.63 & 34.94 & 34.15 & 32.96 & 36.65 & 9.20 \\ \cline{2-11} & Base & **42.92** & 40.16 & 38.98 & 38.41 & 37.61 & 37.17 & 36.62 & 38.84 & 6.30 \\ \cline{2-11} & Novel & - & **38.32** & **31.11** & **32.80** & **34.15** & **32.84** & **30.97** & **33.37** & 7.35 \\ \cline{2-11} & Both & **42.92** & **40.01** & **37.84** & **37.27** & **36.73** & **35.88** & **34.71** & **37.91** & **8.21** \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Results obtained by different methods on FSC-89
As shown in Tables VIII to X, our method obtains AA scores of 37.91%, 93.31%, and 86.39% for both base and novel audio classes (the rows of Both) on the FSC-89, NSynth-100, and LS-100, respectively. These AA scores are higher than the counterparts achieved by the baseline methods. Our method produces PD scores of 8.21%, 12.90%, and 12.59% for both base and novel audio classes (the rows of Both) on the FSC-89, NSynth-100, and LS-100, respectively. These PD scores are lower than the counterparts obtained by the baseline methods. That is, our method outperforms all baseline methods in terms of both AA and PD when evaluated on three audio datasets. The advantage of our method over the baseline methods in the two metrics above mainly benefits from the STDU in base session, and the design of APGM and PQAM for updating prototypes. These three modules above work together to effectively reduce the confusions between the prototypes of various audio classes. Compared with the baseline methods, the proposed method can recognize novel audio classes better and forget old audio classes less.
In addition, the AA scores obtained by different methods on the FSC-89 are lower than that on the NSynth-100 and LS-100 for all methods in all sessions. The reasons are probably that the background noise in the FSC-89 is much stronger than that in other two datasets, and the sources of samples of the FSC-89 are less consistent compared to other two datasets. Hence, the inter-class confusion and intra-class inconsistency of the FSC-89 are larger than that of the NSynth-100 and LS-100.
To observe the confusions among different audio classes in the last incremental session, we plot the confusion matrices obtained by different methods on the LS-100, as illustrated in Fig. 6.
In the confusion matrices obtained by different methods, most values lie in the diagonal which denotes the ground-truth, and the confusions among base audio classes (classes 0 to 59) are obviously less than that among novel audio classes (classes 60 to 99). In addition, our method generates a less scattered and lighter confusion matrix, which shows our method obtains higher accuracy scores (less confusions) than other methods.
### _Computational Complexity and Memory Requirement_
Because the training of initial classification system in the base session is generally conducted on a computing machine with high performance, the training time of the initial classification system is not critical for the problem of FCAC. However, the computational efficiency of different methods in incremental sessions is important in practice. Hence, the training time of different methods in incremental sessions on the LS-100 is recorded for computing their ATT values. For fair comparison, only the training time of different classification systems is included, whereas the time for data preparation and embedding
Fig. 6: Confusion matrices of the last incremental session obtained by different methods on the LS-100.
learning is excluded.
This experiment is also conducted on the computing machine introduced in section III.\(B\) but only one GPU of RTX3090 is adopted here. It can be known from the second column of Table XI that the ATT of the Finetune method, iCaRL method, DSFL method, ARP method, CEC method and our method is 29.39 s, 144.20 s, 0.16 s, 0.48 s, 0.93 s and 0.84 s, respectively. That is, the computational complexity of the Finetune and iCaRL methods is much higher than that of other four methods. The reason is that the Finetune and iCaRL methods need many epochs to fine-tune the entire classification system in order to obtain satisfactory results. In converse, other four methods only need one or two epochs to update the classifiers (or the prototypes) instead of the entire classification system. In terms of ATT, our method has advantage over the baseline methods except the DFSL and ARP methods.
In practical application, samples may involve privacy and the storage space of intelligent audio terminals is usually limited. Hence, it is often not allowed to store abundant samples (or embeddings) and too many parameters, such as prototypes, weights. We compare the memory requirements of different methods in incremental sessions only, because the memory requirements of different methods in base session have been determined after the initialization of the classification system. The memory requirements (i.e., SS) of different methods in incremental sessions are listed in the third column of Table XI.
Because the Finetune method only needs to tune the parameters of the classification system obtained in last session using samples of novel audio classes without storing any samples and parameters for novel audio classes, the SS of the Finetune method in incremental sessions is always equal to 0. The iCaRL method needs to store some representative samples for each novel audio class in all incremental sessions. The SS of the iCaRL method is equal to \(N_{c}K_{s}L_{s}\), where \(N_{c}\), \(K_{s}\) and \(L_{s}\) denote total number of audio classes in all incremental sessions, the number of samples per audio class and the length of one sample, respectively. Our method, the DFSL method, the ARP method and the CEC method need to store one prototype (or weight vector) for each novel audio class in all incremental sessions. The SS of these four methods is equal to \(N_{c}D\), where \(D\) denotes the dimension of one prototype (or weight vector). \(D\) is generally smaller than \(L_{s}\), and thus the memory requirements of these four methods are lower than that of the iCaRL method in incremental sessions.
### _Generalization across Datasets_
In all experiments above, the training data and the evaluation data are selected from the same audio dataset. To evaluate the generalization capability of our method across audio datasets, the training data and the evaluation data come from different audio datasets. That is, when the training data is chosen from one audio dataset (e.g., FSC-89), the evaluation data is selected from the remaining two audio datasets (e.g., NSynth-100 and LS-100). In this experiment, the value of \((N,K)\) is also set to (5, 5) without losing generality.
In the first row of Table XII, the item on the left side of the arrow (e.g., "FS" in "FS\(\rightarrow\)NS") represents the training data, while the item on the right side of the arrow (e.g., "NS" in "FS\(\rightarrow\)NS") denotes the evaluation data. The AA scores of all sessions obtained by our method across audio datasets are presented in Table XII. Our method obtains AA scores of 40.31%, 39.37%, 83.83%, 77.50%, 78.71%, and 77.31% for the class of Both, when audio datasets are FS\(\rightarrow\)NS, FS\(\rightarrow\)LS, NS\(\rightarrow\)FS, NS\(\rightarrow\)LS, LS\(\rightarrow\)FS, and LS\(\rightarrow\)NS, respectively.
As given in Tables VIII, IX, and X, our method obtains AA scores of 37.91%, 93.31%, and 86.39% for the class of Both, when audio datasets are FS\(\rightarrow\)FS, NS\(\rightarrow\)NS, and LS\(\rightarrow\)LS (training and evaluation data from the same audio datasets), respectively. The AA score of 37.91% (FS\(\rightarrow\)FS) is lower than the AA scores of 40.31% (FS\(\rightarrow\)NS) and 39.37% (FS\(\rightarrow\)LS). However, the AA score of 93.31% (NS\(\rightarrow\)NS) is higher than the AA scores of 83.83% (NS\(\rightarrow\)FS) and 77.50% (NS\(\rightarrow\)LS). Similarly, the AA score of 86.39% (LS\(\rightarrow\)LS) is higher than the AA scores of 78.71% (LS\(\rightarrow\)FS) and 77.31% (LS\(\rightarrow\)NS). That is, our method achieves better results when the training data (except the FSC-89) and the evaluation data are from the same audio datasets. Furthermore, when the training data is from the FSC-89, even if the training data and the evaluation data are from different audio datasets, our method produces larger AA scores. The reasons are probably that samples in the FSC-89 are relatively noisy (with evident background noise) and sources of samples in the FSC-89 are diverse. Hence, the distribution range of time-frequency characteristics of samples in the FSC-89 is wider and may overlap with that of samples in the LS-100 and NSynth-100. Therefore, the classification system which is trained on the FSC-89 performs better on the LS-100 and NSynth-100. In summary, our method generalizes well across audio datasets instead of overfitting on a single dataset.
### _Extended Analyses_
The first extended analysis is to discuss the settings of \(N\)-way \(K\)-shot for training the PAN in base session and for updating the classifier in incremental sessions. Specifically, we fix the number of query samples as 15 (\(K_{q}\)=15) and all modules of STDU, APGM and PQAM are adopted in this experiment. We discuss the impacts of the values of both \(N\) and \(K\) on the performance of our method. We set the same values of \(K_{q}\), \(N\) and \(K\) for training the PAN and updating the classifier. We select the values of both \(N\) and \(K\) from {1, 5, 10, 15, 20}. The accuracy of the last incremental session obtained by our method on the LS-100 is presented in Fig. 7. The following observations can be obtained from Fig. 7. First, when the value of (\(N\), \(K\)) is equal to (5, 5), our method obtains the highest
accuracy score of 82.50%. Second, for the same number of ways, the larger the number of shots, the higher the accuracy scores (except 5-way 5-shot). The reason is probably that with the increase of shots, the classification system obtains more information about novel audio classes and thus obtains higher accuracy scores. Third, for the same number of shots, when the number of ways is equal to 5, our method obtains the highest accuracy score (except 20-way 20-shot). When the number of ways deviates from 5, the accuracy scores obtained by our method decrease. The possible reasons are as follows. When the number of ways decreases, the number of incremental sessions will increase and thus the old audio classes are more likely to be forgotten (the more sessions, the more likely the old audio classes will be forgotten). When the number of ways increases, the number of novel audio classes in one incremental session will increase and thus the confusions between novel audio classes in each incremental session is more likely to increase (the more audio classes, the greater the possibility of confusion between audio classes).
The second extended analysis is to visually demonstrate the locations of query embeddings and prototypes before and after being updated by the PAN. We use the t-SNE [61] to map query embeddings and prototypes into two-dimensional space as depicted in Fig. 8. The Python library of _scikit-learn_ is used to reduce the dimensionality of query embeddings and prototypes. The Python library of _matplotlib_ is adopted to plot Fig. 8. Five audio classes are randomly chosen from the LS-100 as the base audio classes, and five new audio classes are added as the novel audio classes in incremental session. It can be observed from Fig. 8 that prototypes of the classifier are shifted away from the confusion region by the PAN to produce more discriminative decision boundaries when novel audio classes are involved.
## IV Conclusions
In this study, we have investigated a newly-emerging problem of FCAC. Moreover, we have tried to solve this problem by designing a dynamically expanded classifier with self-attention modified prototypes. Based on the detailed description of our method and comprehensive experiments and discussions, the following two conclusions can be drawn.
First, our method exceeds previous methods for FCAC in terms AA and PD under the same experimental conditions. As a result, our method is a state-of-the-art method for solving the problem of FCAC. In addition, our method has advantage over most baseline methods in terms of memory requirement and computational complexity.
Second, we design a PAN for updating prototypes of the classifier in incremental sessions. The PAN is a self-attention network and can effectively take advantage of prototypes of prior sessions and unlabeled query samples of current session for updating all prototypes of the classifier. In addition, we propose a STDU in base session to train the EE and PAN, which makes the EE and PAN possess better generalization capability in incremental sessions.
Although our method has advantages over baseline methods, there is still room for improvement in this work. For example, we did not update the EE in each incremental session and thus the generalization capability of the EE needs to be enhanced. In addition, we did not consider the implementation of the proposed method on intelligent audio terminals with limited computing resources. The future work will include two parts. First, we will design a strategy to update the EE together with the classifier in incremental sessions for further improving the performance of our method. Second, to meet requirements for lightweight applications, we will decrease the computational complexity and memory requirement of our method by taking effective measures, such as embedding grouping, network quantization, self-knowledge distillation. Accordingly, we can make the proposed framework lighter for directly deploying our method on intelligent audio terminals.
|
2309.08939 | An Unified Search and Recommendation Foundation Model for Cold-Start
Scenario | In modern commercial search engines and recommendation systems, data from
multiple domains is available to jointly train the multi-domain model.
Traditional methods train multi-domain models in the multi-task setting, with
shared parameters to learn the similarity of multiple tasks, and task-specific
parameters to learn the divergence of features, labels, and sample
distributions of individual tasks. With the development of large language
models, LLM can extract global domain-invariant text features that serve both
search and recommendation tasks. We propose a novel framework called S\&R
Multi-Domain Foundation, which uses LLM to extract domain invariant features,
and Aspect Gating Fusion to merge the ID feature, domain invariant text
features and task-specific heterogeneous sparse features to obtain the
representations of query and item. Additionally, samples from multiple search
and recommendation scenarios are trained jointly with Domain Adaptive
Multi-Task module to obtain the multi-domain foundation model. We apply the
S\&R Multi-Domain foundation model to cold start scenarios in the
pretrain-finetune manner, which achieves better performance than other SOTA
transfer learning methods. The S\&R Multi-Domain Foundation model has been
successfully deployed in Alipay Mobile Application's online services, such as
content query recommendation and service card recommendation, etc. | Yuqi Gong, Xichen Ding, Yehui Su, Kaiming Shen, Zhongyi Liu, Guannan Zhang | 2023-09-16T10:00:02Z | http://arxiv.org/abs/2309.08939v1 | # An Unified Search and Recommendation Foundation Model for Cold-Start Scenario
###### Abstract.
In modern commercial search engines and recommendation systems, data from multiple domains is available to jointly train the multi-domain model. Traditional methods train multi-domain models in the multi-task setting, with shared parameters to learn the similarity of multiple tasks, and task-specific parameters to learn the divergence of features, labels, and sample distributions of individual tasks. With the development of large language models, LLM can extract global domain-invariant text features that serve both search and recommendation tasks. We propose a novel framework called S&R Multi-Domain Foundation, which uses LLM to extract domain invariant features, and Aspect Gating Fusion to merge the ID feature, domain invariant text features and task-specific heterogeneous sparse features to obtain the representations of query and item. Additionally, samples from multiple search and recommendation scenarios are trained jointly with Domain Adaptive Multi-Task module to obtain the multi-domain foundation model. We apply the S&R Multi-Domain foundation model to cold start scenarios in the pretrain-finetune manner, which achieves better performance than other SOTA transfer learning methods. The S&R Multi-Domain Foundation model has been successfully deployed in Alipay Mobile Application's online services, such as content query recommendation and service card recommendation, etc.
search and recommendation, LLM, multi-domain recommendation +
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †: Footnote †: thanks: [
+
Footnote †:
downstream tasks. We are inspired by the strong expressive power of natural language features and propose to build the Search and Recommendation Foundation model on top of LLMs, which extract low-level domain-invariant text features of the query (Q) and item (I). The major difference between our S&R foundation model and traditional multi-domain multi-task models is how we use the domain-invariant text features to help constrain the divergence of different tasks, which alleviates data imbalance, negative transferring, and item heterogeneous issues. To summarize, our proposed S&R Foundation model has the following key contributions:
* We apply LLMs in S&R Multi-Domain Foundation model, and extract domain invariant text features to help mitigate the negative transferring and item heterogeneous issues in the multi-domain settings.
* We novelly proposed the Aspect Gating Fusion (Domain-Specific Gating) to fuse the ID feature, text features from LLMs, and sparse features. The Domain Adaptive Multi-Task module is also used to extract the domain-specific query and item towers' representations.
* For the cold start of new scenarios, we have conducted extensive experiments both offline and online, to show the effectiveness of supervised fine-tuning of our S&R Foundation model in downstream tasks, which is now fully deployed online and serving in Alipay's mobile application.
## 2. Proposed Model
### Problem Formulation
Given a set of \(K\) search and recommendation tasks \(\{D_{k}\}_{k=1}^{K}\), \(D_{k}\) denotes the dataset for the \(k\)-th task. We let \(\mathcal{U}=\{u_{1},u_{2},...,u_{N}\}\) denote the user set, \(\mathcal{I}=\{i_{1},i_{2},...,i_{M}\}\) denote the item set and \(\mathcal{Q}=\{q_{1},q_{2},...,q_{T}\}\) denote the search query set. In real-world scenarios, items in search and recommendation usually come from different domains and are heterogeneous. Some items are shared across multiple domains and some items belong to each specific domain. And we let \(\mathcal{I}=\mathcal{I}_{1}\cup\mathcal{I}_{2}\cup...\cup\mathcal{I}_{K}\) denote the union of all items in \(K\) domains, which contains \(M\) items in total. We aim to jointly train a search and recommendation (S&R) foundation model \(M_{Foundation}^{S\&R}\) in the multi-task setting and predict the probability of user \(u_{l}\) click the item \(i_{l}\) given input query \(q_{l}\) as \(p(y_{l}^{ctr}=1|u_{l},q_{l},i_{l})\). And for search scenarios, additional query-item relevance score is also predicted as \(p(y_{l}^{sim}=1|q_{l},i_{l})\). For cold start of a new search or recommendation scenario \(D^{s}\), we restore parameters of embedding tables and partial network structures from the pretrained S&R foundation model \(M_{Foundation}^{S\&R}\), and then apply supervised fine-tuning on the downstream tasks, such as click through rate (CTR) prediction, query-item relevance prediction, etc. For the search task \(D_{k}^{S}\), we let \(D_{k}^{S}=\{x_{l}=(u_{l},q_{l},i_{l}),y_{l}\}_{l}\), which denotes the search ranking task given the triple input of (user, query, item) as \((u_{l},q_{l},i_{l})\). For the recommendation task \(D_{k}^{R}\), we set search query set Q as emptyset \(\emptyset\) in \(D_{k}^{R}=\{x_{l}=(u_{l},q_{l}=\emptyset,i_{l}),y_{l}\}_{l}\).
### S&R Multi-Domain Foundation Model
As illustrated in Figure 1, the S&R Multi-Domain Foundation model has three main components: the User-Query-Item encoding module, the Aspect Gating Fusion module, and the Domain-Adaptive Multi-Task module. Firstly, raw features of user, query and item pass through the embedding layers, and we extract the ID embedding, token-level text embedding and sparse features' embedding. We apply LLM to extract domain-invariant text features of query and item towers, which minimize the divergence of features' distribution cross multiple domains. Secondly, the Aspect Gating Fusion module is designed to merge different groups of ID, text, sparse features' embedding. The fusion network is to balance the relative importance of ID, text, and sparse features. Very few training samples contain ID features of cold start items and can't represent them well, and generic text features play more important role. Finally, we feed the concatenated embedding of user, query and item towers to the Domain Adaptive MTL module. The module has two outputs representing the click through rate (CTR) prediction task and the query-item relevance prediction task. The final loss function is the sum of CTR prediction loss \(\mathcal{L}^{ctr}\), relevance prediction loss \(\mathcal{L}^{sim}\) and domain adaptive regularization \(\mathcal{L}^{reg}\).
#### 2.2.1. User Query and Item Encoding
We extract three towers for user, query and item respectively. For the user tower, \(e_{u}^{ID}\in\mathbb{R}^{D}\) denotes user id embedding. \(e_{u}^{Nit}=[x_{1},...,x_{3},...,x_{Nit}]\) denotes the unified sequence of both search and recommendation clicks in chronological order. Each behavior \(x_{s}\) is encoded as multiple layers of MLPs with inputs of ID feature, sparse feature of behavior type S or R, and other sparse features of attributes, \(x_{s}=Fc(e_{s}^{ID}\in e_{s}^{type}\in e_{s}^{attr})\). For the query (\(Q\)) and item (\(I\)) features, we extract both domain-invariant text features, such as tokens in search query and items' title, and the domain-specific sparse features. The tokens of \(Q\) and \(I\) go through the same tokenizer and we get the tokenized id sequences as integer tensors \(e_{q}^{Token}\) and \(e_{t}^{Token}\). \(e_{q}^{Token}=[e_{q}^{1},e_{q}^{2},...,e_{q}^{N_{q}}]\in\mathbb{R}^{L_{q} \times D}\) denotes the query's tokenized id tensor of length \(L_{q}\), and \(e_{i}^{Token}=[e_{i}^{1},e_{i}^{2},...,e_{i}^{L_{i}}]\in\mathbb{R}^{L_{i} \times D}\) denotes the item's tokenized id tensor of length \(L_{i}\). For ID feature, we also embed the search query as ID feature \(e_{q}^{ID}\in\mathbb{R}^{D}\), and item ID as \(e_{i}^{ID}\in\mathbb{R}^{D}\). For the sparse features, we embed sparse features of \(Q\) as \(e_{q}^{S}\) and sparse features of \(I\) as \(e_{i}^{S}\). Finally, we get the feature groups of query tower as \(e_{q}=[e_{q}^{ID},e_{q}^{Token},e_{q}^{S}]\) and the feature groups of item tower as \(e_{i}=[e_{i}^{ID},e_{i}^{Token},e_{i}^{S}]\).
**LLM as Domain-Invariant Feature Extractor**
We apply the pretrained Large Language Model, such as BERT (
query and item as \(E_{S}(Q),E_{S}(I)\in\mathbb{R}^{H}\). Finally, we get the feature groups of query tower as \([E_{ID}(Q),E_{Im}(Q),E_{S}(Q)]\) and item tower as \([E_{ID}(I),E_{Im}(I),E_{S}(I)]\).
#### 2.2.2. Aspect Gating Fusion
After low level networks \(L_{0}\) (embedding tables) and \(L_{1}\) (feature encoding layers) in Figure 1, we fuse different aspects of query and item as in literature (Kang et al., 2018). Each aspect \(E_{a}\) represents some fine-grained properties of query and item, such as ID, text and sparse features. \(\mathcal{A}\) denotes the set of aspects we extract from query and item. In S&R scenarios, we set \(|\mathcal{A}|=3\) as ID, text and sparse attributes. Final representations are fused as weighted sum of different aspects' representations.
\[E(Q)=\sum_{a}w_{a}(Q)E_{a}(Q),E(I)=\sum_{a}w_{a}(I)E_{a}(I)\quad\forall a\in| \mathcal{A}|\]
The weight vector \(w(Q),w(I)\in\mathbb{R}^{|\mathcal{A}|}\) are outputs of a gating network, and we have different strategies to design the network.
* **Mean-Gating Strategy** Simply mean pooling of different aspects of query and item features as \(w_{a}=\frac{1}{|\mathcal{A}|}\).
* **[CLS]-Gating Strategy** We use randomly initialized embedding \(E_{CLS}(Q)\),\(E_{CLS}(I)\in\mathbb{R}^{H}\) to represent classification token [CLS] of query and item respectively..
* **Domain-Gating Strategy** We design the domain gating strategy from the intuition that the fusion network has different weights when merging different aspects of query and item. To model the differences across domains, we randomly initialize the domain embedding \(E_{D}=[E_{D_{1}},E_{D_{2}},..,E_{D_{K}}]\in\mathbb{R}^{K\times H}\) as the representations of different domains. And the domain-specific gating is calculated as \[w_{a}=\frac{e^{E_{D_{k}}E_{a}}}{\sum_{a\in|\mathcal{A}|}e^{E_{D_{k}}E_{a}}} \in\mathbb{R}^{|\mathcal{A}|}\].
#### 2.2.3. Domain Adaptive Multi-Task Learning
The input to the Domain Adaptive Multi-Task module is the concatenation of representations of user, query and item towers as \(\mathbf{x}=E(U)\oplus E(Q)\oplus E(I)\). For multi-domain setting, a series of multi-task and multi-domain models are proposed, such as SharedBottom(Kang et al., 2018), MMoE(Kang et al., 2018), PLE(Kang et al., 2018), STAR(Kang et al., 2018), SAMD(Kang et al., 2018), etc. These models use shared structures (Experts or MLP layers) to model the similarity among different tasks or domains, and use individual structures to learn the domain-specific properties. The difficulty of training the multi-domain models is the domain shift phenomena. For the k-th domain \(D_{k}\), the marginal distribution of input feature \(p(\mathbf{x}_{k})\) and the conditional distribution of predicting output \(y_{k}\) as \(p(y_{k}|\mathbf{x}_{k})\) has divergence from other domains. The well studied MTL models handle the divergence of conditional distribution. We propose to add a Domain Adaptive Layer to the input features \(\mathbf{x}_{i}\), which maps the inputs from multiple domains to a common vector space. We reuse the randomly initialized domain embedding \(E_{D}=[E_{D_{1}},E_{D_{2}},..,E_{D_{K}}]\in\mathbb{R}^{K\times H}\) in section 2.2.2 and concatenate the domain embedding \(E_{D_{k}}\) to feature vector \(\mathbf{x}_{i}\) of instances from the k-th domain \(D_{k}\), followed by domain-specific linear transformation \(W_{k}\). Suppose \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) denote two instances from different domains in the same training batch, we can get the domain-adaptive representation \(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}\) as
Figure 1. SR Multi-Domain Foundation Model Architecture
\[\hat{\mathbf{x}}_{i}=W_{i}(\mathbf{x}_{i}\oplus E_{D_{i}}),\hat{\mathbf{x}}_{j}=W_{ j}(\mathbf{x}_{j}\oplus E_{D_{j}})\]
We apply domain adaptation (Kang et al., 2017) techniques to constrain the divergence of distributions from domains \(i\) and \(j\) as \(p(\hat{\mathbf{x}}_{i})\) and \(p(\hat{\mathbf{x}}_{j})\) as \(\mathcal{L}^{reg}=\sum_{i,j\in\{1,2,\ldots,K\}}d(p(\hat{\mathbf{x}}_{i})||p( \hat{\mathbf{x}}_{j}))\). In terms of divergence measurement, we compared different metrics such as Jensen-Shannon Divergence (symmetric KL Divergence), Maximum Mean Discrepancy (MMD) (Kang et al., 2017) in the experiment section. And we find the Jensen-Shannon Divergence achieves the best performance as
\[\mathcal{L}^{reg}=\sum_{i,j\in\{1,2,\ldots,K\}}\mathrm{JS}(p(\hat{\mathbf{x} }_{i})||p(\hat{\mathbf{x}}_{j}))\]
Finally, on top of the Domain Adaptive Layer we stack the standard Multi-Task module, such as MMoE to extract outputs and predict two objectives, CTR prediction \(y^{ctr}\) and query-item relevance prediction \(y^{sim}\).
**CTR Prediction** Click-Through Rate (CTR) Prediction is a common task in both search and recommendation scenarios. We apply a unified scoring function \(y^{ctr}_{l}=f_{0}(u_{l},q_{l},i_{l})\) in the S&R foundation framework to predict CTR with the triple inputs of user, item and query as \((u,q,i)\). For search tasks, users have explicit search query \(q\). And for recommendation tasks users don't have explicit intentions. So we set \(q=\emptyset\) as the default embedding in the unified scoring function.
**Query-Item Relevance Prediction** Query-Item Relevance Prediction is a common task in search scenarios, which predicts the relevance score of query-item pair of \((q,i)\) and train a function \(y^{sim}_{l}=f_{0}(q_{l},i_{l})\) to represent query-item pair's relevance score. The relevance prediction is usually a classification task.
#### 2.2.4. Loss of S&R Foundation model
We train the S&R foundation model in multi-domain multi-task settings, using datasets from \(K\) domains. Each domain calculates either or both of two objectives of CTR prediction \(y^{ctr}_{l}=f_{0}(u_{l},q_{l},i_{l})\) and relevance prediction \(y^{sim}_{l}=f_{0}(q_{l},i_{l})\), depending on whether the task is search or recommendation. The final objective function consists of three parts, the loss for CTR prediction \(\mathcal{L}^{ctr}\), the loss for relevance prediction \(\mathcal{L}^{sim}\), and the loss for domain adaptive regularizer \(\mathcal{L}^{reg}\).
\[\mathcal{L}=\mathcal{L}^{ctr}+\mathcal{L}^{sim}+\mathcal{L}^{reg}\]
\[\mathcal{L}^{ctr}=\sum_{k\in K}\sum_{l\in N^{ctr}_{k}}\mathcal{L}_{ce}(f_{0} (u_{l},q_{l},i_{l});y^{ctr}_{l})\]
\[\mathcal{L}^{sim}=\sum_{k\in K}\sum_{l\in N^{sim}_{l}}\mathcal{L}_{ce}(f_{0} (q_{l},i_{l});y^{sim}_{l})\]
### Supervised Fine-Tuning Downstream Tasks
The pretrained S&R foundation model can benefit downstream tasks in the pretrain-finetune manner. The downstream model restores parameters from the foundation model, freezes part of the parameters and finetunes the remaining layers. We experiment different ways of freeze-finetune split. Firstly, the freeze-finetune split is between level \(L_{0}\) and \(L_{1}\) as in Figure 1. The pretrained embedding in level \(L_{0}\) is freezed and the remaining layers from \(L_{1}\) to \(L_{n}\) are finetuned. Secondly, the freeze-finetune split is between level \(L_{1}\) and \(L_{2}\). The embedding in level \(L_{0}\) as well as the parameters of encoding layers in level \(L_{1}\) are freezed, and the parameters from level \(L_{2}\) to \(L_{n}\) are finetuned. Given dataset of new downstream task \(D^{*}=\{(u^{*}_{l},q^{*}_{l},i^{*}_{l}),y^{*}_{l}\}\), the domain embedding \(E_{D^{*}}\in\mathbb{R}^{H}\) is randomly initialized and finetuned. In the experiment section, we thoroughly tested the performance of different ways of freeze-finetune split. We also compared the performance of pretrain-finetuning S&R Foundation model \(M^{S\&R}_{Foundation}\) with the performance of training single domain model without transfer learning.
## 3. Experiment
To test the effectiveness of our proposed S&R Multi-Domain Foundation model, we want to answer the following questions:
* **RQ1:** Whether our joint S&R Multi-Domain Foundation model can achieve SOTA performance compared to other multi-domain and multi-task models?
* **RQ2:** In terms of query and item towers' representations, what's the performance of the domain-invariant text features extracted by LLM and Aspect Gating Fusion network compared to other methods?
* **RQ3:** Whether S&R Multi-Domain Foundation and Supervised Finetuning can help benefit cold start scenarios?
### Experimental Settings
#### 3.1.1. Dataset
We conducted extensive experiments of S&R Foundation model on real-world datasets, including 7 industrial datasets of Alipay Search Ranking and Query Recommendation. The statistics are summarized in table 1. \(S\) denotes the search dataset, in which users have explicit search query, such as Query-Item Relevance Prediction, Content Search Ranking, etc. And \(R\) denotes the recommendation dataset, in which users don't have explicit intent of search query. There are also some tasks between Search and Recommendation, which we classify as S/R, such as Query Suggest CTR Prediction, in which users have explicit query, and at the same time the task is a CTR prediction task to make recommendation of query suggestions to users.
#### 3.1.2. Comparison Methods
**S&R Foundation Model** We compared our proposed S&R Multi-Domain Foundation model with SOTA multi-domain and multi-task models, such as Shared Bottom MTL (Kang et al., 2017), Multi-Gate Mixture of Experts (MMoE) (Kang et al., 2017), PLE (Kang et al., 2017), etc. For ablation study, we designed separate experiments to evaluate different modules of the framework, including the User-Query-Item encoding module, Aspect Gating Fusion module and Domain Adaptive Multi Task module. The experiment of S&R Multi-Domain Foundation (MLP) denotes the concatenated user-query-item representations are followed by multiple MLP layers. And the experiment of S&R Multi-Domain Foundation-MMoE-DA-JS denotes the representations are followed by a Domain Adaptive Layer (JS-Divergence) and MMoE multi-task module.
**Domain-Invariant Text Features and Aspect Gating Fusion** To prove the effectiveness of adding domain-invariant text features in S&R Foundation model, we have conducted experiments and ablation studies on different query and item token encoding methods on Alipay Content Query Recommendation dataset of tasks 4 in table 1. In the baseline method, we intentionally leave out the token-embedding of text features and only use ID and sparse features. We also compared randomly initialized token embedding with
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline ID & Dataset & S/R & Train & Eval & Test & \#Query & \#Item \\ \hline Task 1 & Query-Item Relevance Prediction & S & 76.2M & 12.7M & 12.7M & 40K & 40K \\ \hline Task 2 & Query Suggest CTR Prediction & S/R & 145.4M & 23.5M & 23.5M & 0.84M & 0.16M \\ \hline Task 3 & Background Word Query Recommendation CTR Prediction & R & 146.2M & 24.3M & 24.3M & - & 65K \\ \hline Task 4 & Content Query Recommendation CTR Prediction & R & 0.76M & 0.09M & 0.09M & - & 4.6K \\ \hline Task 5 & People Also Ask DeepSuggest & S/R & 2.4M & 0.38M & 0.38M & 0.41M & 25K \\ \hline Task 6 & Service Card Recommendation & S/R & 1.01M & 0.17M & 0.17M & 1.3K & 1.6K \\ \hline Task 7 & Content Search Ranking & S & 6.13M & 1.03M & 1.03M & 0.27M & 0.14M \\ \hline \end{tabular}
\end{table}
Table 1. Statistics of Alipay Search Ranking and Query Recommendation Datasets.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & Task1 & Task2 & Task3 & Task4 & Task5 & Task6 & Task7 \\ \hline S\&R Multi-Domain Shared Bottom [3] & 0.6483 & 0.8993 & 0.7829 & 0.6575 & 0.8511 & 0.8015 & 0.8561 \\ \hline S\&R Multi-Domain MMoE [15] & 0.6482 & \({}^{\prime}\)0.9003 & 0.7812 & 0.6650 & 0.8463 & 0.7942 & 0.8599 \\ \hline S\&R Multi-Domain PLE [19] & \({}^{\prime}\)0.7006 & 0.8981 & 0.7815 & 0.6682 & 0.8487 & 0.7978 & 0.8620 \\ \hline S\&R Multi-Domain Foundation (MLP) & 0.6827 & 0.8974 & 0.7784 & 0.6683 & 0.8462 & 0.7926 & 0.8629 \\ \hline S\&R Multi-Domain Foundation-MMoE-DA-MMD & 0.6874 & 0.8942 & \({}^{\prime}\)0.7971 & 0.6793 & 0.8564 & 0.8203 & 0.8569 \\ \hline S\&R Multi-Domain Foundation-MMoE-DA-JS & 0.6942 & 0.8973 & 0.7912 & \({}^{\prime}\)0.6979 & \({}^{\prime}\)0.8703 & \({}^{\prime}\)0.8312 & \({}^{\prime}\)0.8692 \\ \hline Absolute Improvement & +0.0459 & -0.0020 & +0.0083 & +0.0404 & +0.0192 & +0.0297 & +0.0131 \\ \hline \end{tabular}
\end{table}
Table 2. Performance of S\&R Multi-Domain Foundation Model.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Method & AUC & Absolute Gain \\ \hline Mean-Pooling & 0.7385 & - \\ \hline
[CLS]-Gating & 0.7515 & +0.0130 \\ \hline Domain-Gating & \({}^{\prime}\)0.7524 & +0.0139 \\ \hline \end{tabular}
\end{table}
Table 4. Comparison of Aspect Gating Fusion on Task 4.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline ID & Token Embedding & Query/Item Encoder & Finetune & AUC \\ \hline
1 & Baseline: Without Token Emb & - & - & 0.7524 \\ \hline
2 & Randomly Initialized & Mean Pooling & - & 0.7551 \\ \hline
3 & Randomly Initialized & Transfomer(L=1) & True & 0.7544 \\ \hline
4 & Randomly Initialized & Transfomer(L=6) & True & 0.7559 \\ \hline
5 & SR Foundation (LM=Transformer) & Transfomer(L=1) & \(L_{0},L_{1}\);True & 0.7562 \\ \hline
6 & SR Foundation (LM=Transformer) & Transfomer(L=1) & \(L_{0}\);False,\(L_{1}\);True & 0.7531 \\ \hline
7 & SR Foundation (LM=Transformer) & Transfomer(L=1) & \(L_{0},L_{1}\);False & 0.7574 \\ \hline
8 & SR Foundation (LM=BERT) & BERT BASE(L=12) & True & 0.7563 \\ \hline
9 & SR Foundation (LM=BERT) & BERT BASE(L=12) & False & \({}^{\prime}\)0.7580 \\ \hline
10 & SR Foundation (LM=ChatGLM 6B) & ChatGLM 6B Pretrained LLM [8; 25] & False & 0.7518 \\ \hline
11 & SR Foundation (LM=ChatGLM 6B) & ChatGLM 6B Pretrained LLM [8; 25] + prompt & False & 0.7503 \\ \hline
12 & SR Foundation (LM=ChatGLM2 6B) & ChatGLM2 Pretrained LLM [8; 25] & False & 0.7502 \\ \hline & Absolute Improvement & - & - & +0.0056 \\ \hline & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table 3. Comparison of Query and Item Token Encoding Methods after Fine-tuning Task 4.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Service Card Rec & Content Query Rec \\ \hline Single Domain & 0.8229 & 0.7295 \\ \hline SR Fdt\(\rightarrow\)Finetune & 0.8446 & 0.7574 \\ \hline Absolute Improvement & +0.0216 & +0.0279 \\ \hline \end{tabular}
\end{table}
Table 5. Comparison of Cold Start Scenarios Task 4 and 6.
Figure 2. Visualization of SR Foundation Model’s Domain-Adaptive Layers
embedding restored from the pretrained S&R foundation model under different configurations. For the encoders, we compared mean pooling, randomly initialized Transformer, BERT, ChatGLM-6B[(8; 25)], and ChatGLM+prompt, etc. In methods 10-12, we adopt ChatGLM-6B and ChatGLM2-6B to encode the text features of query and items. The implementation details are: we utilized the encoders of ChatGLM-6B and ChatGLM2-6B to convert the input text features and corresponding prompts into 4096-dimensional vectors, which are followed by 2 MLP dense layers and further reduced to 32-dimensional vectors. The prompt function used in our approach is defined as \(f_{prompt}(X)=\text{``Extract keywords from sentence [X]"}\). To compare different Aspect Gating Fusion methods, e.g. Mean-Pooling, [CLS]-Gating, Domain-Gating, we conducted ablation studies and the results are listed in table 4.
**Supervised Fine-Tuning on Cold Start Scenarios** For cold start scenarios, we compared the performance of supervised fine-tuning (SFT) the foundation model using downstream dataset, with the method of training single domain model on several tasks, including Service Card Recommendation (task 6) and Content Query Recommendation (task 4) as in table 5.
### Experimental Results
#### 3.2.1. S&R Multi-Domain Foundation model
To compare the performance of different multi-domain models, we report AUC performance on 7 search and recommendation datasets in table 2. All the experimented models share same input features, User-Query-Item Encoding module, and Domain-Gating as Aspect Gating Fusion strategy. The baseline for the multi-task learning (MTL) module is the shared bottom model. MMoE-DA-MMD and MMoE-DA-JS represent models that utilize Maximum Mean Discrepancy (MMD) and Jensen-Shannon Divergence (JS-Divergence) to constrain the distributions of domain adaptive layers respectively. The asterisk (\({}^{*}\)) denotes the best performance achieved in each task, and the absolute improvement represents the absolute improvement of MMoE-DA-JS method compared to baseline. MMoE-DA-JS achieved best performance on 4 tasks: 4, 5, 6, 7 with AUC improvement of +0.0404, +0.0192, +0.0297, +0.0131 respectively. The domain-adaptive layer constrains the embedding representations from different domains in the common vector space. The t-SNE visualization of S&R Foundation model's domain-adaptive layers is depicted in Figure 2. The embedding depicted in the first subplot "S&R Foundation MLP" is scattered, and the embedding in the third subplot "S&R Foundation-MMoE-DA-JS" is coherently aligned.
#### 3.2.2. Domain-Invariant Text Features and Aspect Gating Fusion
We report the performance of different methods to encode domain-invariant text features and freeze-finetune split in table 3 on task 4 Content Query Recommendation. Our proposed method of restoring pretrained parameters from BERT BASE (12 layers Transformer) in S&R Foundation, freezing the parameters of the encoder and finetuning the remaining networks achieves the best AUC performance 0.7580, which is 0.0056 absolute gain over baseline model. Comparing different freeze-finetune split (methods 5-7), we can see that freezing pretrained parameters in level \(L_{0}\) and \(L_{1}\) (method 7) achieves better performance than other split methods (method 5/6), which is 0.0043 absolute gain in AUC. As for the ablation studies of Aspect Gating Fusion in table 4, the baseline is to simply mean pooling three aspects: ID, text and sparse features. We can see the Domain-Gating achieves best AUC performance 0.7524, which is 0.0139 absolute gain over mean-pooling method.
#### 3.2.3. Supervised Finetuning in Cold Start Scenarios
To prove the effectiveness of finetuning our pretrained S&R Foundation model, we compared cold start performance of two scenarios, Service Card Recommendation (task 6) and Content Query Recommendation (task 4). They are new scenarios and we only collected a few samples in a short period of time. The samples are splitted as we leave out last one day's collected data for testing, and use the remaining data for fine-tuning the S&R Foundation. We also train the single domain model as the baseline. From table 5, we can see the fine-tuned S&R Foundation model achieves +0.0216 AUC improvement over single domain model on task 6 and +0.0279 AUC improvement on task 4.
#### 3.2.4. Online AB Testing
To further prove the effectiveness of online performance in cold start scenario, we deployed the fine-tuned S&R Foundation model online in Service Card Recommendation scenario, and compared with baseline, which is the single domain DNN model. The results of the AB Testing from day 1 to day 7 are depicted in Figure 3. The key performance measurement of the cold start scenario is PVCTR (Page View Click Through Rate). And we observed that the fine-tuned S&R Foundation model achieved +17.54% relative gain in PVCTR over baseline. The online AB Testing results showed that our method achieved better performance than baseline consistently in cold start scenario.
## 4. Conclusion
In this paper, we study the problem of training search and recommendation tasks jointly as the S&R Multi-Domain Foundation model, and use domain adaptation techniques to benefit cold start scenario. Our proposed model learns user, query and item representations, applies LLM to encode domain invariant text features and Aspect Gating Fusion to merge ID, text and sparse features. We also conducted extensive experiments on finetuning the foundation models in cold start scenarios, which achieves better performance than the single domain model. The fine-tuned S&R Multi-Domain Foundation model has been successfully deployed online in Alipay's multiple search and recommendation scenarios.
Figure 3. Online AB Testing PVCTR Performance of Service Card Recommendation. |
2310.20126 | The optically thick rotating magnetic wind from a massive white dwarf
merger product -- II. axisymmetric magnetohydrodynamic simulations | We numerically construct a series of axisymmetric rotating magnetic wind
solutions, aiming at exploring the observation properties of massive white
dwarf (WD) merger remnants with a strong magnetic field, a fast spin, and an
intense mass loss, as inferred for WD J005311. We investigate the
magnetospheric structure and the resultant spin-down torque exerted to the
merger remnant with respect to the surface magnetic flux $\Phi_*$, spin angular
frequency $\Omega_*$ and the mass loss rate $\dot M$. We confirm that the wind
properties for $\sigma \equiv \Phi^2_* \Omega_*^2/\dot M v_\mathrm{esc}^3
\gtrsim 1$ significantly deviate from those of the spherical Parker wind, where
$v_\mathrm{esc}$ is the escape velocity at stellar surface. For such a rotating
magnetic wind sequence, we find: (i) quasi-periodic mass eruption triggered by
magnetic reconnection along with the equatorial plane (ii) a scaling relation
for the spin-down torque $T \approx (1/2) \times \dot{M} \Omega_* R^2_*
\sigma^{1/4}$. We apply our results to discuss the spin-down evolution and wind
anisotropy of massive WD merger remnants, the latter of which could be probed
by a successive observation of WD J005311 using Chandra. | Yici Zhong, Kazumi Kashiyama, Shinsuke Takasao, Toshikazu Shigeyama, Kotaro Fujisawa | 2023-10-31T02:04:40Z | http://arxiv.org/abs/2310.20126v1 | The optically thick rotating magnetic wind from a massive white dwarf merger product - II. axisymmetric magnetohydrodynamic simulations
###### Abstract
We numerically construct a series of axisymmetric rotating magnetic wind solutions, aiming at exploring the observation properties of massive white dwarf (WD) merger remnants with a strong magnetic field, a fast spin, and an intense mass loss, as inferred for WD J005311. We investigate the magnetospheric structure and the resultant spin-down torque exerted to the merger remnant with respect to the surface magnetic flux \(\Phi_{*}\), spin angular frequency \(\Omega_{*}\) and the mass loss rate \(\dot{M}\). We confirm that the wind properties for \(\sigma\equiv\Phi_{*}^{2}\Omega_{*}^{2}/\dot{M}v_{\rm esc}^{3}\gtrsim 1\) significantly deviate from those of the spherical Parker wind, where \(v_{\rm esc}\) is the escape velocity at stellar surface. For such a rotating magnetic wind sequence, we find: (i) quasi-periodic mass eruption triggered by magnetic reconnection along with the equatorial plane (ii) a scaling relation for the spin-down torque \(T\approx(1/2)\times\dot{M}\Omega_{*}R_{*}^{2}\sigma^{1/4}\). We apply our results to discuss the spin-down evolution and wind anisotropy of massive WD merger remnants, the latter of which could be probed by a successive observation of WD J005311 using _Chandra_.
white dwarfs -- stars: winds, outflows -- stars: rotation 0000-0002-4000-0000]Yici Zhong
0000-0002-3881-8888]Kazumi Kashiyama
0000-0002-4881-7088]Shinsuke Takasao
0000-0002-0001-8888]Toshikazu Shigeyama
0000-0002-0001-8888]Kotaro Fujisawa
## 1 Introduction
Consequences of a merger of massive white dwarfs (WDs) are of great astrophysical importance. It may explode as a Type Ia supernova in particular when the binary constitutes of carbon-oxygen WDs with a total mass exceeding the Chandrasekhar limit (Webbink, 1984; Iben & Tutukov, 1984). Instead, if a super-Chandrasekhar oxygen-neon core is synthesized after the merger, it may collapse into a neutron star (NS) (Nomoto & Iben, 1985; Saio & Nomoto, 2004). Such a merger induced collapse has gotten attention as a scenario for the formation of peculiar type of neutron stars, e.g., sources of fast radio bursts (e.g., Kashiyama & Murase, 2017; Kremer et al., 2021; Kirsten et al., 2022; Lu et al., 2022).
If not explode nor collapse, the merger product will be a rapidly rotating and strongly magnetized WD (e.g., Tout et al., 2008; Briggs et al., 2015). They would constitue a good fraction, say \(\sim 20\) %, of the Galactic massive WDs with a mass of \(M_{*}\gtrsim 1\,M_{\odot}\)(e.g., Garcia-Berro et al., 2012; Cheng et al., 2020; Schwab, 2021). Thanks to rather complete photometric searches and spectroscopic followups, increasing amount of merged WD candidates have been identified, e.g., ZTF J190132.9+145808.7 with \(M_{*}=(1.327\)-\(1.365)M_{\odot}\), \(P=6.97\,{\rm min}\) and \(B_{*}=(6\)-\(9)\times 10^{8}\,{\rm G}\)(Caiazzo et al., 2021) and SDSS J221141.80+113604.5 with \(M_{*}=1.268\,M_{\odot}\), \(P=76\,{\rm sec}\) and \(B_{*}=1.5\times 10^{7}\,{\rm G}\)(Kilic et al., 2021),
where \(P\) and \(B_{*}\) denote the spin period and the strength of the surface magnetic field at the pole. Their post-merger ages have been estimated as \(\sim 10\,\mathrm{Myr}\) and \(\sim 100\,\mathrm{Myr}\), respctively, from their positions on the cooling track.
Recently, a candidate for a significantly younger merger product, WD J005311, was fortuitously discovered within an infrared nebula (Gvaramadze et al., 2019). The most remarkable characteristic of this WD is unveiled through optical spectroscopy, revealing an optically-thick wind emanating from it. This wind is enriched with carbon burning ashes and exhibits a remarkable velocity of \(v_{\infty}=16,000\pm 1,000\,\mathrm{km\ s^{-1}}\), accompanied by a mass loss rate of \(\dot{M}=(3.5\pm 0.6)\times 10^{-6}M_{\odot}\,\mathrm{yr^{-1}}\). While the direct measurement of the central WD's physical properties remains elusive, the presence of such a fast and intense wind strongly suggests that it is a rapidly rotating and strongly magnetized WD, potentially possessing a super- or near-Chandrasekhar mass (Gvaramadze et al., 2019; Kashiyama et al., 2019).
The mass and composition loaded on the WD J005311 wind is likely from the near-surface carbon burning. The launch of such a wind can be triggered by the Kelvin-Helmholtz contraction of the oxygen neon core of the merged WD, that can happen \(\sim 1,000\)-\(10,000\,\mathrm{yr}\) after the merger (Schwab et al., 2016; Yao et al., 2023; Wu et al., 2023). The timing can be consistent with the post-merger age of the system estimated based on both the expansion velocity of the surrounding nebula and the ancient records on a historical Galactic SN, SN1181, which happened in the direction of WD J005311 \(\sim 850\,\mathrm{yr}\) ago and is likely associated with the merger of the progenitor binary (Ritter et al., 2021; Lykou et al., 2022; Ko et al., 2023).
On the other hand, the expansion velocity of the wind observed in WD J005311 significantly surpasses the escape velocity of a WD with a typical mass. This suggests that the wind is either thermally driven, originating from a super- or near-Chandrasekhar mass WD, or magnetically driven due to the rapid rotation and strong magnetic field of the WD. In the former case, the wind velocity will be (Parker, 1965):
\[v_{\mathrm{T}}\approx\sqrt{\frac{2GM_{*}}{R_{*}}}\sim 20,000\,\mathrm{km\,s^{- 1}}\,\left(\frac{M_{*}}{1.4\,M_{\odot}}\right)^{1/2}\left(\frac{R_{*}}{1,000 \,\mathrm{km}}\right)^{-1/2}, \tag{1}\]
while in the latter case, the maximum wind velocity along the equatorial plane is (Weber and Davis, 1967; Michel, 1969):
\[v_{\mathrm{M,max}}\approx\left(\frac{B_{*}^{2}R_{*}^{4}\Omega_{*}^{2}}{\dot{ M}}\right)^{1/3}\sim 13,000\,\mathrm{km\,s^{-1}}\left(\frac{B_{*}}{2\times 10^{7} \,\mathrm{G}}\right)^{2/3}\left(\frac{R_{*}}{4,000\,\mathrm{km}}\right)^{4/3} \left(\frac{\Omega_{*}}{0.2\,\mathrm{s^{-1}}}\right)^{2/3}\left(\frac{\dot{M} }{3\times 10^{-6}\,M_{\odot}\,\mathrm{yr^{-1}}}\right)^{-1/3}. \tag{2}\]
The wind is so fast that it catches up and clashes into the surrounding supernova ejecta, forming a wind termination shock, which is observed as an inner X-ray nebula (Oskinova et al., 2020; Ko et al., 2023). The X-ray nebula is still in its infancy; given the observed angular size, it is only a few tens of years old (Ko et al., 2023). Subsequent observations may reveal the time variability and anisotropy of the wind, which is generally expected for a rotating magnetic wind but has not been explored in this context. These properties of the wind can also be linked to the mass-loss and spin-down rates of the central WD, which are important in determining the fate of the central WD: whether it eventually collapses into a neutron star, and if so, how rapidly rotating and strongly magnetized the neutron star would be.
Here we model a system like WD J005311 by numerically constructing a 2D axisymmetric wind solution driven by rotating dipole, with implementing a wind launching region that mimics the near-surface carbon burning region. We investigate the wind structure together with its time evolution (i.e., how the mass, energy and angular momentum loss rate from the system evolves with time), and the scaling of the spin-down torque with respect to system parameters such as surface magnetic field, rotation frequency and mass loss rate. This paper is organized as follows. We introduce our setup in Sec. 2, including numerical details. In Sec. 3, we show our results on wind structure, time evolution and scaling of spin-down torque. Finally, we discuss several implications and applications on observational results in Sec. 4.
## 2 Setup
We conduct a series of numerical simulations of a rotating magnetic wind from a massive WD merger product with a stable nuclear burning occurring at the near surface region. We first describe the general numerical setup including the governing equations, the Riemann solver, the mesh decomposition, and the boundary conditions in Sec. 2.1. We then describe the source term that represents the injection of mass and internal energy at the near surface nuclear burning region in Sec. 2.2. Finally, we elaborate on setups related to magnetic fields.
We numerically integrate ideal MHD equations with central gravity;
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=S_{\rho}, \tag{3}\]
\[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot\mathbf{T}=-\rho\nabla\phi, \tag{4}\]
\[\frac{\partial\varepsilon_{\mathrm{tot}}}{\partial t}+\nabla\cdot\mathbf{s}=S_ {\mathrm{e}}-\rho(\nabla\phi\cdot\mathbf{v}), \tag{5}\]
\[\frac{\partial\mathbf{B}}{\partial t}-\nabla\times(\mathbf{v}\times\mathbf{B} )=0, \tag{6}\]
in the two dimensional spherical coordinate using Athena++1(Stone et al., 2020). Here \(S_{\rho}\) and \(S_{\mathrm{e}}\) are source terms that we use to mimic the matter and energy injection into the computational domain, which will be described in detail in Sec. 2.2; the velocity vector \(\mathbf{v}\), magnetic field \(\mathbf{B}\), stress tensor \(\mathbf{T}\), total energy density \(\varepsilon_{\mathrm{tot}}\), energy flux \(\mathbf{s}\) are given as
Footnote 1: [https://github.com/PrincetonUniversity/athena](https://github.com/PrincetonUniversity/athena)
\[\mathbf{v}=(v_{r},v_{\theta},v_{\varphi})\left(\begin{array}{c}\hat{\mathbf{r}} \\ \hat{\mathbf{\theta}}\\ \hat{\mathbf{\varphi}}\end{array}\right) \tag{7}\]
\[\mathbf{B}=(B_{r},B_{\theta},B_{\varphi})\left(\begin{array}{c}\hat{\mathbf{r}} \\ \hat{\mathbf{\theta}}\\ \hat{\mathbf{\varphi}}\end{array}\right) \tag{8}\]
\[\mathbf{T}=\rho\mathbf{v}\mathbf{v}+\left(p+\frac{|\mathbf{B}|^{2}}{8\pi} \right)\mathbf{I}-\frac{\mathbf{B}\mathbf{B}}{4\pi}, \tag{9}\]
\[\varepsilon_{\mathrm{tot}}=\frac{\rho|\mathbf{v}|^{2}}{2}+\frac{|\mathbf{B}|^{ 2}}{8\pi}+\frac{p}{\gamma-1}, \tag{10}\]
\[\mathbf{s}=\left(\frac{1}{2}\rho|\mathbf{v}|^{2}+\frac{\gamma}{\gamma-1}p+ \frac{|\mathbf{B}|^{2}}{4\pi}\right)\mathbf{v}-\frac{\mathbf{B}(\mathbf{v} \cdot\mathbf{B})}{4\pi}, \tag{11}\]
where \(\rho\) is the density, \(p\) is the pressure, \(\mathbf{I}\) is the identity dyadic tensor, and \(\phi=-GM_{*}/r\) is the gravitational potential, where \(G\) is the gravitational constant, \(M_{*}\) is the mass of the central WD. To close Eqs.(3)-(6), we use the adiabatic equation of state with an index of \(\gamma=4/3\). The above ideal MHD equations are scale-free; we use a unit of \(G=M_{*}=R_{*}=1\) for the numerical calculations, where \(R_{*}\) is the radius of the WD. When estimating quantities in a physical unit, we transform to the cgs unit with setting \(M_{*}=1\,M_{\odot}\) and radius \(R_{*}=0.009\,\,\,R_{\odot}\). We note that this is consistent with the mass-radius relation of degenerate oxygen neon cores with an angular frequency of \(\Omega_{*}\lesssim 0.5\,\mathrm{s}^{-1}\)(Kashiyama et al., 2019).
We use the HLLD approximate Riemann solver for the MHD equations (Miyoshi and Kusano, 2005) with the second-order piecewise linear reconstruction method (PLM). The time integration is carried out by the second-order Runge-Kutta method with Courant-Friedrich-Lewy number of 0.1. The computational domain is resolved with the mesh number of 128 for \([0.9,30]\)\(R_{*}\) in the radial direction and 128 for \([0,\pi]\) in the polar direction. We employ a non-uniform mesh in the radial direction, where the radial grid size is proportional to the radius. The fiducial value of the grid size ratio \(\Delta r(i+1)/\Delta r(i)\) is 1.02 so that the smallest cell size is 0.05, where \(i\) stands for the grid index.
At the outer boundary of the computational domain, we impose the zero-gradient boundary condition for the radial direction and connect the domain across the axes for the polar direction. On the other hand, we impose the zero gradient boundary condition for the inner boundary (\(r=r_{\mathrm{in}}=0.9R_{*}\)), and set the velocity to be compatible with the rigid rotation of the central WD;
\[v_{r,\mathrm{in}}=v_{\theta,\mathrm{in}}=0,\,\,\,\,v_{\varphi,\mathrm{in}}= \Omega_{*}r_{\mathrm{in}}\sin\theta. \tag{12}\]
In this paper, we consider the cases with \(\Omega_{*}=[0.05,0.07,0.12,0.16,0.23,0.35,0.46]\,\mathrm{s^{-1}}\), which correpsonds to \(\sim\) 5-50 % of the mass shedding limit. In terms of the inner ghost cell's density and pressure, we carefully prescribe their values to achieve a specific thermally-driven wind mass loss rate (see Sec. 2.2). Initially, we distribute a cold and homogeneous gas throughout the entire computational domain and inject the thermally-driven wind from the designated launching region. As the thermally-driven outflow reaches the outer boundary, we initiate an aligned dipole field at the inner boundary, facilitating the transformation of the wind into a rotating magnetic wind (see Sec. 2.3).
### Wind launching region
We initialize our simulation with a cold, homogeneous, isotropic and non-magnetized atmosphere, and set up a "wind launching region" 2 with a width of \(\mathcal{D}\) near the WD surface, where the mass is injected to the computational domain to mimic the mass loading due to the carbon burning around the surface of massive WD merger product. To do that, we implement an isotropic relaxation function for both matter and energy source terms to update density and pressure in wind launching region:
Footnote 2: Note that this is originally called damping layer in the context of accreting stellar system (see Takasao et al., 2019)
\[S_{\rho}=\frac{\rho_{*}-\rho}{\tau},\quad(r_{\mathrm{in}}\leq r\leq\mathcal{D}), \tag{13}\]
\[S_{\mathrm{e}}=\frac{p_{*}-p}{\tau},\quad(r_{\mathrm{in}}\leq r\leq\mathcal{D}), \tag{14}\]
where \(\rho_{*}\) and \(p_{*}\) correspond to the density and pressure at the outer edge of the wind launching region. The actual value of the relaxation timescale \(\tau\) is chosen to satisfy the condition,
\[\tau\lesssim\frac{\mathcal{D}}{\max{(v_{*,*},v_{\mathrm{A,*}})}}, \tag{15}\]
where \(v_{*,*}\equiv\sqrt{\gamma p_{*}/\rho_{*}}\) is the adiabatic sound velocity and \(v_{\mathrm{A,*}}\equiv\sqrt{B_{*}^{2}/4\pi\rho_{*}}\) is the Alfven velocity with \(B_{*}\) being the surface magnetic field strength at the equator (see Sec. 2.3). This condition is needed to stably inject mass to the computational domain by suppressing fluctuations associated with hydrodynamic and/or MHD waves in the wind launching region. The above source terms can self-consistently produce a thermal pressure-driven wind with \(\rho\propto r^{-2}\), \(v_{r}\approx v_{\mathrm{esc}}\) and a stable mass loss in the steady state, where \(v_{\mathrm{esc}}\) is the surface escape velocity. In this paper, we set the width of the wind launching region as \(\mathcal{D}=0.6,R_{*}\) as our fiducial value and check the convergence of our results with respect to the value of \(\mathcal{D}\). We use a fixed value of \(\tau\), with which Eq. (15) is satisfied for the most strongly magnetized case. Then we set \(\rho_{*}\) and \(p_{*}\) so that the mass loss rate by the thermal pressure-driven wind becomes \(\dot{M}=10^{-6}\ M_{\odot}\) yr\({}^{-1}\).
### Rotating magnetic wind
After the thermal pressure-driven wind settles down, we turn on a dipole magnetic field that is embedded on the rotating stellar surface, with magnetic moment \(\boldsymbol{\mu}\equiv B_{*}R_{*}^{3}\hat{\boldsymbol{z}}\) aligned with the rotation axis. We use the following vector potential
\[\begin{split}\mathbf{A}(\mathbf{r})&=\left(\frac{ \boldsymbol{\mu}\times\mathbf{r}}{r^{3}}\right)_{d}\\ &=\left(0,0,\frac{B_{*}R_{*}^{3}\sin\theta}{r^{2}}\right)\left( \begin{array}{c}\hat{\boldsymbol{r}}\\ \hat{\boldsymbol{\theta}}\\ \hat{\boldsymbol{\varphi}}\end{array}\right)\end{split} \tag{16}\]
to ensure that the divergence of magnetic field vanishes. We consider the cases with \(B_{*}=[1.5\times 10^{6},2.3\times 10^{6},2.6\times 10^{6},3.0\times 10^{6}]\) G, for which \(\beta_{*}\equiv v_{\mathrm{s,*}}/v_{\mathrm{A,*}}=10^{-(2-3)}\ll 1\) so that the magnetic pressure dominates in the near surface region. In order to numerically solve the MHD equations in such a low plasma beta gas, we implement the dual energy formalism (see Appendix A).
The launched gas will then corotate with the rotating magnetic field, and the magnetic torque, which depends on the resultant magnetospheric structure and the polar angle, can also contribute to the wind acceleration in addition to
the thermal pressure gradient. As a result, we expect a rotating magnetic wind to start blowing in an angle dependent manner, and relax to a quasi-steady state when it reaches to the outer boundary. We simulate the rotating magnetic wind for a few 10 \(\times\) the spin period after turning on the magnetic field.
## 3 Result
Table. 3 shows a summary of our simulation. A model Bxdy corresponds to the case with \(B_{*}=x\) and \(\Omega_{*}=y\) in the cgs unit. When the rotating magnetic wind becomes quasi-steady, it can be characterized by the mass loss rate
\[\dot{M}=2\pi\int_{0}^{\pi}\rho v_{r}r^{2}\sin\theta d\theta, \tag{17}\]
wind luminosity
\[L=2\pi\int_{0}^{\pi}\rho v_{r}r^{2}\left[\frac{1}{2}v^{2}+\frac{\gamma P}{\rho (\gamma-1)}-\frac{\Omega_{*}r\sin\theta B_{r}B_{\phi}}{4\pi\rho v_{r}}\right] \sin\theta d\theta, \tag{18}\]
and spindown torque
\[T=2\pi\int_{0}^{\pi}\rho v_{r}r^{2}\left(rv_{\phi}-\frac{rB_{r}B_{\phi}}{4\pi \rho v_{r}}\right)\sin\theta d\theta \tag{19}\]
estimated at the outer boundary. As we show later, the strength of rotating magnetic winds can be characterized by a dimensionless parameter
\[\sigma\equiv\frac{\Phi_{*}^{2}\Omega_{*}^{2}}{\dot{M}v_{\rm esc}^{3}}, \tag{20}\]
where \(\Phi_{*}\equiv 2\pi\int_{0}^{\pi/2}B_{r}r^{2}\sin\theta d\theta|_{r=R_{*}}\) is the half hemisphere magnetic flux and \(v_{\rm esc}=\sqrt{2GM_{*}/R_{*}}\) is the escape velocity at the WD surface 3. With using \(\sigma\), the Michel velocity (Eq. 1) can be described as \(v_{\rm M,max}\approx\sigma^{1/3}v_{\rm esc}\). Our simulations cover the range of \(1\lesssim\sigma\lesssim 500\).
Footnote 3: In relativistic MHD regime, speed of light \(c\) is conventionally used as the characteristic speed of the system (e.g., see the definition of \(\sigma_{0}\) in Bucciantini et al., 2006).
Hereafter we take B1.5e600.23 with \(\sigma=34.3\) as the fiducial model, and first show the multi-dimensional structure of the rotating magnetic wind in Sec. 3.1. We then investigate the time variability of the system primarily focusing on the impacts of quasi-periodic eruption along with the equatorial plane in Sec. 3.2. Finally, we show how the time-averaged spin-down torque scales with system parameters in Sec. 3.3.
### Anisotropic wind structure
Fig. 1 shows a snapshot of our fiducial model (B1.5e600.23) after the wind structure reaches a quasi-steady state. As explained in Sec. 2, mass and internal energy are continuously injected into the wind launching region, as indicated by the lightly shaded area around the WD surface. An aligned rotating magnetic dipole is situated within the WD, and the resulting magnetic field lines are represented by the solid lines. Since the plasma beta (top-right panel of Fig. 1) at the WD surface is significantly smaller than unity, the injected gases co-rotate with the magnetic field up to approximately the Alfven radius, \(r_{\rm A}\), shown with the dotted line; we determine \(r_{\rm A}\) from the condition \(\rho(r_{\rm A})|{\bf v}(r_{\rm A})|^{2}=|{\mathbf{B}}(r_{\rm A})|^{2}/(4\pi r_{\rm A})\). As shown in the bottom-right panel of Fig. 1, the poloidal component of the magnetic field dominates inside the Alfven radius, maintaining the dipolar structure. In this region, the gases acquire azimuthal velocities due to the magnetic centrifugal force. On the other hand, the gases are also accelerated by the thermal pressure gradient at the outer edge of the wind launching region, causing them to expand radially. As the magnetic field strength decreases more rapidly with radius than the inertia of the expanding gases, the magnetic field structure undergoes modification, and the toroidal component dominates outside the Alfven radius.
In the quasi-steady state, magnetic fields are fully open in directions away from the equatorial plane (\(\theta\lesssim 80^{\circ}\) and \(\theta\gtrsim 100^{\circ}\)), where the wind is primarily accelerated by the pressure gradient at the outer edge of the wind launching region and becomes supersonic at \(r\sim 2\,R_{*}\). In Fig. 1, the sonic radius \(r_{\rm s}\) is depicted with the dashed line; where we determine \(r_{\rm s}\) based on the condition \(|{\bf v}(r_{\rm s})|=c_{\rm s}(r_{\rm s})\). Note that the terminal velocity is comparable to the escape velocity (as shown in the bottom-right panel of Fig. 1), and the azimuthal velocities are at most a few percent of the radial velocities. Therefore, the properties of the wind in these directions are broadly consistent with the non-magnetized spherical Parker wind, even though the plasma beta at small radii is significantly less than unity.
In the equatorial direction (\(80^{\circ}\lesssim\theta\lesssim 100^{\circ}\)), magnetic fields are closed at small radii, forming a corotating magnetosphere. Beyond the last closed loop, the magnetic field lines are open with a predominant toroidal component, having opposite polarities with respect to the equatorial plane. The transition of the magnetic field configuration is mediated by reconnection occurring at around the tip of the last closed loop, or the Y point. As can be observed from the top-right panel of Fig. 1, the plasma beta in this transition region is higher than those along the open magnetic fields, implying that the gas is trapped mainly by magnetic tensions. In this high plasma-beta region sandwiched by low plasma beta regions, gases are pinched and radially accelerated in the reconnection region, eventually become supersonic at around \(r\sim 5\,R_{*}\).
We confirm that the terminal velocity of the fastest portion becomes comparable to the Michel velocity, \(v_{\rm M,max}\sim\sigma^{1/3}v_{\rm esc}\). The azimuthal velocity at the Y point is 30% of \(v_{\rm esc}\), which roughly corresponds to the corotation velocity at that location. After becoming ballistic, it gradually decreases as \(\propto 1/r\) due to angular momentum conservation.
The latitudinal angle dependence of the wind at the outer boundary is illustrated in Fig. 2, where the gray shaded regions represent the dynamical range of radial velocity (top-left), luminosity (top-right), torque (bottom-left), and mass loss rate (bottom-right). As described in the previous paragraphs, radial velocities in the near-equatorial direction can reach and transiently exceed the Michel velocity (represented by the horizontal dotted line) during eruptions caused by reconnection at the Y-point. Consequently, the wind luminosity, dominated by the radial kinetic term, also exhibits a sharp peak in the equatorial direction. On the other hand, the torque is primarily exerted by corotation with the
\begin{table}
\begin{tabular}{c|c c|c c c c} \hline \hline & \multicolumn{2}{c|}{input parameters} & \multicolumn{4}{c}{calculated quantities\({}^{\dagger}\)} \\ \cline{2-7} Model & \(B_{*}\) [G]\({}^{a}\) & \(\Omega_{*}\) [s\({}^{-1}\)]\({}^{b}\) & \(\dot{M}\) [\(M_{\odot}\) yr\({}^{-1}\)]\({}^{c}\) & \(L\) [erg s\({}^{-1}\)]\({}^{d}\) & \(T\) [dyn cm] \({}^{e}\) & \(\sigma\)\({}^{f}\) \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.46 & \(1.77\times 10^{-6}\) & \(7.70\times 10^{37}\) & \(1.80\times 10^{38}\) & 382 \\ [email protected] & \(2.6\times 10^{6}\) & 0.46 & \(1.62\times 10^{-6}\) & \(6.83\times 10^{37}\) & \(1.70\times 10^{38}\) & 317 \\ [email protected] & \(2.3\times 10^{6}\) & 0.46 & \(1.52\times 10^{-6}\) & \(6.06\times 10^{37}\) & \(1.57\times 10^{38}\) & 258 \\ [email protected] & \(1.5\times 10^{6}\) & 0.46 & \(1.50\times 10^{-6}\) & \(5.44\times 10^{37}\) & \(1.33\times 10^{38}\) & 122 \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.35 & \(1.65\times 10^{-6}\) & \(6.00\times 10^{37}\) & \(1.21\times 10^{38}\) & 235 \\ [email protected] & \(2.6\times 10^{6}\) & 0.35 & \(1.49\times 10^{-6}\) & \(5.10\times 10^{37}\) & \(1.06\times 10^{38}\) & 199 \\ [email protected] & \(2.3\times 10^{6}\) & 0.35 & \(1.40\times 10^{-6}\) & \(4.59\times 10^{37}\) & \(9.72\times 10^{37}\) & 161 \\ [email protected] & \(1.5\times 10^{6}\) & 0.35 & \(1.39\times 10^{-6}\) & \(4.13\times 10^{37}\) & \(7.91\times 10^{37}\) & 75.8 \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.23 & \(1.41\times 10^{-6}\) & \(4.18\times 10^{37}\) & \(5.23\times 10^{37}\) & 111 \\ [email protected] & \(2.6\times 10^{6}\) & 0.23 & \(1.27\times 10^{-6}\) & \(3.58\times 10^{37}\) & \(4.78\times 10^{37}\) & 94.0 \\ [email protected] & \(2.3\times 10^{6}\) & 0.23 & \(1.20\times 10^{-6}\) & \(3.23\times 10^{37}\) & \(4.31\times 10^{37}\) & 76.2 \\ [email protected] & \(1.5\times 10^{6}\) & 0.23 & \(1.24\times 10^{-6}\) & \(3.13\times 10^{37}\) & \(3.54\times 10^{37}\) & 34.3 \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.16 & \(1.29\times 10^{-6}\) & \(3.40\times 10^{37}\) & \(2.80\times 10^{37}\) & 52.5 \\ [email protected] & \(2.6\times 10^{6}\) & 0.16 & \(1.13\times 10^{-6}\) & \(2.85\times 10^{37}\) & \(2.52\times 10^{37}\) & 45.4 \\ [email protected] & \(2.3\times 10^{6}\) & 0.16 & \(1.07\times 10^{-6}\) & \(2.59\times 10^{37}\) & \(2.31\times 10^{37}\) & 36.4 \\ [email protected] & \(1.5\times 10^{6}\) & 0.16 & \(1.11\times 10^{-6}\) & \(2.61\times 10^{37}\) & \(1.70\times 10^{37}\) & 16.3 \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.12 & \(1.14\times 10^{-6}\) & \(2.84\times 10^{37}\) & \(1.25\times 10^{37}\) & 20.7 \\ [email protected] & \(2.6\times 10^{6}\) & 0.12 & \(1.00\times 10^{-6}\) & \(2.36\times 10^{37}\) & \(1.09\times 10^{37}\) & 17.8 \\ [email protected] & \(2.3\times 10^{6}\) & 0.12 & \(1.00\times 10^{-6}\) & \(2.14\times 10^{37}\) & \(9.86\times 10^{36}\) & 14.5 \\ [email protected] & \(1.5\times 10^{6}\) & 0.12 & \(1.00\times 10^{-6}\) & \(2.28\times 10^{37}\) & \(7.74\times 10^{36}\) & 6.19 \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.07 & \(1.08\times 10^{-6}\) & \(2.67\times 10^{37}\) & \(7.26\times 10^{36}\) & 11.1 \\ [email protected] & \(2.6\times 10^{6}\) & 0.07 & \(1.00\times 10^{-6}\) & \(2.19\times 10^{37}\) & \(6.39\times 10^{36}\) & 9.62 \\ [email protected] & \(1.5\times 10^{6}\) & 0.07 & \(1.00\times 10^{-6}\) & \(2.00\times 10^{37}\) & \(5.83\times 10^{36}\) & 7.71 \\ [email protected] & \(1.5\times 10^{6}\) & 0.07 & \(1.00\times 10^{-6}\) & \(2.21\times 10^{37}\) & \(4.92\times 10^{36}\) & 3.21 \\ \hline [email protected] & \(3.0\times 10^{6}\) & 0.05 & \(1.00\times 10^{-6}\) & \(2.56\times 10^{37}\) & \(4.42\times 10^{36}\) & 5.59 \\ [email protected] & \(2.6\times 10^{6}\) & 0.05 & \(1.00\times 10^{-6}\) & \(2.10\times 10^{37}\) & \(3.99\times 10^{36}\) & 4.86 \\ [email protected] & \(1.5
magnetic fields and reaches its peak slightly off the equatorial plane (\(\theta\sim 85^{\circ}\)), corresponding to the edge of the concave shape of the Alfven radius.
### Time variability
The acceleration of the rotating magnetic wind in the equatorial direction occurs in a time-variable manner, associated with magnetic reconnection at the Y-point. Consequently, the overall flux of mass, energy, and angular momentum from the central WD can also vary with time. Fig. 3 displays the time evolution of the mass loss rate (\(\dot{M}\)), luminosity (\(L\)), and torque (\(T\)) in our fiducial model. All quantities are normalized by their time-averaged values.
The lower panel of Fig. 3, which provides a long-term perspective, reveals a recurrent eruptive behavior. A recurrent cycle consists of pre-eruption, reconnection, post-eruption phases: In the pre-eruption phase, gases injected into the near-equatorial plane become trapped within the closed field lines. Due to the centrifugal force, the gases accumulate at the tip of the last closed loop, resulting in a continuous decrease in plasma beta in that region. When the centrifugal force acting on the accumulated gases exceeds the tension of the closed magnetic fields, the tip of the closed zone starts expanding radially and is subsequently ejected as a plasmoid through reconnection. Such a plasmoid can be observed at \(r\sim 4\) - \(5\,R_{*}\) in Fig. 1. Afterwards, the cycle returns to the pre-eruption phase and restores the gases within the closed magnetic field lines. This type of recurrent eruptions has been known as slingshot prominence in the context of magnetically-active rapidly-rotating stars (e.g., Ferreira, 2000; Townsend & Owocki, 2005; Jardine & Collier Cameron, 2019).
The upper panel of Fig. 3 focuses on the fluxes during a rotation period (\(t=15\)-\(16\)\([2\pi/\Omega_{*}]\)), as also depicted in Fig. 2. In comparison to the pre- and post-eruption phases, represented by the purple and red lines, respectively, the observed fluxes of mass, energy, and angular momentum consistently increase as the erupted plasmoids reach the outer
Figure 1: Snapshot of the rotating magnetic wind of B1.5e6D0.23 showing the gas density (left top), the plasma beta (right top), the radial velocity normalized by the escape velocity from the WD, and the ratio of the strength of toroidal field over poloidal field (right bottom). The solid, dashed, and dotted lines indicate the poloidal magnetic field lines, the position where the radial velocity of the wind exceeds the adiabatic sound velocity and the Alfvén velocity, respectively. The pale shaded region around the WD surface corresponds to the wind launching region.
boundary, indicated by the green lines. Notably, the magnetic torque significantly contributes to the overall torque increase during the eruption phase and plays a dominant role in the central WD's spin-down.
We note that the specifics of reconnection dynamics, such as the frequency of recurrent eruptions and the resulting time evolution of mass, energy, and angular momentum fluxes, may be influenced by our numerical parameters, including spatial resolution (which governs numerical resistivity) and the width of the wind launching region. However, we have verified that the time-averaged values of wind velocities, mass loss rate, luminosity, and torque have all reached convergence concerning the spatial resolution in our simulations and the width of the wind launching region (see Appendix B).
### Scaling relation of the spin-down torque
Here we consider how the spin-down torque of rotating magnetic wind depends on the system parameters based on our numerical results. As we show in the previous sub-sections, the time-averaged torque is essentially determined by the magnetic torque exerted on the gases at the tip of the last closed field lines, or the Y-point, where the field configuration is still roughly compatible with the rotating dipole. In this case, the (electro)magnetic torque at the Y point can be estimated as
\[T\approx\frac{\mu_{*}^{2}}{r_{\rm Y}^{3}}, \tag{21}\]
where \(r_{\rm Y}\) represents the Y-point radius. Eq. (21) is based on the analogy with the force-free limit (e.g., Contopoulos & Spitkovsky, 2006). In the force-free limit, the last closed field line corresponds to the light cylinder, \(r_{\rm Y}\approx r_{\rm lc}=c/\Omega_{*}\), and the spin-down torque of a rotating dipole is roughly given as \(T_{\rm ff}\approx B_{\rm lc}^{2}r_{\rm lc}\approx[B_{*}\times(r_{\rm lc}/R_{ *})^{-3}]^{2}R_{\rm lc}^{3}\approx\mu_{*}^{2}/r_{\rm lc}^{3}\approx\mu_{*}^{ 2}/r_{\rm Y}^{3}\).
Based on our numerical results, the Y-point radius is determined by the balance between the centrifugal force and the magnetic tension force excerted on the gases, which can be described as
\[\rho_{\rm Y}r_{\rm Y}\Omega_{*}^{2}\approx B_{\rm Y}^{2}\kappa_{\rm Y}, \tag{22}\]
Figure 2: Latitudinal angle dependence of the rotating magnetic wind of B1.5e6Ø0.23 showing the radial velocity \(v_{r}\) (top left panel), the wind luminosity \(L\) (top right panel), the torque \(T\) (bottom left panel), and the mass loss rate \(\dot{M}\) (bottom right panel) at the outer boundary. A time sequence during a rotation period (\(t=15\)-\(16\)\([2\pi/\Omega_{*}]\)) is represented with colors across the gray shaded region, indicating the entire dynamic range during the simulation. The thick purple, green, and red lines highlight the timings of pre-eruption, eruption, and post-eruption, respectively. These timings are marked with vertical dashed lines in the upper panel of Fig. 3. The orange dotted line in the top left panel indicates the Michel velocity (Eq. 2).
where \(\rho_{\rm Y}\) is the density, \(B_{\rm Y}\) is the magnetic field strength, and \(\kappa_{\rm Y}\) is the curvature of the magnetic field at the Y point. In our case, the mass injection from the wind launching region is designed to be spherical, allowing us to describe the density at the Y point as
\[\rho_{\rm Y}\approx\frac{\dot{M}}{4\pi v_{r,{\rm Y}}r_{\rm Y}^{2}}\approx \frac{\dot{M}}{4\pi v_{\rm esc}r_{\rm Y}^{2}}. \tag{23}\]
For the latter equation, we take into account that the gases at the Y point is quasi-hydrostatic in the radial direction and \(v_{r,{\rm Y}}\approx v_{\rm esc}\) is satisfied in all cases examined in Table 3. On the other hand, given again that the magnetic field configuration at around the Y point is still roughly compatible with the rotating dipole, the strength and curvature of the magnetic field can be estimated as
\[B_{\rm Y}\approx B_{*}\left(\frac{R_{*}}{r_{\rm Y}}\right)^{3}, \tag{24}\]
\[\kappa_{\rm Y}\approx\frac{r_{\rm Y}}{R_{*}^{2}}, \tag{25}\]
respectively. By substituting Eqs. (23-25) into Eq. (22), the Y-point radius can be obtained as
\[r_{\rm Y}\approx\left(\frac{\Phi_{*}^{2}v_{\rm esc}}{\Omega_{*}^{2}\dot{M}} \right)^{1/4}. \tag{26}\]
Using Eq. (19) and the dimensionless parameter \(\sigma\), the torque of a rotating magnetic wind can be expressed as
\[\mathcal{T}\equiv\frac{T}{\dot{M}\Omega_{*}R_{*}^{2}}\approx\sigma^{1/4}. \tag{27}\]
Figure 3: Time evolution of mass loss rate \(\dot{M}\), luminosity \(L\), and torque \(T\) of the rotating magnetic wind of B1.5e6h0.23 estimated at the outer boundary of the computational domain. The quantities are normalized by the time-averaged values. The upper panel displays a close-up view of a rotation period (\(t=15\)-\(16\)\([2\pi/\Omega_{*}]\)), where the vertical dashed lines indicate the timings of pre-eruption, eruption, and post-eruption highlighted in Fig. 2.
Fig. 4 shows the relation between the dimensionless parameters \(\sigma\) and \(\mathcal{T}\). While the above derivation of Eq. (27) is crudely approximate, the derived scaling relation is broadly consistent with our simulation results. The data points, regardless of their angular frequencies, can be effectively fitted by a single relation: \(\mathcal{T}=0.5\times\sigma^{1/4}\). With restoring the physical dimensions, we obtain a fitting formula for the time-averaged spin-down torque of the rotating magnetic wind as
\[T\approx\frac{\dot{M}\Omega_{*}R_{*}^{2}\sigma^{1/4}}{2}\sim 2.4\times 10^{36} \,\mathrm{erg\,s^{-1}}\left(\frac{M_{*}}{M_{\odot}}\right)^{-3/8}\left(\frac{ R_{*}}{0.009R_{\odot}}\right)^{27/8}\left(\frac{\dot{M}}{10^{-6}\,M_{\odot}\, \mathrm{yr^{-1}}}\right)^{3/4}\left(\frac{B_{*}}{10^{6}\,\mathrm{G}}\right)^{1 /2}\left(\frac{\Omega_{*}}{0.1\,\mathrm{s^{-1}}}\right)^{3/2}, \tag{28}\]
which can be applicable at least to the cases with \(1\lesssim\sigma\lesssim 10^{3}\).
## 4 Summary and Discussion
We have conducted a series of axisymmetric MHD simulations for rapidly rotating and strongly magnetized WDs, taking into account a near-surface carbon burning process as observationally inferred for WD J005311. We systematically investigated the wind anisotropy, time variability, and the spin-down evolution with respect to the dimensionless parameter \(\sigma\) (Eq. 20). We have confirmed that a co-rotating magnetosphere forms beyond the wind launching region and inside the Alfven radius for \(\sigma\gtrsim 1\), which leads to an anisotropic wind structure. In the near-equatorial directions there happens recurrent eruptions of plasmoids that are triggered by reconnections near the Y point. These plasmoids are accelerated to a radial velocity compatible with the Michel velocity, while the wind properties remain broadly consistent with the Parker wind away from the equatorial plane. We found a scaling relation for the spin-down torque (Eq. 27) that can be consistently explained by the criteria for reconnections to happen around the Y point, based on the numerical facts we obtained. Our results complement previous studies on solar-like stars with relatively slow rotation (e.g., Ud-Doula et al., 2009; Matt et al., 2012; Raives et al., 2023), and can be applied to not only massive WD merger remnants, but also various stellar objects with \(1\lesssim\sigma\lesssim 10^{3}\).
We now discuss implications of our numerical results on the properties of WD J005311. To reproduce the observed maximum wind velocity of \(v_{\infty}=16,000\pm 1,000\) km s\({}^{-1}\) by the rotating magnetic wind near the equatorial plane, and considering the carbon burning to be occurred near the WD surface, the WD paramters are constrained to be \(M_{*}\sim 1.1\)-\(1.3\,M_{\odot}\), \(B_{*}\sim(2\)-\(5)\times 10^{7}\,\mathrm{G}\), and \(\Omega\sim 0.2\)-\(0.5\,\mathrm{s}^{-1}\)(Kashiyama et al., 2019). Consequently, the dimensionless parameter ranges from \(\sigma\sim 2\)-\(3\).
* Given the Michel velocity to be \(v_{\mathrm{M,max}}\approx\sigma^{1/3}v_{\mathrm{esc}}\), the contrast in radial velocity between the equatorial and polar directions is \(\sim\sigma^{1/3}\sim 1.2\)-\(1.4\). Such an anisotropic velocity profile could manifest in the optical spectrum. To identify this signature, a multi-dimensional line transfer calculation based on our optically-thick rotating magnetic wind solution is necessary.
* Assuming the mass injection from the carbon-burning region to be spherical, the time-averaged mass loss is also presumed to be spherical. In other words, the quantity \(\rho v_{r}\) remains relatively constant concerning the latitudinal angle. Consequently, the difference in wind ram pressure, which is proportional to \(\rho v_{r}^{2}\), between the equatorial and polar directions is estimated to be \(\sim\sigma^{1/3}\), roughly within the range of 1.2-1.4. This can result in a non-spherical expansion of the wind termination shock. The wind nebula of WD J005311 has recently been shown to have an extended structure by _Chandra_(Ko et al., 2023). Continued observations might identify any asymmetry or non-spherical characteristics.
* The reconnection around the Y-point occurs in a time-dependent manner, which makes the wind acceleration and the resultant non-thermal radiation also time-variable. However, given that the Y-point is well within the photosphere in the case of WD J005311 (with \(r_{\mathrm{ph}}\sim 0.15R_{\odot}\)) and the light crossing time at the wind termination shock is much longer than the expected reoccurrence time of reconnection, any time variability induced by reconnection may become smeared out and is difficult to detect. This can potentially explain the absence of apparent variabilities in WD J005311.
* Using the scaling relation for the spin-down torque (Eq. 28), we can estimate the spin-down timescale of WD J005311 as \(t_{\mathrm{sd}}\approx 2M_{*}R_{*}^{2}\Omega_{*}/5T\), or \[t_{\mathrm{sd}}\sim 7.2\times 10^{4}\,\mathrm{yr}\left(\frac{M_{*}}{1.2\,M_{ \odot}}\right)^{11/8}\left(\frac{R_{*}}{4,000\,\mathrm{km}}\right)^{-11/8} \left(\frac{\dot{M}}{3\times 10^{-6}\,M_{\odot}\,\mathrm{yr}^{-1}} \right)^{-3/4}\left(\frac{B_{*}}{2\times 10^{7}\,G}\right)^{-1/2}\left(\frac{ \Omega_{*}}{0.2\,\mathrm{s}^{-1}}\right)^{-1/2}.\] (29) Hence, even if the currently observed wind of WD J005311 is a rotating magnetic one and continues to blow for a Kelvin-Helmholtz timescale of the central WD, which is \(\sim 1,000\)-\(10,000,\mathrm{yr}\), the spin-down will be negligible. When the carbon burning in the near-surface region ceases, the mass loss rate will significantly decrease, which increases the dimensionless parameter \(\sigma\). The rotating magnetic wind will then become relativistic and eventually enter the force-free regime without significantly spinning down the WD. In this case, the remnant WD may serve as a non-thermal radiation source, or or the so-called WD pulsar (e.g., Kashiyama et al., 2011).
Finally, we address some caveats in our numerical simulations. We have implemented a simple prescription for the near-surface carbon burning region as source terms (Eqs. 13 and 14), referred to as the wind launching region. However, the actual near-surface carbon burning region should be convective, and can be affected by the strong magnetic field. The structure of the convective region, the resulting wind launch, and its chemical composition would also be influenced by the radiative transfer. For accurate multi-wavelength spectrum calculations, it is desirable to conduct a comprehensive radiative MHD simulation that covers from the carbon burning layer to the photosphere radius. Also, we only investigate the aligned rotating dipole magnetic fields in this paper, while a more complicated field configuration such as oblique or off-centered dipole may be realized for the remnant WD system. Finally, the deformation of the central WD due to its rapid rotation and anisotropic carbon burning can alter the observed properties as well. We save the investigations into the above topics for our future work.
YZ is supported by the International Graduate Program for Excellence in Earth-Space Science (IGPEES) at the University of Tokyo. This work is also supported by Grants-in-Aid for Scientific Research No. JP23KJ0392(YZ), JP20K04010, JP20H01904, JP22H00130(KK), JP21H04487, JP22KK0043, JP22K14074 (ST), JP22K03688, JP22K03671, JP20H05639 (TS), and JP20K14512(KF). We thank Eliot Quataert for fruitful discussions and useful suggestions.
* Athena++
## Appendix A Dual energy formalism
We introduced the so-called dual energy formalism to treat the magnetically dominated region in our simulations. This method was originally developed by Bryan et al. (1995), in order to deal with simulations with high Mach number flow. The basic idea is to separately solve the equation of internal energy in high Mach number (\(\mathcal{M}\)) region, and smoothly connect it to the solution given by the equation of total energy while \(\mathcal{M}\sim 1\). We applied it to our case, where the magnetic energy (instead of the kinetic energy in Bryan et al. (1995)) dominates over others especially around the WD surface, so the relevant parameter is now plasma \(\beta\) instead of \(\mathcal{M}\). Details are given as follows.
First of all, internal energy \(e_{\rm in}\) can be written in terms of the kinetic energy \(\rm E_{k}\), the total energy \(\rm E_{tot}\) and magnetic energy \(e_{m}\) as
\[e_{\rm in}=\rm E_{tot}-e_{m}-E_{k}\] (A1)
in our simulations. Considering the case with magnetic energy dominated (\(\beta\lesssim 1\)), right-hand side of this equation becomes a difference between two large numbers, which is problematic for numerical computations, and becomes worse with \(\beta\) decreases.
By solving the internal energy equation
\[\frac{\partial e_{\rm in}}{\partial t}+\mathbf{v}\cdot\nabla e_{\rm in}=-p \nabla\cdot\mathbf{v}\] (A2)
separately, we can take the solution as a floor to prevent randomly small or even negative numbers that can possibly appear from Eq.(A1) in magnetically dominated region, and a smooth transition should be made at \(\beta\sim 1\) to restore the solution given by Eq.(A1) while \(\beta\gtrsim 1\). To achieve this, we define an effective internal energy in our simulations
\[e_{\rm in,eff}\equiv\max\left[E_{\rm tot}-\frac{\rho|\mathbf{v}|^{2}}{2}-\frac {B^{2}}{2},\,\eta_{2}\left(\frac{e_{\rm in}}{e_{m}}\right)e_{\rm in}\right],\] (A3)
which depends on the ratio between internal energy given by Eq.(A2) and the ratio between internal energy and magnetic energy \(e_{\rm in}/e_{\rm m}\). Here we choose the function \(\eta_{2}(x)\) as
\[\eta_{2}(x)=\begin{cases}0.99,&\frac{x}{x+0.03}<0.99\\ \frac{x}{x+0.03},&0.99\leq\frac{x}{x+0.03}<1\\ 1,&1\leq\frac{x}{x+0.03}\end{cases},\] (A4)
following Takasao et al. (2019) and Iijima (2016). This gives a safe enough internal energy floor as \(0.99\times e_{\rm in}\) at low plasma beta region. We then calculate the pressure from this effective internal energy before integrating the source term.
## Appendix B Convergence of results
The convergence of our rotating magnetic wind solutions against both mesh resolution and the size of wind launching region have been confirmed. We show the results for our fiducial model (B1.5e6O0.23) in Fig. 5. In the top panel we increase the spatial resolution for 4 times both along the radial and latitudinal direction, while in the bottom panel we change the thickness of wind launching region from \(\mathcal{D}=0.6\)\(R_{*}\) (fiducial value we are using, corresponding to 9 cells) to \(\mathcal{D}=0.3\)\(R_{*}\) (which corresponds to 5 cells). We check the time evolution of the spindown torque \(T\) for both changes, and zoom into the first few eruptive peaks to show the difference clearly. We find that the time-averaged value as well as the power-law trend converge with respect to both the spatial resolution and the size of the wind launching region, but the time variability vary. This is due to the fact that the reconnections in our simulations are mainly modulated by numerical resistivities.
## Appendix C Change of the mass loss rate in MHD regime
As we claimed in Sec. 2, the mass loss rate is controlled to be the same for the pressure driven wind, based on the prescription of our wind launching region. However, for the rotating magnetic wind solutions we obtained, we found that time-averaged mass loss rate in MHD regime is slightly altered by increasing \(B_{*}\) and \(\Omega_{*}\), as shown in Fig. 6. For all the cases we explored (\(\sigma\sim 10^{0-3}\)), \(\dot{M}_{\rm MHD}/\dot{M}_{\rm HD}\) varies \(\sim 2\) times at most, which is generally caused by the recurring eruption events (see the peak in the top right panel of Fig. 2). Though minor in the regime we explored, it may get significant when beta further decreases, and the magnetic effects become increasingly important.
|
2309.06082 | A Machine Learning Framework to Deconstruct the Primary Drivers for
Electricity Market Price Events | Power grids are moving towards 100% renewable energy source bulk power grids,
and the overall dynamics of power system operations and electricity markets are
changing. The electricity markets are not only dispatching resources
economically but also taking into account various controllable actions like
renewable curtailment, transmission congestion mitigation, and energy storage
optimization to ensure grid reliability. As a result, price formations in
electricity markets have become quite complex. Traditional root cause analysis
and statistical approaches are rendered inapplicable to analyze and infer the
main drivers behind price formation in the modern grid and markets with
variable renewable energy (VRE). In this paper, we propose a machine
learning-based analysis framework to deconstruct the primary drivers for price
spike events in modern electricity markets with high renewable energy. The
outcomes can be utilized for various critical aspects of market design,
renewable dispatch and curtailment, operations, and cyber-security
applications. The framework can be applied to any ISO or market data; however,
in this paper, it is applied to open-source publicly available datasets from
California Independent System Operator (CAISO) and ISO New England (ISO-NE). | Milan Jain, Xueqing Sun, Sohom Datta, Abhishek Somani | 2023-09-12T09:24:21Z | http://arxiv.org/abs/2309.06082v1 | # A Machine Learning Framework to Deconstruct the Primary Drivers for Electricity Market Price Events
###### Abstract
Power grids are moving towards 100% renewable energy source bulk power grids, and the overall dynamics of power system operations and electricity markets are changing- The electricity markets are not only dispatching resources economically but also taking into account various controllable actions like renewable curtailment, transmission congestion mitigation, and energy storage optimization to ensure grid reliability. As a result, price formations in electricity markets have become quite complex. Traditional root cause analysis and statistical approaches are rendered inapplicable to analyze and infer the main drivers behind price formation in the modern grid and markets with variable renewable energy (VRE). In this paper, we propose a machine learning-based analysis framework to deconstruct the primary drivers for price spike events in modern electricity markets with high renewable energy. The outcomes can be utilized for various critical aspects of market design, renewable dispatch and curtailment, operations, and cybersecurity applications. The framework can be applied to any ISO or market data; however, in this paper, it is applied to open-source publicly available datasets from California Independent System Operator (CAISO) and ISO New England (ISO-NE).
machine learning, electricity market, price formation, renewables
## I Introduction
With the increasing penetration of renewable energy resources, operating an electric grid is becoming complex [1]. It often leads to differences in planned versus actual operations between day-ahead and real-time markets. Various market instruments like virtual bidding, reserve markets, flexible ramping products, and demand response provide mechanisms to achieve convergence between the markets and manage imbalances and uncertainties between day-ahead and real-time markets [2]. However, even with the multitude of these advanced market instruments in place, one or more system operating conditions occurring in different temporal and spatial combinations can still cause price volatility and price spikes.
### _Related Work_
While there exists a significant amount of work on price formation, electricity price forecasting, and price spike detection, only a handful of studies have explored the interpretation of price spike events. In most cases, studies have mainly focused on understanding price formation in specific energy markets, which cannot be generalized to other markets. For instance, Goncalves et al. [3] applied a set of explanatory models in the MIBEL electricity market spot price to understand the main drivers of electricity price. Velasco et al. [4] proposed a graphical analysis to visualize key variables associated with the European electricity market prices. In contrast, our proposed framework is generic and can be applied to any energy market. For any generic market, the framework proposed by Gonzalez et al. [5] is also useful, where the feature importance of trained models is studied to improve the accuracy of price forecasting. Our proposed framework is an extension of those studies and incorporates advanced concepts of interpretable machine learning to automatically deconstruct the primary drivers of price spike events.
### _Challenges_
The challenges in designing such a framework, specifically in the context of price spikes, include: (a) rare occurrences of price spike events; (b) complex interactions between multiple system conditions leading to price spikes; and (c) limited data availability due to the confidential nature of price bids and generator availability.
### _Proposed Framework_
In the context of this study, an event implies a price spike event, and a price spike is defined as a segment of time in which the marginal cost of energy exceeds a certain threshold. The proposed comprehensive analytical framework integrates price-spike detection, statistical analysis of detected events, feature engineering, training of machine learning (ML) model, and automatic identification of key drivers driving the price-spike using the trained model. The framework also allows users to understand the system state during those events by clustering the energy market data using an unsupervised ML algorithm and plotting them on a radar-chart visualization. The framework is applied to publicly available datasets collected from California ISO (CAISO) and ISO New England (ISO-NE). The insights related to their price spike events and the key drivers driving those events are discussed.
The major contributions of this paper are: 1) a comprehensive machine learning-based framework for the detection and interpretation of price spikes, and 2) demonstrate the effectiveness of the proposed framework using data from two energy markets: CAISO and ISO-NE. The proposed framework can be used by (1) market operators to analyze market conditions in real-time, (2) cyber-security experts to identify malicious spikes, and (3) market regulators to bring transparency into market operations.
## II CAISO and ISO-NE Data Description
### _California ISO (CAISO)_
CAISO is a _nodal_ market, which generates locational marginal prices (LMP) for over 4000+ price nodes throughout
its footprint. In this paper, price spikes occurring at four major locational aggregate price (LAP) nodes for PG&E (Pacific Gas and Electric Company), SCE (Southern California Edison), SDGE (San Diego Gas and Electric), and VEA (Valley Electric Association) are analyzed. The market and operational data for CAISO for the years 2018, 2019, and 2020 were utilized to apply the framework.
### _ISO New England (ISO-NE)_
For ISO-NE, natural-gas-fired generation, nuclear, other low- or no-emission sources, and imported electricity (mostly hydropower from Eastern Canada) provided the region's electricity in 2021.The total generation by renewable energy is 12.44%, including 4% by wind and 3% by solar. We collected market and operational data from ISO-NE for 2020 and 2021. The LMPs and corresponding energy, congestion, and loss components of LMPs are obtained from ISO-NE for the eight load zones and hubs.
In this analysis, we only focus on identifying the key drivers that impact the marginal cost of energy. Fig. 1 compares the distribution of the energy component of LMP between CAISO and ISO-NE. Compared to CAISO, energy prices in ISO-NE are relatively lower, and the price spikes are sparser.
## III Methodology
The proposed framework (as shown in Fig. 2) begins with the statistical analysis of the market data for price spike detection and divides the data into price spike and non-price spike segments. Next, those segments are summarized into a state-space representation, which is used to train an ML model to classify spike segments from non-spike segments. The predictions generated by the trained classifier are next analyzed using SHAP (SHapley Additive exPlanations) [6] - a game theoretic approach used to quantify the impact (both positive and negative) of each feature on the model outcome. Finally, to provide user context about the state of the energy market, an unsupervised clustering algorithm is used to cluster the market data and define the system state, which is visualized using radar charts.
### _Spike Detection and Data Segmentation_
A _price-spike point_ is a price in time that exceed a certain threshold. The threshold can be a fixed number, or can be computed from the data using percentiles. Once detected, the price-spike points closed to each other are grouped together to define a _price-spike event_. Next, the data is divided into segments: normal data segments (no price-spike event) and anomalous data segments (atleast one price-spike event). For the anomalous segments, data between \([t_{first}-b_{len},t_{last}+f_{len}]\) is selected, where \(t_{first}\) and \(t_{last}\) depict the first and last occurrence of spike in the grouped event, and \(b_{len}\) and \(f_{len}\) captures the recent history and near future around the spike. Rest of the data is divided into hourly normal data segments.
### _State Space Representation_
Once the data is divided into segments, instead of using raw signals, derived features are computed to capture different aspects of the features. The derived features include 1) _mean:_ to capture average value of a feature within the segment, 2) _std:_ to capture feature volatility within the segment, 3) _average gradient:_ to quantify trend in the feature value, 4) _maximum gradient:_ to measure sudden change in the feature value within the segment. The sign of average and maximum gradient indicates whether the change was positive or negative.
### _Identifying Primary Drivers_
The state-space representation is next used to train a Random Forest Classifier to classify spike/non-spike segments. Random Forest is an ensemble learning method that constructs multiple decision trees and each tree captures simple decision rules inferred from the data features. Though the learned rules (across the trees) can be used to highlight most important features (as done by [5]), the information cannot be used to quantify the feature importance for individual predictions.
In this study, we use SHAP (SHapley Additive exPlanations) [6] to determine the most important features and their influence on the model prediction. For any segment (normal/anomalous), Shapley values quantify the influence of individual features on the prediction generated by the trained model. For instance, Fig. 3 compares shapley values of features between a normal (top) and an anomalous (bottom) segment from CAISO. While features in blue are pushing the prediction towards being normal, features in red are driving prediction towards being anomalous. It is evident from the plot that Shapley values of features advocating for spike is almost non-existent for the normal segment (top), however, significantly higher for the anomalous segment (bottom), and vice-a-versa for Shapley values of features in blue advocating for segment being normal.
Fig. 1: Plot and Histogram of Energy Component of LMP in CAISO
Fig. 2: System architecture
To identify key drivers, framework uses the top five features from the red category, sorted by their contributions towards the predicted value and present it to the user.
### _Clustering and Visualization_
Most often, the complex interaction between multiple components of an energy market lead to anomalous price behaviors. While it is important to identify key variables associated with a price spike event, it is hard to make sense out of that information without knowing the context - the system state. Therefore, the framework uses K-Means clustering on the state-state representation of the data to identify potential system states and visualize the output using radar plot. This additional feature provides some context to the user about the complex interplay between different components of the market and help them take an informed decision.
The framework and the data will be open-sourced as part of this study. The links have been anonymized for the review.
## IV Evaluation
### _Price Spike Events_
For the statistical analysis of price spikes, the marginal cost of energy (MCE) is used as the price signal for both CAISO and ISO-NE, with 95\({}^{th}\) and 99\({}^{th}\) percentiles being the thresholds for the moderate- and high- spike points. Table I depicts the thresholds for moderate spikes (Q-95) and high spikes (Q-99), as computed from the CAISO and ISO-NE data.
High-spike points (moderate-spike points are considered normal in this study) are grouped together to identify a _price-spike event_. For grouping, any two consecutive high-spike points which are at least 5-mins (one interval) apart are considered to belong to two separate price-spike events. Given this definition, 1004 price-spike events were identified for CAISO between 2018 and 2021, and 223 price-spike events were observed for ISO-NE in 2020 and 2021.
The following observations were made from the distributions of price spike events of CAISO and ISO-NE (see Fig. 4).
* While spring is the biggest price-spike season for CAISO, price-spikes are least probable in spring for ISO-NE.
* Except for Winters, price spikes are highly probable during the evening time for CAISO across all the seasons. Though this statement is true for ISO-NE for the summer season, the spikes are almost equally probable at different times of the day for all other seasons.
* For both CAISO and ISO-NE, midday is the least probable time period across all the seasons for a price-spike event to occur.
* Most spike events in CAISO exist only for one 5-min interval, however, most spike events in ISO-NE stay for three 5-min intervals.
* Spikes longer than an hour was noticed in both CAISO and ISO-NE, which often is an indication of an extreme event (e.g. wildfires, hurricanes). Such events are more common in ISO-NE (20%) than in CAISO (3%).
The statistical analysis of the distribution of price-spike events across CAISO and ISO-NE indicates that primary drivers of a price-spike event can vary significantly depending on the season, time of the day, and location of the ISO. Next, we discuss model training followed by primary drivers as identified by the framework for both CAISO and ISO-NE.
### _Model Validation_
The random forest classifier trained on 67% of segments achieved an accuracy of 92% and 82% on the remaining 33% of test data for CAISO and ISO-NE, respectively. The train set and the test set included 4% and 1.5% anomalous segments for CAISO and ISO-NE, respectively. Table II depicts the number of derived (raw) features for both the ISOs. Raw features were selected based on data availability and filtered using statistical analysis of the data and inputs from the subject matter expert (SME). In the table, the _Others_ column includes features related to the demand, solar/wind curtailment, and transfers (imports and exports).
The false positive rate of the trained model is 13% (769/5919) for CAISO and 1% (42/4255) for ISO-NE. However, further analysis indicates that those segments are correlated with moderate price-spike events. In those events, CAISO and ISO-NE had mean energy costs of \(\$51\)/MWh and \(\$91\)/MWh, respectively, close to our observations in Table I.
### _Primary Drivers_
Analysis of the top five features (from the red category) sorted by their contributions towards the predicted value indicates that the six key drivers (from Fig. 5) are highly correlated with the price-spike segments.
#### Iv-C1 Congestion (MCC\(\neq\)0)
Congestion captures average congestion cost and its movement during a segment in a real-time market. While it is a key driver for CAISO (50%), it contributes only to a small proportion of spikes in ISO-NE (6%). In CAISO, typically, the congestion between the
Fig. 3: Feature importance through Shapley values
Southern California nodes (SCE and VEA) and Northern California (PGAE) is a key reason, especially during the evening time across all the seasons and sometimes in midday during summer, as shown in Fig. 6.
#### Iv-C2 High Day-Ahead Prices
Increase in gas prices, and extreme conditions (like wildfire or blackouts) are some of the several reasons in why system operator can anticipate high prices in the day-ahead market. Price spikes in such markets are hard to control or avoid. Such segments are more prominent in ISO-NE (41%) than in CAISO (16%). Price spikes under this category are more frequent in winter (Fig. 6) for both the ISOs.
#### Iv-C3 Forecasting Error
Market regulators forecast generation from renewables (mainly Solar and Wind) and demand in the day-ahead market. Though markets are often robust enough to handle forecasting errors, sometimes an unexpected generation or abrupt movement in resources can lead to price-spike. While this is a key driver for CAISO - 42%, it is non-existent for ISO-NE. Further analysis of those events indicates that 90% of price spikes under this category for CAISO can be attributed to forecasting errors in solar and wind projections. These spikes mainly exist during the evening times, as shown in Fig. 6, when solar ramps down.
#### Iv-C4 Movement in Generation
Typically, movement in renewable sources (specifically solar, wind, and hydro) lead to a price-spike event, and movement in conventional resources (except Nuclear) respond to the deficit created by the movement of renewables. Fig. 5 indicates that movement in the generation is a key driver in 44% and 27% price-spike segments for both CAISO and ISO-NE, respectively. While spikes in CAISO are more correlated with movement in renewables (Hydro-75%, Solar, and Wind-25%), spikes in ISO-NE can be attributed to movement in Oil-based generation (95%). This further validates the limited role of renewables in ISO-NE, as also noticed previously in _Forecasting Error_.
#### Iv-C5 Regulation Prices
Regulation/clearing prices were a key driver in 93% of the total price-spike segments in ISO-NE, with the primary reason being the movement in renewable capacity, present in 85% of the total spike segments of this category. Fig. 5 indicates that these spike segments occurred mainly in Winter (Morning, Evening, and Night), Summer (Midday and Evening), and Fall (Evening and Night). For CAISO, only 22% of all spike segments are driven by regulation prices with the key reason being the volatility of Regulation Up prices (99%).
### _Capturing the System State_
The framework offers an additional feature of data-driven categorization of potential system states using K-Means clustering and its visualization using radar plots. Through Elbow method [7] we identified 8 clusters for both the ISOs. Fig. 7 depicts three such clusters from ISO-NE where each cluster is characterizing a specific system state: high reserve prices (left), congestion (middle), and volatile demand + high renewables (right). The left radar chart shows that key variables like the mean value \(\mu\) and the standard deviation \(\sigma\) of the reserve prices are high during high renewable volatility (wind MW average change \(\bar{\delta}\) and solar MW \(\sigma\)) generation period, leading to the additional requirement of regulation reserves to be procured and leading increase in energy prices. The middle radar chart represents a scenario where the congestion prices \(\sigma\) are high, and demand forecast error \(\mu\) are high leading to high prices in the market. The right radar chart shows how volatility in demand \(\sigma\), wind MW generation \(\bar{\delta}\), can lead to increased electricity market prices as expensive resources are being dispatched to cover for the uncertainty in load and renewable generation. Clusters, in association with key drivers, are crucial for the users to make informed decisions.
## V Concluding Remarks
The primary contribution of this paper is an ML framework to automatically identify and report key factors driving the price spike events in the electricity markets. The framework was demonstrated on CAISO and ISO-NE and the analysis indicates that while congestion and renewable movement are key drivers for price-spike in CAISO, regulation prices, and day-ahead markets drive price spikes in ISO-NE. A high correlation
Fig. 4: Spike Distribution: [top] Distribution of spike events across seasons and different times of the day. [bottom] Distribution of spike events by duration.
Fig. 5: Key drivers behind price-spike segments
of day-ahead prices with price-spike segments also explains longer spikes in ISO-NE, compared to CAISO. Our analysis indicates that insights from ISO about a certain price pattern cannot be generalized to other ISOs. Key drivers behind certain price patterns can vary depending upon the ISO location, local weather conditions, time of the day, and several other reasons. In this context, the insights generated from such an automated analytical framework about the key drivers driving the price spikes can be used by market and system operators to analyze market conditions in near real-time. As of now, market operators analyze market data after the fact and perform price corrections and adjustments during the settlement process, which in real time is non-trivial for complex market data. The market operators can use the proposed framework to even cluster the market run results and validate it in near real-time. Furthermore, the automated labeling of price-spike events and assignment of specific reasons like congestion, renewable volatility, forecast errors, etc. as primary drivers can be utilized cyber-security experts to spot market attacks. Akin to that, market designers and policy analysts can use these insights to comprehend the market mechanisms for price formation to improve the forecast of electricity prices, bring transparency in the market operations, and design appropriate market-based interventions to mitigate such scenarios. In future, we intend to extend this analysis to other ISOs and also incorporate other price patterns, including price volatility.
|
2309.16291 | Efficiency Separation between RL Methods: Model-Free, Model-Based and
Goal-Conditioned | We prove a fundamental limitation on the efficiency of a wide class of
Reinforcement Learning (RL) algorithms. This limitation applies to model-free
RL methods as well as a broad range of model-based methods, such as planning
with tree search.
Under an abstract definition of this class, we provide a family of RL
problems for which these methods suffer a lower bound exponential in the
horizon for their interactions with the environment to find an optimal
behavior. However, there exists a method, not tailored to this specific family
of problems, which can efficiently solve the problems in the family.
In contrast, our limitation does not apply to several types of methods
proposed in the literature, for instance, goal-conditioned methods or other
algorithms that construct an inverse dynamics model. | Brieuc Pinon, Raphaël Jungers, Jean-Charles Delvenne | 2023-09-28T09:38:27Z | http://arxiv.org/abs/2309.16291v1 | # Efficiency Separation between RL Methods: Model-Free, Model-Based and Goal-Conditioned
###### Abstract
We prove a fundamental limitation on the efficiency of a wide class of Reinforcement Learning (RL) algorithms. This limitation applies to model-free RL methods as well as a broad range of model-based methods, such as planning with tree search.
Under an abstract definition of this class, we provide a family of RL problems for which these methods suffer a lower bound exponential in the horizon for their interactions with the environment to find an optimal behavior. However, there exists a method, not tailored to this specific family of problems, which can efficiently solve the problems in the family.
In contrast, our limitation does not apply to several types of methods proposed in the literature, for instance, goal-conditioned methods or other algorithms that construct an inverse dynamics model.
## 1 Introduction
A significant part of research in Artificial Intelligence is dedicated to creating, analyzing, and evaluating Reinforcement Learning (RL) methods. One goal is to understand when and why some types of methods will work better than others from a statistical and computational point of view. Explaining such differences is of central importance to drive the design of new efficient algorithms.
A first step in understanding the differences between methods is to abstract them into classes. Two of the main classes of RL methods are model-based and model-free methods. Model-based methods are algorithms that leverage a known or learned model of the environment dynamics Mordatch and Hamrick (2020). In contrast, model-free methods do not use such a model.
While the distinction between model-free, for example Q-learning and policy gradient algorithms, and model-based classes is commonly accepted Sutton and Barto (2018), there exist no agreed-upon general formal definitions of these classes. Authors resort to proposing their own definitions Sun et al. (2019), or to proving their results on classical algorithms that are representative of these classes Tu and Recht (2019).
Several works have studied the relative performance of these classes from a theoretical point of view, and it is cited as an open problem in the survey Levine et al. (2020). For tabular Markov Decision Process, Tu and Recht (2019) makes a survey of the existing literature that studied the model-free or model-based methods. They obtain no clear conclusion in favor of one class over another. In the specific problem family of the Linear Quadratic Regulator, Tu and Recht (2019) proves a polynomial separation result.
To our knowledge, Sun et al. (2019) gives the only result with a gap on the efficiency that is exponential in a relevant quantity to the advantage of model-based over model-free methods. We extend their result in two ways.
First, we redefine and broaden the class of methods to which a limitation applies. We construct a family of problems that is hard in the horizon, not only for model-free methods, but also for several model-based ones.
For this family, we also show that there exists an RL method that can efficiently discover the optimal behavior. The second and main contribution with respect to the result of Sun et al. (2019) is that our RL method is not specific to the family of problems, but applies to a much more universal set of problems. In opposition, their algorithm relies crucially on the knowledge of the set of problems present in the family. In general, such knowledge cannot be assumed to be known in practice.
Our findings are thus the first to implicate a strong efficiency limitation of a wide class of RL methods, while a universal method does not have that limitation. Moreover, the exposed limitation points out that some ideas proposed in the literature could be essential to solve a large set of problems.
This article is structured as follows. In Section 2, we define the notations and formalize the problems we address. In Section 3, we characterize the class of RL methods on which we will prove a limitation. In Section 4, we state our main Theorem. In Section 5, we demonstrate numerically the Theorem with deep RL algorithms. Finally, we give a summary of our findings and discuss ideas in the literature that could overcome the limitation presented in this paper.
## 2 Preliminaries
NotationsWe use \(\Delta(\Omega)\) to denote the set of probability measures over a sample space \(\Omega\) with an implicitly associated \(\sigma\)-algebra. We define the function \(\delta(.)\) to output \(1\) if the condition in its argument is respected, else \(0\). For a set \(A\), we note \(A^{*}\) the set of all finite sequences of elements in \(A\), \(\cup_{i\in\mathbb{N}}A^{i}\). Reference to neural networks initialization refers to the initialization of a multilayer perceptron (MLP), a classical neural network architecture which compose iteratively linear operators and non-linear point-wise activation functions. We keep implicit the input dimension, output dimension, and number of layers with their respective numbers of hidden units.
In this paper an RL problem is a finite horizon Markov Decision Process (MDP), which is defined by a horizon \(H\in\mathbb{N}\), a state space \(\mathcal{S}\), an action space \(\mathcal{A}\) and an operator \(P:(\mathcal{S}\times\mathcal{A})\cup\{\bot\}\rightarrow\Delta(\mathbb{R} \times\mathcal{S})\), which determines the initial state distribution with \(P(s_{0}|\perp)\) and the transition dynamics with \(P(r,s^{\prime}|s,a)\), where \(r\) is the reward and \(s^{\prime}\) is the next state.
Throughout the paper, the set of actions is binary \(\mathcal{A}=\{0,1\}\) and, for some \(n\in\mathbb{N}\), the set of states \(\mathcal{S}\) is contained in \(\{(t,x)\in\{0,\ldots,H\}\times\mathbb{R}^{n}\}\), where \(t\) is the time step. Initial states have \(t=0\), and \(t\) is incremented at each transition by \(P\). When the time step \(H\) is reached, we say that the state is final and the trajectory ends.
We note \((s_{0},a_{0},r_{0},s_{1},a_{1},\ldots,s_{H})\sim P^{\pi}\) a trajectory sampled according to the operator \(P\) and a policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\). We use \(\pi^{U}\) to denote the policy which outputs a uniform distribution over actions.
An RL method is an algorithm that outputs an optimized policy, \(\pi\), to maximize the expected cumulative rewards \(\mathbb{E}_{(s_{0},a_{0},r_{0},\ldots,s_{H})\sim P^{\pi}}[\sum_{t=0}^{H-1}r_{t}]\). To do so, the method can draw and leverage samples of trajectories from the MDP.
## 3 A formalization of a large RL class
In this section, we provide a definition for a class of RL methods before stating, in the next section, a limitation on their efficiency.
We constrain this class of methods in one main way. For any observed transitions \((s,a,r,s^{\prime})\), the state \(s^{\prime}\) can only be observed through evaluations of a set of functions. Moreover, this set of functions must respect a symmetry condition.
As we will illustrate, these constraints are satisfied by a large set of classical RL methods.
To provide a formal characterization of this class of methods, we will need several definitions.
In Algorithm 1, we define an encoder and a decoder for states to pointers and pointers to states, respectively. These methods provide a way to obfuscate a state, while still allowing to manipulate that state. We will note \(\bar{s}\) for an index corresponding to state \(s\).
In Algorithm 2, we define an interface between the RL methods and the RL problem. It leverages the encoder and decoder just defined to obfuscate the states in which we enter. Simultaneously, it maintains and constructs a dataset upon which functions can be defined to evaluate encountered states. This setup allows us to constrain the available information on the states by constraining the set of functions that can be used to evaluate them.
We impose these functions to respect a special symmetry condition upon permutations of the input variables. We define such permutations here.
**Definition 1**.: _A permutation of coordinates \(p:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a function such that for \(x\in\mathbb{R}^{n}\), \(p(x)_{f(i)}=x_{i}\) for some bijective function \(f\) from \(\{1\ldots n\}\) to itself._
```
\(D\leftarrow[]\) functionEnv_init(\(\mathcal{F}\)) \(s\sim P(\bot)\) \(\bar{s}\leftarrow\mathrm{encode\_state}(s)\) return\(s,\bar{s},\,\left[f(s,D)\right]_{f\in\mathcal{F}}\) functionEnv_step(\(\bar{s},a,\,g,\mathcal{F}\)) \(s\leftarrow\mathrm{decode\_pointer}(\bar{s})\) if\(s\) is not terminal then \(r,s^{\prime}\sim P(r,s^{\prime}|\,s_{\alpha})\) \(\bar{s}^{\prime}\leftarrow\mathrm{encode\_state}(s^{\prime})\) \(f_{s^{\prime}}\leftarrow\left[f(s^{\prime},D)\right]_{f\in\mathcal{F}}\) \(D\leftarrow D.\mathrm{append}((s,g(a,r,f_{s^{\prime}})))\) return\(s,\,a,\,r,\,\bar{s}^{\prime},\,f_{s^{\prime}}\) elseif\(s\) is terminalthen return\(\bot\) functionEnv_evaluate_state(\(s,\mathcal{F}\)) return\(\left[f(s,D)\right]_{f\in\mathcal{F}}\) functionEnv_encode(\(s\)) return\(\mathrm{encode\_state}(s)\) functionEnv_reset_data() \(D\leftarrow[]\)
```
**Algorithm 1** Encoder and decoder of pointers and states.
_We will also use interchangeably \(p^{\prime}:\mathbb{N}\times\mathbb{R}^{n}\rightarrow\mathbb{N}\times\mathbb{R} ^{n}\) defined as \(p^{\prime}(s=(t,x))=(t,p(x))\)._
From these definitions, we are able to express the assumption that the RL methods part of our class must satisfy.
**Assumption 2**.: _The RL method takes as unique input the interface defined in Algorithm 2. The function \(g\) and the sequence of functions \(\mathcal{F}\) given in argument to these methods must have the following form \(g:\mathcal{A}\times\mathbb{R}\times\mathbb{R}^{N}\to X\), and \(\mathcal{F}=\{f:\mathcal{S}\times(\mathcal{S}\times X)^{*}\rightarrow\mathbb{ R}\}^{N}\) where \(X\) is some undefined set and \(N\) some natural number. Moreover, for any function \(f\in\mathcal{F}\) and any permutation of the coordinates \(p\) we must have \(f(s,((s_{0},x_{0}),\ldots))=f(p(s),((p(s_{0}),x_{0}),\ldots))\)._
To analyze an existing RL method, we must translate this method such that it fits the Assumption, if possible. For example, drawing transitions of the RL problem must be replaced by calls to the interface.
For all the translations to this form that we illustrate, any lower bound on the number of calls of the resulting algorithm to the interface can be translated back to the original RL method as the same lower bound on the number of necessary operations. In some cases, depending on how the original algorithm has been reduced to this form, it can also be translated back as a lower bound on the number of necessary samples of the environment. Giving a statistical bound as well as a computational bound.
We note that the interface Env allows not only to sample full trajectories, but also to generate transitions at any encountered state. Thus, it is possible to use common local planning methods that rely on such a generator, for example, local tree search procedures.
The set \(\mathcal{F}\) is used as a set of learning algorithms in our translation of existing RL methods to use this interface. The restriction on \(\mathcal{F}\) can be intuitively understood as a restriction on the learning algorithms.
Specifically, we work with the assumption that the learning procedure treats the coordinates of the input state vectors symmetrically. This assumption is natural since, without a specific prior about the task at hand, we do not wish to process the different coordinates in particular ways. However, we note that in a practical setting with neural networks, the weights are initialized randomly, which can create asymmetries. Nevertheless, we prove in Appendix C that the distribution of trained neural networks has the demanded symmetries under a natural assumption on their architecture.
### Translation of a classical RL method
As an example of our formalization, we show how a classical RL method, fitted Q-iteration Riedmiller (2005), can be translated to use the interface as defined in Assumption 2. We give two more complete examples in Appendix B: a model-free policy gradient algorithm and a local tree search model-based method. We also explain in the Appendix how other model-based RL methods can be cast to use the Assumption. These examples depict how a much larger set of algorithms could be adapted to this form.
To clarify the presentation, we use methods as simplified as possible. For example, we only work with one-step RL methods, which use \((s,a,r,s^{\prime})\) instead of longer sequences. The presented formalism can however be extended to include these multi-step methods. Our results are robust to such changes.
In Algorithm 3, we define the fitted Q-iteration algorithm with deep learning, a common variant of model-free deep Q-learning methods. The Algorithm trains at each iteration a new Q-function on a new dataset following the Bellman equation and the previously trained and fixed \(Q\) function. For simplicity, we do not add an exploration term to the policy.
```
Parameters:\(H\): horizon of the MDP, \(\mathcal{S}\): state space, \(\mathcal{A}\): action space, \(K\): number of iterations, \(\eta>0\): scalar factor for gradient descent, \(I\): number of samples by iteration (divisible by \(H\)) Input:\(P\): the operator corresponding to the MDP to solve Initialize a neural network \(Q_{\theta}:\mathcal{S}\rightarrow\mathbb{R}^{|A|}\) \(\pi(a|\,s,\,Q)\leftarrow\delta(a=\arg\max_{a\in A}Q(s,a))\) for\(k\gets 1,\ldots,K\)do \(\tilde{Q}\gets Q_{\theta}\) Initialize a neural network \(Q_{\theta}\) for\(i\in 1,\ldots,I/H\)do \(s\gets P(s|\perp)\) for\(t\gets 0\ldots H-1\)do \(a\sim\pi(a|\,s,\,\tilde{Q})\) \(r,s^{\prime}\gets P(r,\,s^{\prime}|\,s,\,a)\) \(\theta\leftarrow\theta-\eta\nabla_{\theta}\left[Q_{\theta}(s,a)-r-\max_{a\in \mathcal{A}}\tilde{Q}(s^{\prime},a)\right]^{2}\) \(s\gets s^{\prime}\) \(\pi(a|\,s)\leftarrow\pi(a|\,s,\,Q_{\theta})\) return\(\pi(a|\,s)\)
```
**Algorithm 3** Fitted Q-iteration (with minimal exploration)
In Algorithm 4, we give a translation of the original algorithm using the interface defined in Algorithm 2. We specify the function \(g\) to construct a dataset and the set of functions \(\mathcal{F}\) to learn and apply the Q-functions.
Following the argument formalized in Appendix C about the symmetries existing in neural networks training by gradient descent, the functions defined in \(\mathcal{F}\) practically satisfy the symmetry condition of Assumption 2.
## 4 Main theorem
The main result of this paper states an efficiency gap between a broad group of RL methods and a specific algorithm. We present this algorithm.
Algorithm 5 is a simple goal-conditioning method that first samples a dataset of trajectories by drawing actions uniformly. From this dataset, it extracts a state which gives maximal reward when entering it and poses this state as its goal. From the same dataset, a function is learned to predict the action taken from any state given the final state encountered. Finally, it constructs a policy that tries to reach the goal with the action predictor conditioned by the goal.
The learning algorithm for the action prediction simply minimizes the empirical rate of errors on the dataset. The space of functions in which we learn is a composition of a feature selection, a linear function, and a threshold (to output a binary prediction). The formal mathematical program is
\[\begin{split}\operatorname*{arg\,min}_{f\in F}&\sum_{((s _{t},s_{H}),a)\in D_{GC}^{t}}\delta(f(\begin{bmatrix}s_{t}\\ s_{H}\end{bmatrix})\neq a)\\ \text{s.t.}&\|w\|_{0}\leq\alpha,\end{split} \tag{1}\]
where \(\begin{bmatrix}s_{t}\\ s_{H}\end{bmatrix}\) denotes the real part of the states concatenated, dataset \(D_{GC}^{t}\) is defined in Algorithm 5, \(F\) is the set of linear functions with a threshold, \(F=\{x\in\mathbb{R}^{2n}\rightarrow\delta(\langle w,x\rangle>0)|\,\forall w\in \mathbb{R}^{2n}\}\), and \(w\in\mathbb{R}^{2n}\) is the parameter associated to \(f_{w}\). The condition \(\|w\|_{0}\leq\alpha\) bounds the number of non-zero weights in the linear function by \(\alpha\in\mathbb{N}\).
We argue that this algorithm is designed for a much more universal set of problems than just for the family of problems used in the proof. This procedure could apply to any goal-reaching problem. The initial policy to explore the dynamics is simply uniform. The space of functions in which we learn our goal-conditioned policy is universal, and only made sufficiently simple to make the proof more straightforward. This claim is supported by the numerical analysis with neural networks presented in the next section.
In contrast, the family of methods for which we prove an inefficiency is only assumed to respect Assumption 2. This allows methods that are specifically designed to be performant on the family of the proof.
We have now defined everything needed to state our main result.
**Theorem 3**.: _There exists a family of finite horizon MDPs such that_
1. _For any MDP in the family, with probability at least_ \(1-\delta\) _(over the sampled trajectories), Algorithm_ 5 _outputs an optimal policy for the MDP with a number of samples and number of operations upper bounded by a polynomial in_ \(H\) _and_ \(\nicefrac{{1}}{{\delta}}\)_._
2. _For any algorithm satisfying Assumption_ 2 _and using_ \(o(2^{H})\) _calls to the interface (Algorithm_ 2_), there exists a problem in the family for which it outputs a suboptimal policy with probability at least_ \(\nicefrac{{1}}{{3}}\)_._
The complete proof is given in Appendix A, it is partially inspired by some of the proofs in Sun et al. (2019). We give a sketch here with the principal intuitions.
We define explicitly a family of MDPs satisfying the requirements of the Theorem. We define an MDP of the family by its horizon and by \(b\) a binary word that represents a sequence of actions that solves the task. A representation of one element of the family for a small horizon is given in Figure 1.
For the dynamics, at the initial state, there are two possible continuations that are drawn randomly by the RL problem. The left-hand side will give a trajectory over which the agent has no control and ends up randomly either in a non-rewarding state or a rewarding state. On the right-hand side, in that part the agent has full control over the trajectories which can end in a non-rewarding state or a unique rewarding state.
For algorithms satisfying Assumption 2, the right-hand side has a number of possible end states which grows exponentially in the horizon, and the unique rewarding state becomes hard to reach by purely random actions. Moreover, these algorithms do not have access to relevant information to orient the search: the left-hand side dynamics is independent of \(b\), and we show in the proof that the information in the end states of the right-hand side is obfuscated due to the Assumption.
For Algorithm 5, the left-hand side allows the agent to easily discover a rewarding state and set it as its goal. On the right-hand side, the dynamics is sufficiently simple for its learning procedure to perfectly predict the necessary sequence of actions to reach this goal. Its returned policy will thus act perfectly to reach the rewarding state on the right-hand side.
## 5 Numerical experiments
We demonstrate numerically how practical deep RL methods perform on the family of RL problems constructed in the proof. We test Algorithm 5 but with neural networks trained by gradient descent as a learning procedure instead. We test a more practical version of the fitted Q-iteration Riedmiller (2005), presented in Algorithm 3. We also implement and run a classical Actor-Critic method, Proximal Policy Optimization (PPO) Schulman et al. (2017). We refer to Appendix D for more details on the implementations.
We apply these methods to problems of the family with increasing horizons. We sample a dataset of \(1000\) trajectories for the goal-conditioned method. For the fitted Q-iteration and PPO methods, we sample \(50\) times \(1000\) trajectories during training. To measure the success of a method, we check if its returned policy reaches the rewarding state on the right-hand side with \(1000\) sampled trajectories, this allows the goal-conditioned algorithm to do some exploration. The results are presented on Figure 2.
The model-free methods quickly fail to solve the task when the horizon is increased. While the goal-conditioning method successfully solves the task for much longer. Both of these observations are suggested by our theoretical findings.
## 6 Conclusion and discussion
We defined a new class of RL methods that encompasses model-free, such as Q-Learning and policy gradient, and several model-based methods. For this class, we showed an efficiency limitation on a family of problems that are efficiently solvable by an RL method generally applicable to goal-reaching problems.
The problems in the family feature two different sides with their different respective dynamics but with a common unique rewarding state. The first side has a dynamics that allows an algorithm to easily discover a rewarding state.
However, this side has no optimal way to reach that state. The second side possesses an optimal path but it is hard to find it without seeking to reach the rewarding state.
To summarize intuitively our findings, a large class of RL methods will unnecessarily struggle in environments where there exists both: an easy-to-explore but suboptimal-to-use dynamics and a second dynamics which is hard to explore, but easy to navigate if one leverages a known goal that has been discovered in the first dynamics.
There exist several types of methods proposed in the literature that evade the class of RL methods on which we prove the limitation. We identified the followings:
* Algorithms that learn a smooth model of the dynamics then use backpropagation through the learned model Nguyen and Widrow (1990), Jordan and Rumelhart (2013), Deisenroth and Rasmussen (2011), Grondman (2015), Heess et al. (2015). These methods elude our definition because the learned model is not treated as a black box by the algorithm. We note that these methods are mostly used in environments where the dynamics are approximately smooth, such as low-level control in robotics.
Figure 1: Example of the family of RL problems for \(H=4\) and \(b=01\). On the left-hand side, the agent can easily discover the rewarding state. However, the path to reach it is suboptimal. By understanding and using the dynamics on the right-hand side, the algorithm can uncover the optimal path.
* Algorithms that construct a symbolic or semi-parametric model of the dynamics and then apply a symbolic planning algorithm Konidaris et al. (2018). This example is related to the previous one, it evades the limitation when the symbolic planner does not treat the model only as a black box generator of transitions but leverages insights in it.
* Universal value functions Sutton et al. (2011); Schaul et al. (2015) can be learned to decide which states can be efficiently reached from which states. These algorithms bypass our definition because the value functions take as input the states in the future of the trajectory, and the quantities they learn cannot be replaced trivially by a generator of transitions.
* A wide variety of algorithms that learn a link from the future to the past of a trajectory. An example is learning an inverse dynamics model. These algorithms avoid the limitation for the same reason as the universal value functions: the future of the trajectory is directly used to learn the relevant functions, and there is no trivial efficient way to replace the computed quantities with a generator of transitions. Inverse dynamics models or goal-conditioning methods learn to predict the action to take given the current state and a state to reach in the future Ghosh et al. (2019); Emmons et al. (2021). There exist variants of this idea, Janner et al. (2022) proposes to learn to map the current and future state to a full sequence of intermediary states. It is also possible to condition on other information than future states, such as future returns Schmidhuber (2019); Kumar et al. (2019); Chen et al. (2021); Emmons et al. (2021). These different functions can be efficiently learned with Hindsight Experience Replay, where what is reached in a trajectory is relabelled as a goal in hindsight for training Kaelbling (1993); Andrychowicz et al. (2017). We note however that not all these methods are sound in the presence of uncertainty, as described (and alleviated) in Paster et al. (2022); Eysenbach et al. (2022); Yang et al. (2022); Villaflor et al. (2022).
Not all of these algorithms necessarily solve efficiently the family of RL problems we defined. Our theoretical result suggests that the ideas present in them could help solve problems otherwise intractable by a large class of classical RL methods.
|
2309.15940 | Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs | We present an Open-Vocabulary 3D Scene Graph (OVSG), a formal framework for
grounding a variety of entities, such as object instances, agents, and regions,
with free-form text-based queries. Unlike conventional semantic-based object
localization approaches, our system facilitates context-aware entity
localization, allowing for queries such as ``pick up a cup on a kitchen table"
or ``navigate to a sofa on which someone is sitting". In contrast to existing
research on 3D scene graphs, OVSG supports free-form text input and
open-vocabulary querying. Through a series of comparative experiments using the
ScanNet dataset and a self-collected dataset, we demonstrate that our proposed
approach significantly surpasses the performance of previous semantic-based
localization techniques. Moreover, we highlight the practical application of
OVSG in real-world robot navigation and manipulation experiments. | Haonan Chang, Kowndinya Boyalakuntla, Shiyang Lu, Siwei Cai, Eric Jing, Shreesh Keskar, Shijie Geng, Adeeb Abbas, Lifeng Zhou, Kostas Bekris, Abdeslam Boularias | 2023-09-27T18:32:29Z | http://arxiv.org/abs/2309.15940v1 | # Context-Aware Entity Grounding with
###### Abstract
We present an **O**pen-**V**ocabulary 3D **S**cene **G**raph (OVSG), a formal framework for grounding a variety of entities, such as object instances, agents, and regions, with free-form text-based queries. Unlike conventional semantic-based object localization approaches, our system facilitates context-aware entity localization, allowing for queries such as "pick up a cup on a kitchen table" or "navigate to a sofa on which someone is sitting". In contrast to existing research on 3D scene graphs, OVSG supports free-form text input and open-vocabulary querying. Through a series of comparative experiments using the ScanNet [1] dataset and a self-collected dataset, we demonstrate that our proposed approach significantly surpasses the performance of previous semantic-based localization techniques. Moreover, we highlight the practical application of OVSG in real-world robot navigation and manipulation experiments. The code and dataset used for evaluation can be found at [https://github.com/changhaonan/OVSG](https://github.com/changhaonan/OVSG).
Open-Vocabulary Semantics, Scene Graph, Object Grounding 1hc856,kb1204,shiyang.lu,epj25,skk139,sg1309,kb572,[email protected]
## 1 Introduction
In this paper, we aim to address a fundamental problem in robotics - grounding semantic entities within the real world. Specifically, we explore how to unambiguously and accurately associate entities present in commands, such as object manipulation, navigation to a specific location, or communication with a particular user.
Currently, the prevailing method for grounding entities in the robotics domain is semantic detection [2]. Semantic detection methods are intuitive and stable. However, in scenes with multiple entities of the same category, semantic labels alone cannot provide a unique specification. In contrast, humans naturally possess the ability to overcome this grounding ambiguity by providing context-aware specifications, such as detailed descriptions and relative relations. For example, rather than simply designating "a cup", humans often specify "a blue cup on the shelf", "a coffee cup in the kitchen", or "Mary's favorite tea cup".
Inspired by this, a series of recent works introduce context relationship into grounding problem [3, 4, 5, 6, 7]. These approaches employ 3D scene graphs as a scene representation that concurrently accounts for instance categories and inter-instance spatial contexts. In a 3D scene graph, concepts such as people, objects, and rooms are depicted as nodes, with attributes like color, material, and affordance assigned as node attributes. Moreover, spatial relationships are represented as graph edges. Such structure enables 3D scene graphs to seamlessly support context-aware object queries,
such as "the red cup on the table in the dining room", provided that the attribute, the semantic category, and the relationship have been predefined in the graph.
However, this inevitably brings us to a more crucial question that this paper aims to answer: how do we cope with scenarios when the class category, relationship, and attribute are not available in the constructed 3D scene graph? Tackling this question is vital if we wish to effectively integrate robots into real-world scenarios. To resolve the challenge, we present a novel framework in this paper - the Open-Vocabulary 3D Scene Graph (OVSG). To the best of our knowledge, OVSG is the first 3D scene graph representation that facilitates context-aware entity grounding, even with unseen semantic categories and relationships.
To evaluate the performance of our proposed system, we conduct a series of query experiments on ScanNet [1], ICL-NUIM [8], and a self-collected dataset DOVE-G (**D**ataset for **O**pen-**V**ocabulary **E**ntity **G**rounding). We demonstrate that by combining open-vocabulary detection with 3D scene graphs, we can ground entities more accurately in real-world scenarios than using the state-of-the-art open-vocabulary semantic localization method alone. Additionally, we designed two experiments to investigate the open-vocabulary capability of our framework. Finally, we showcase potential applications of OVSG through demonstrations of real-world robot navigation and manipulation.
Our contributions are threefold: 1) A new dataset containing eight unique scenarios and 4,000 language queries for context-aware entity grounding. 2) A novel 3D scene graph-based method to address the context-aware entity grounding from open-vocabulary queries. 3) Demonstrate the real-world applications of OVSG, such as context-aware object navigation and manipulation.
## 2 Related Work
**Open-Vocabulary Semantic Detection and Segmentation** The development of foundation vision-language pre-trained models, such as CLIP [9], ALIGN [10], and LiT [11], has facilitated the progress of 2D open-vocabulary object detection and segmentation techniques [12; 13; 14; 15; 16; 17; 18]. Among these approaches, Detic [16] stands out by providing open-vocabulary instance-level detection and segmentation simultaneously. However, even state-of-the-art single-frame methods like Detic suffer from perception inconsistency due to factors such as view angle, image quality, and motion blur. To address these limitations, Lu et al. proposed OVIR-3D [19], a method that fuses the detection result from Detic into an existing 3D model using 3D global data association. After fusion, the 3D scene is segmented into multiple instances, each with a unique Detic feature attached. Owing to its stable performance, we choose OVIR-3D as our semantic backbone.
**Vision Language Object Grounding** In contrast with object detection and segmentation, object grounding focuses on pinpointing an object within a 2D image or a 3D scene based on textual input. In the realm of 2D grounding, various studies, such as [20; 21; 22; 23], leverage vision-language alignment techniques to correlate visual and linguistic features. In the 3D context, object grounding is inherently linked to the challenges of robot navigation, thus gaining significant attention from the robotics community. For instance, CoWs [24] integrates a CLIP gradient detector with a navigation policy for effective zero-shot object grounding. More recently, NLMap [25], ConceptFusion [26] opts to incorporate pixel-level open-vocabulary features into a 3D scene reconstruction, resulting in a queryable scene representation. While NLMap overlooks intricate relationships in their framework, ConceptFusion claims to be able query objects from long text input with understanding of the object context. Thus, we include ConceptFusion as one of our baselines for 3D vision-language grounding.
**3D Scene Graph** 3D scene graphs provide an elegant representation of objects and their relationships, encapsulating them as nodes and edges, respectively. The term "3D" denotes that each node within the scene possesses a three-dimensional position. In [3], Fisher et al. first introduced the concept of 3D scene graphs, where graph nodes are categorized by geometric shapes. Armeni et al. [4] and Kim et al. [5] then revisited this idea by incorporating semantic labels to graph nodes. These works establish a good foundation for semantic-aware 3D scene graphs, demonstrating that objects, rooms, and buildings can be effectively represented as graph nodes. Recently, Wald et al. [7] showed
that 3D feature extraction and graph neural networks (GNN) can directly infer semantic categories and object relationships from raw 3D point clouds. Rosinol et al. [6] further included dynamic entities, such as users, within the scope of 3D scene graph representation. While 3D scene graphs exhibit great potential in object retrieval and long-term motion planning, none of the existing methods support open-vocabulary queries and direct natural language interaction. Addressing these limitations is crucial for real-world deployment, especially for enabling seamless interaction with users.
## 3 Open-Vocabulary 3D Scene Graph
### Open-Vocabulary 3D Scene Graph Representation
An Open-Vocabulary 3D Scene Graph (OVSG) is denoted as \(G=|V,E|\), where \(V\) signifies graph nodes and \(E\) stands for graph edges. Each node \(v^{i}\) in \(V=\{v^{i}\}=\{t^{i},f^{i},l^{i},p^{i}\}\) consists of a node type \(t^{i}\), a open-vocabulary feature \(f^{i}\), a language description \(l^{i}\) (optional), and a 3D position \(p^{i}\) (optional); \(i\) is the node index. In this study, we identify three primary node types, \(t^{i}\): object, agent, and region. The open-vocabulary feature \(f^{i}\) associated with each node \(v_{i}\) is contingent on the node type \(t_{i}\). The encoder utilized for \(f^{i}\) is accordingly dependent on \(t_{i}\). The 3D position \(p^{i}=\{x_{c},y_{c},z_{c},x_{min},y_{min},z_{min},x_{max},y_{max},z_{max}\}\) of each entity is defined by a 3D bounding box and its center position. Edges in the graph are represented by \(E=\{e^{i,j}|v^{i},v^{j}\in V\},e^{i,j}=\{r^{i,j,k}=\{t^{i,j,k},f^{i,j,k},l^{i,j, k}\}|k=0,\ldots\}\). Each edge \(e^{i,j}\) encapsulates all relationships \(r^{i,j,k}\) between the nodes \(v^{i}\) and \(v^{j}\). The triplet notation \((i,j,k)\) refers the \(k^{th}\) relationship between node \(v^{i}\) and \(v^{j}\), \(t^{i,j,k}\) indicates the type of this relationship. We primarily categorize two relationships in this study: spatial relations and abstract relationships. A short sentence \(l^{i}\) is optionally provided to describe this relationship. The feature \(f^{i,j,k}\) encodes the semantic meaning of the relationship, whose encoder depends on \(t^{i,j,k}\). For a more detailed definition of these types, please refer to Section 3.3.
The primary distinction of OVSG from conventional 3D scene graph work is its utilization of semantic features, instead of discrete labels, to characterize nodes and relationships. These features are either directly trained within the language domain like Sentence-BERT [27] and GloVe [28], or aligned to it, as seen with CLIP [9] and Detic [16]. The versatility of language features enables OVSG to handle diverse queries. The degree of similarity among nodes and edges is depicted using a distance metric applied to their features:
Figure 1: This is an illustration of the proposed pipeline. The system inputs are the positional input \(P_{u}\), user input \(L_{u}\), RGBD Scan \(I\), and a query language \(L_{g}\). The top section depicts the construction of \(G_{g}\). Both \(P_{u}\) and \(L_{u}\) are derived for all \(G_{g}\). The RGBD Scan \(I\) inputs into the open-vocabulary fusion system defined to as OVSG-3D. This system outputs a position and a Detic feature for each object. Subsequently, the language descriptions for the agent and region are converted into features via different encoders. A unique Spatial Relationship Encoder is employed to encode spatial relationship features from pose pairs. The bottom section shows the building of \(G_{q}\). The query \(L_{g}\) used in this example is: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 82, 84, 86, 87, 88, 89, 91, 80, 83, 85, 89, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 181, 182, 183, 184, 185, 186, 187, 188, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 208, 209, 210, 211, 22, 213, 214, 215, 216, 217, 223, 218, 224, 225, 226, 227, 228, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 289, 291, 292, 293, 294, 295, 296, 297, 298, 300, 31, 320, 321, 32, 333, 34, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 66, 67, 68, 69, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 82, 85, 87, 89, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 119, 119, 120, 121, 123, 124, 125, 126, 127, 128, 129, 130, 140, 151, 161, 17, 182, 183, 184, 185, 186, 187, 188, 199, 200, 203, 204, 205, 206, 207, 208, 209, 210, 209, 211, 223, 214, 215, 216, 217, 218, 229, 229, 231, 232, 233, 234, 236, 237, 238, 239, 241, 248, 249, 251, 265, 266, 270, 278, 288, 290, 291, 294, 295, 296, 297, 298, 299, 300, 320, 321, 334, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 51, 52, 53, 54, 56, 57, 58, 59, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 83, 89, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
\[\text{dist}(v^{i},v^{j})=\begin{cases}\infty&\text{if }t^{i}\neq t^{j}\\ 1-\text{dot}(f^{i},f^{j})&\text{else}\end{cases};\text{dist}(e^{i,j},e^{u,v})= \min_{\forall k\in|e^{i,j}|,\forall w\in|e^{u,v}|}\text{dist}(r^{i,j,k},r^{u,v,w})\]
\[\text{dist}(r^{i,j,k},r^{u,v,w})=\begin{cases}\infty&\text{if }t^{i,j,k}\neq t^{u,v,w} \\ 1-\text{dot}(f^{i,j,k},f^{u,v,w})&\text{if }t^{i,j,k}=t^{u,v,w}\neq\text{ spatial}\\ \text{SRP}(f^{i,j,k},f^{u,v,w})&\text{if }t^{i,j,k}=t^{u,v,w}=\text{ spatial}\end{cases} \tag{1}\]
, where the \(|e^{i,j}|\) and \(|e^{u,v}|\) are the number of relationships inside \(e^{i,j}\) and \(e^{u,v}\); _SRP_ refers to a Spatial Relationship Predictor. Check Section 3.3 and Appendix B for more details. Noticeably, the distance across different types will not be directly compared. These distances will be used to compute the type-free index in Section 3.4.
### Context-Aware Open-Vocabulary Entity Grounding
The problem we address can be formally defined using the open-vocabulary scene graph concept as follows: Given a scene, represented as \(S\), our objective is to localize an entity, referred to as \(s\), using natural language, represented as \(L_{q}\), within the context of the scene \(S\). Essentially, we seek to establish a mapping \(\pi\) such that \(s=\pi(L_{q}|S)\). An RGBD scan of the scene \(I\), user linguistic input \(L_{u}\), and position input \(P_{u}\) are provided to facilitate this process. Significantly, the query language \(L_{q}\) may encompass entity types and relationship descriptions not previously included in the scene graph construction phase.
Our proposed procedure can be separated into two main stages. The first stage involves the construction of the scene graph. From the user input \(L_{u}\) and the RGBD scan \(I\), we construct an open-vocabulary scene graph (OVSG) for the entire scene, denoted as \(G_{s}\). This is a one-time process that can be reused for every subsequent query. When a new query is introduced, we also construct an OVSG using the query \(L_{q}\), denoted as \(G_{q}\). Once we have both scene graphs \(G_{s}\) and \(G_{q}\), we proceed to the second stage, which is the graph matching stage. Here, we match the query scene graph, \(G_{q}\), with a sub-graph from the whole scene graph, \(G_{s}\). The queried entity is situated within the matched sub-graph.
### 3D Scene Graph Building
**Type definition** Prior to delving into the scene graph construction procedure, we first delineate the categories of node types and edge types this paper pertains to. The term _Object_ signifies static elements within a scene, such as sofas, tables, and so forth. The term _Agent_ is attributed to dynamic, interactive entities in the scene, which could range from humans to robots. _Region_ indicates a specific area, varying in scale from the surface of a tabletop to an entire room or building. Regarding relationships, _spatial_ describes positional relationships between two entities, such as Tom being in the kitchen. Conversely, _abstract_ relationships are highly adaptable, enabling us to elucidate relationships between an agent and an object (for instance, a cup belonging to Mary) or the affordance relationship between two objects, such as a key being paired with a door.
**Input process** The inputs for \(G_{s}\) consist of an RGBD-scan set \(I\), a user language input \(L_{u}\), and a user position input \(P_{u}\). The \(L_{u}\) input assigns names to agents and regions and provides descriptions of abstract relationships. \(P_{u}\) provides the locations for the agent and region (not including object position), and it can be autonomously generated using existing algorithms like DSGS [6]. Since this process is not the focus of our study, we assume \(P_{u}\) is pre-determined in this paper. The input \(I\) is an RGBD scan of the entire scene, which is fed into the **O**pen-**V**ocabulary **3D** Instance **R**etrieval (OVIR-3D) [19] system, a fusion system operating at the instance level. OVIR-3D returns a set of objects, each denoted by a position \(p^{i}\) and a Detic feature \(f^{i}_{Detic}\).
\(G_{q}\) accepts a language query \(L_{q}\) as its input. An exemplary query, as depicted in Figure 1, is "I want to find Tom's bottle in laboratory". To parse this language, we utilize a large language model (LLM), such as GPT-3.5 or LLAMA. Utilizing a meticulously engineered prompt (refer to Appendix A for more details), we can interpret different entities within the query.
**Feature encoding** As specified in Eq. 1, the calculation of the similarity between nodes and edges relies heavily on their features. This operation of computing features is termed the feature encoding process.
Instead of using a unified encoder as in previous works [25; 26], we choose different encoders for various node and relationship types. Since the inputs of \(G_{s}\) and \(G_{q}\) differ, the selection of encoders for each graph also varies. Object features in \(G_{s}\) are generated by deploying OVIR-3D to the 3D scan of the scene. These features are Detic features. Meanwhile, objects in \(G_{s}\) are encoded from their names \(l\) (parsed from LLM during the input process) using the CLIP-text encoder. Because the Detic feature is directly trained to align with the CLIP-text feature, we can compute distances for object nodes between \(G_{s}\) and \(G_{q}\) using Eq.1. For agent and region nodes in \(G_{s}\), they are identified by their names in the user input, \(L_{u}\). Whereas in \(G_{q}\), agent and region nodes are also specified by names \(l\). For both of them, we employ Sentence-BERT [27] to encode the language features. As for relationships, we differentiate between spatial relationships and abstract relationships. In \(G_{s}\), the input for spatial relationships comes from the positions of the corresponding nodes. In contrast, in \(G_{q}\), the input for spatial relationships comes from language descriptions \(l\) parsed from \(L_{q}\) by LLM. Given the absence of a standardized approach for spatial-language encoding, we trained a spatial encoder for this purpose (see Appendix B). Finally, for abstract relationship features in \(G_{s}\), the input is language \(l\) from user input, \(L_{u}\). In \(G_{q}\), the input is also textual. We use GloVe to encode these texts on both sides.
Multiple distinct encoders are utilized during the feature encoding step. Different encoders have varied emphases, and using a combination can improve the robustness of OVSG. For instance, GloVe is trained to be sensitive to nuances like sentiment, while Sentence-BERT is not. Therefore, we use GloVe for abstract relationships to better distinguish relationships such as "like" and "dislike". Conversely, while GloVe does have a predefined vocabulary list, Sentence-BERT does not. Hence, for encoding the names of agents and regions, we prefer Sentence-BERT. Moreover, OVSG is designed with a modularized structure, allowing future developers to easily introduce new types and feature encoders into OVSG.
### Sub-graph Matching
Subsequent to the phases of input processing and feature encoding, two OVSG representations are constructed: one for the scene and another for the query, denoted by \(G_{s}\) and \(G_{q}\) respectively. The problem of grounding \(L_{q}\) within the scene \(S\) can be converted now effectively translates to locating \(G_{q}\) within \(G_{s}\). Generally, the subgraph-matching problem is NP-hard, prompting us to make several assumptions to simplify this problem. In this study, we assume that our \(G_{q}\) is a star graph, signifying that a central node exists and all other nodes are exclusively linked to this central node. (If \(G_{q}\) is not a star-graph, we will extract a sub-star-graph from it, and use this sub-graph as our query graph.)
The pipeline of sub-graph matching is illustrated on the right side of Figure 1. This a two-step procedure: candidate proposal and re-ranking. Let's denote the center of \(G_{q}\) as \(v^{c}_{q}\). Initially, we traverse all nodes, \(v^{i}_{s}\), in \(V_{s}\), ranking them based on their distance to \(v^{c}_{q}\), computed with Eq. 1. Subsequently, we extract the local subgraph, \(G^{i}_{s}\), surrounding each candidate, \(v^{i}_{s}\). These extracted subgraphs serve as our candidate subgraphs. In the second phase, we re-rank these candidates using a graph-similarity metric, \(\tau(G_{q},G^{i}_{s})\). To evaluate graph similarity, we examine three distinct methodologies: Likelihood, Jaccard coefficient, and Szymkiewicz-Simpson index.
**Likelihood** Assuming the features of nodes and edges all originate from a normal distribution, we can define the likelihood of nodes and edges being identical as follows: \(L(v^{i},v^{j})=exp(\frac{-\text{dist}(f^{i},f^{j})}{\sigma_{v}})\) for nodes and \(L(e^{i,j},e^{u,v})=exp(\frac{-\text{dist}(f^{i,j},f^{u,v})}{\sigma_{i}})\) for edges. Here \(\sigma_{v}\) and \(\sigma_{e}\) are balancing parameters. From this, we can derive the graph-level likelihood \(\tau_{L}\) as:
\[\tau_{L}(G_{q},G^{i}_{s})=L(v^{c}_{q},v^{c}_{s^{i}})\times\prod_{k\in|V_{q}|} \underset{j\in|V_{s^{i}}|}{\text{argmax}}\left[L(v^{k}_{q},v^{j})\cdot L(e^{c,k}_{q},e^{c,j}_{s^{i}})\right] \tag{2}\]
where \(v_{s^{i}}^{c}\) is the center node of \(G_{s}^{i}\). The insight behind this formula is to iterate over all possible node-level associations and select the one that maximizes the overall likelihood that \(G_{q}\) matches with \(G_{s}^{i}\). Noticeably, we use \(\sigma_{v}\) and \(\sigma_{e}\) to balance the node-wise and edge-wise likelihood. In practice, we use \(\sigma_{v}=1.0\) and \(\sigma_{e}=2.0\) to make the matching more sensitive to node-level semantics.
**Jaccard-coefficient & Szymkiewicz-Simpson index** In addition to the likelihood index, we also consider other widely used graph similarity indices such as the Jaccard and Szymkiewicz-Simpson indices. Both indices measure the similarity between two sets.
We adopt a similar method as in [7], generating a set \(S(G)\) for each graph \(G\) by combining nodes and edges, such that \(|S(G)|=|V|+|E|\). The Jaccard coefficient \(\tau_{J}\) and Szymkiewicz-Simpson index \(\tau_{S}\) are then defined as follows:
\[\tau_{J}(G_{q},G_{s}^{i})=\frac{|S(G_{q})\cap S(G_{s}^{i})|}{|S(G_{q})|+|S(G_ {s}^{i})|-|S(G_{q})\cap S(G_{s}^{i})|},\tau_{S}(G_{q},G_{s}^{i})=\frac{|S(G_{q })\cap S(G_{s}^{i})|}{min(|S(G_{q})|,|S(G_{s}^{i})|)} \tag{3}\]
Given that we already know \(|S(G_{q})|\) and \(|G_{s}^{i}|\), we simply need to compute \(|S(G_{q})\cap S(G_{s}^{i})|\), which consists of nodes or edges that belong to both \(G_{q}\) and \(G_{s}^{i}\). We can define this union by applying distance thresholds \(\tau_{v}\) and \(\tau_{e}\) for node and edge separately:
\[S(G_{q})\cap S(G_{s}^{i})=\{(v_{q}^{k},v_{s^{l}}^{\pi(k)})|dist(f_{q}^{k},f_{s^ {l}}^{\pi(k)})<\epsilon_{v}\}+\{(e_{q}^{k},e_{s^{l}}^{\pi(k)})|dist(e_{q}^{k}, e_{s^{l}}^{\pi(k)})<\epsilon_{e}\} \tag{4}\]
Here, \(\pi\) is a data association between \(G_{q}\) and \(G_{s}^{i}\), where \(\pi(k)=\text{argmin}_{\pi(k)}(dist(s_{k},s_{\pi(k)}))\). \(\epsilon_{v}\) and \(\epsilon_{e}\) are threshold parameters. The differences between \(\tau_{L}\), \(\tau_{J}\), and \(\tau_{S}\) can be understood as follows: \(\tau_{L}\) describes the maximum likelihood among all possible matches between \(G_{q}\) and \(G_{s}^{i}\). Both \(\tau_{J}\) and \(\tau_{S}\) use thresholds \(\epsilon_{v}\), \(\epsilon_{e}\) to convert the node and edge matches to binary, and they measure the overall match rate with different normalization.
## 4 System Evaluation
Our OVSG framework experiments addressed these research questions: 1) How does our context-aware grounding method compare to prevailing approaches, including the SOTA semantic method and the recent work in the landscape of 3D semantic/spatial mapping, ConceptFusion [29] 2) How well does OVSG handle open-vocabulary queries? 3) What differences do our graph similarity-based methods show? 4) How well does OVSG perform inside a real robot environment?
These questions are imperative as they not only test the robustness of the OVSG framework but also its comparative efficacy against notable methods like ConceptFusion in the ability to handle the intricacies of context-aware open-vocabulary queries.
### Queries, Dataset, Metrics & Baselines
**Queries**: We have two categories of queries for evaluation:
* **Object-only Queries** These queries are devoid of any specific agent or region preference. They are less generic and assess the system's grounding ability based purely on objects. An example might be: "Can you identify a monitor with a keyboard positioned behind it?"
* **Whole Queries** These queries inherently contain a mix of agent, region, and object preferences. For instance, these queries may include agents and other different entity types. An example would be: "Locate the shower jet that Nami loves, with a mirror to its right."
**ScanNet**: We employed ScanNet's validation set (312 scenes) for evaluation. Since ScanNet only includes objects, we emulated agents, induced their abstract relationships to objects, captured spatial
relationships between objects, and extracted object features via OVIR-3D before integrating the dataset into our evaluation pipeline. Resource limitations prevented manual labeling of scenes; hence, we synthetically generated 62000 queries (approx.) for evaluation (details in Appendix E.1).
**DOVE-G** We created DOVE-G to support open-vocabulary queries within scenes using natural language. Each scene includes manually labeled ground truth and 50 original natural language queries (\(L_{q}\)). Using LLMs, we expanded this by generating four extra sets of queries, totaling 250 queries per scene and 4000 overall to test OVSG's capabilities with diverse language expressions.
**ICL-NUIM** To thoroughly compare our method, notably with ConceptFusion, we utilized the ICL-NUIM dataset[8]. We have created 359 natural language queries for the 'Whole Query' category and 190 natural language queries for the 'Object-only Query'. It should be noted that our approach is not merely a superficial addition of another dataset; instead, we have adapted and generated natural language queries for each scene within ICL-NUIM, emulating our methodology with DOVE-G. To adapt it to our framework, we performed similar preprocessing steps as with DOVE-G, importantly manually labeled ground-truth annotations and leveraging OVIR-3D for feature extraction. Using this dataset, we demonstrate the superiority of our proposed method over ConceptFusion, especially concerning complex natural language queries that hinge on multiple relationships as context.
**Evaluation Metrics**
For each query, we evaluated the system's performance using three distinct metrics:
* **IoUBB** For each query, this measures the 3D bounding box IoU between the ground truth and the top-k candidates yielded by our system.
* **IoU3D** For each query, this measures the IoU between the point cloud indices of the ground truth instance and the predicted instance.
* **Grounding Success Rate** For each scene, this measures the fraction of queries where the system's predictions accurately match the ground truth given that the overlap is significant(**IoUBB**\(\geq\) 0.5 or **IoU3D**\(>\) 0.5). The overlap threshold can be adjusted to alter the strictness of the success criteria.
We reported the Top1 and Top3 Grounding Success Rates and average IoU scores for each scene, reflecting the performance of our system in the Top-k results returned for each query.
**Baselines** We assessed five methods in our study. The SOTA open-vocabulary grounding method, OVIR-3D, is our primary baseline as it will not leverage any inter-notion relations, providing a comparative measure for the effectiveness of contextual information integration in the other methods. Unlike OVIR-3D, ConceptFusion integrates spatial relationships implicitly. The other three methods, namely OVSG-J, OVSG-S, and OVSG-L (for Jaccard coefficient, Szymkiewicz-Simpson index, and Likelihood, respectively) implement Context-Aware Entity Grounding using different sub-graph matching techniques, as detailed in Section 3.4.
### Performance
**ScanNet** Table 1 averages results across 312 ScanNet scenes. Contextual data greatly improved entity grounding, with graph similarity variants (OVSG-S, OVSG-L) surpassing OVIR-3D, especially in scenes with repetitive entities like bookstores. More details are in Appendix E.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Query Type} & \multirow{2}{*}{\# Queries} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{Avg. Top1 Scores per Query} & \multicolumn{3}{c}{Avg. Top3 scores per Query} \\ \cline{4-9} & & OVIR-3D & OVSG-J & OVSG-S & OVSG-L (Ours) & OVIR-3D & OVSG-J & OVSG-S & OVSG-L (Ours) \\ \hline \multirow{4}{*}{Object-only} & \multirow{4}{*}{18,683} & **IoUn** & 0.38 & 0.15 & 0.4 & **0.81** & 0.52 & 0.4 & 0.52 & **0.85** \\ & & **IoU3D** & 0.38 & 0.22 & 0.44 & **0.55** & - & - & - \\ & & Grounding Success Rate**nm** & 38.52 & 15.29 & 40.99 & **52.18** & 52.95 & 41.25 & 53.6 & **56.25** \\ & & & Grounding Success Rate**nm** & 45.13 & 17.22 & 47.79 & **60.35** & - & - & - \\ \hline \multirow{4}{*}{Whole} & \multirow{4}{*}{20,173} & **IoUn** & 0.37 & 0.22 & 0.44 & **0.55** & 0.53 & 0.45 & 0.55 & **0.57** \\ & & **IoU3D** & 0.39 & 0.16 & 0.41 & **0.53** & - & - & - \\ & & Grounding Success Rate**nm** & 38.56 & 24.33 & 47.77 & **58.85** & 56.28 & 47.84 & 59.87 & **61.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of OVSG on ScanNet
**DOVE-G** Table 2 averages performance over DOVE-G scenes for five query sets. OVSG-L consistently led, further detailed in Appendix F.3. While OVSG-J and OVSG-S were competitive in some scenes, OVSG-L was generally superior. OVIR-3D shined in the Top3 category, especially since DOVE-G scenes had fewer repetitive entities. Additional insights in Appendix F.
**ICL-NUIM** Table 3 shows ICL-NUIM results with OVSG-L outperforming other methods, especially in the 'Whole Query' segment, contrasting with ScanNet and DOVE-G performances. ConceptFusion's performance was inconsistent across ICL-NUIM scenes (see Appendix G.3), with notable success in one scene (highlighted in orange in Table 3). Simplified queries improved ConceptFusion's results, as depicted in the 'ConceptFusion (w/o rel)' column. Due to its point-level fusion approach, we evaluated different point thresholds and found optimal results at the Top 1500 points. Metrics like \(\mathbf{IoU_{BB}}\) are not applicable for ConceptFusion. Further details on ICL-NUIM are in Appendix G. Despite ConceptFusion's strategy to avoid motion-blurred ScanNet scenes [29], its efficacy was still suboptimal in certain clear scenes.
Apart from these results, we also provide vocabulary analysis on OVSG as well as two robot experiments. Due to space limits, we put them to Appendices C and D.
## 5 Conclusion & Limitation
Although we have demonstrated the effectiveness of the proposed OVSG in a set of experiments, there still remains three major limitations for our current implementation. First, OVSG heavily relies on an open-vocabulary fusion system like OVIR-3D, which may lead to missed queries if the system fails to identify an instance. Second, the current language processing system's strong dependence on LLMs exposes it to inaccuracies, as any failure in parsing the query language may yield incorrect output. Third, as discussed in Section 3.4, calculating graph likelihood by multiplying nodes and edges likelihoods may not be optimal, as likelihoods from distinct types might carry varying levels of importance and distribution. Accurately balancing these factors remains a challenge for future research, as our efforts with a GNN have not yielded satisfactory results.
Despite the aforementioned areas for improvement, we observe that OVSG significantly improves context-aware entity grounding compared to existing open-vocabulary semantic methods. Since OVSG only requires natural language as the query input, we believe it holds great potential for seamless integration into numerous existing robotics systems.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Query Type} & \multirow{2}{*}{\# Queries} & \multirow{2}{*}{Metric} & \multicolumn{6}{c}{Avg. Top1 Scores per Query} & \multicolumn{6}{c}{Avg. Top1 values per Query} \\ \cline{4-11} & & & OVIR-3D & OVSG-J & OVSG-S & OVSG-L (Ours) & OVIR-3D & OVSG-J & OVSG-J & OVSG-S & OVSG-L (Ours) \\ \hline \multirow{4}{*}{Object-only} & \multirow{4}{*}{320} & \multirow{4}{*}{\begin{tabular}{c} **IoUmb** \\ **IoU3D** \\ Grounding Sockets Range \\ \end{tabular} } & 0.37 & 0.14 & 0.39 & **0.49** & 0.57 & 0.36 & **0.56** & **0.56** & **0.56** \\ & & & 0.41 & 0.43 & **0.54** & - & - & - & - \\ & & Grounding Sockets Range & 36.56 & 13.75 & 39.60 & **48.44** & 58.12 & 34.06 & 56.25 & **56.56** \\ & & Grounding Sockets Range & 49.69 & 18.44 & 53.13 & **67.82** & - & - & - & - \\ \hline \multirow{4}{*}{Whole} & \multirow{4}{*}{400} & \multirow{4}{*}{
\begin{tabular}{c} **IoUmb** \\ **IoU3D** \\ Grounding Sockets Range \\ \end{tabular} } & 0.35 & 0.2 & 0.41 & **0.51** & 0.55 & 0.41 & 0.55 & **0.56** \\ & & Grounding Sockets Range & 0.37 & 0.21 & 0.43 & **0.55** & - & - & - & - \\ & & Grounding Sockets Range & 35.5 & 23.0 & 47.55 & **54.25** & 56.0 & 41.0 & 56.75 & **57.0** \\ & & Grounding Sockets Range & 41.5 & 25.25 & 50.25 & **65.75** & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of OVSG on DOVE-G
## Acknowledgments
This work is supported by NSF awards 1846043 and 2132972.
|
2309.15798 | Node-Aligned Graph-to-Graph (NAG2G): Elevating Template-Free Deep
Learning Approaches in Single-Step Retrosynthesis | Single-step retrosynthesis (SSR) in organic chemistry is increasingly
benefiting from deep learning (DL) techniques in computer-aided synthesis
design. While template-free DL models are flexible and promising for
retrosynthesis prediction, they often ignore vital 2D molecular information and
struggle with atom alignment for node generation, resulting in lower
performance compared to the template-based and semi-template-based methods. To
address these issues, we introduce Node-Aligned Graph-to-Graph (NAG2G), a
transformer-based template-free DL model. NAG2G combines 2D molecular graphs
and 3D conformations to retain comprehensive molecular details and incorporates
product-reactant atom mapping through node alignment which determines the order
of the node-by-node graph outputs process in an auto-regressive manner. Through
rigorous benchmarking and detailed case studies, we have demonstrated that
NAG2G stands out with its remarkable predictive accuracy on the expansive
datasets of USPTO-50k and USPTO-FULL. Moreover, the model's practical utility
is underscored by its successful prediction of synthesis pathways for multiple
drug candidate molecules. This not only proves NAG2G's robustness but also its
potential to revolutionize the prediction of complex chemical synthesis
processes for future synthetic route design tasks. | Lin Yao, Wentao Guo, Zhen Wang, Shang Xiang, Wentan Liu, Guolin Ke | 2023-09-27T17:16:32Z | http://arxiv.org/abs/2309.15798v2 | # Node-Aligned Graph-to-Graph Generation for
###### Abstract
Single-step retrosynthesis is a crucial task in organic chemistry and drug design, requiring the identification of required reactants to synthesize a specific compound. With the advent of computer-aided synthesis planning, there is growing interest in using machine learning techniques to facilitate the process. Existing template-free machine learning-based models typically utilize transformer structures and represent molecules as 1D sequences. However, these methods often face challenges in fully leveraging the extensive topological information of the molecule and aligning atoms between the production and reactants, leading to results that are not as competitive as those of semi-template models. Our proposed method, Node-Aligned Graph-to-Graph (NAG2G), also serves as a transformer-based template-free model but utilizes 2D molecular graphs and 3D conformation information. Furthermore, our approach simplifies the incorporation of production-reactant atom mapping alignment by leveraging node alignment to determine a specific order for node generation and generating molecular graphs in an auto-regressive manner node-by-node. This method ensures that the node generation order coincides with the node order in the input graph, overcoming the difficulty of determining a specific node generation order in an auto-regressive manner. Our extensive benchmarking results demonstrate that the proposed NAG2G can outperform the previous state-of-the-art baselines in various metrics.
## 1 Introduction
The single-step retrosynthesis (SSR) [1] is an essential operation in organic chemistry and _de novo_ drug design, involving the reverse synthesis of a target molecule in a single step. This process requires tracing back from the target molecule to determine the reactants needed for its synthesis. SSR serves as the basis for multi-step synthesis planning, which aims to identify a complete synthesis route in which
the target molecule can be synthesized through a series of one-step reactions. Retrosynthesis demands a thorough understanding of organic chemistry principles and reaction mechanisms. Nevertheless, the advent of computer-aided synthesis planning has spurred increasing interest in employing machine learning techniques to expedite the retrosynthesis process, particularly for the template-free method.
Early machine learning-based SSR models [2; 3; 4; 5; 6] predominantly employ 1D sequences, such as SMILES (Simplified Molecular Input Line Entry System), for molecular representation. Consequently, existing models from Natural Language Processing (NLP), including RNN [7] and Transformer [8], can be readily utilized. Despite its simplicity, the 1D sequence-based model exhibits several limitations. Firstly, as indicated in numerous previous studies [9; 10; 11], the sequence disregards the extensive topological information depicted in molecular graphs. Secondly, the generation of valid molecular sequences is non-trivial due to the intricate rules involved. Lastly, incorporating production-reactant atom mapping alignment information, which significantly contributes to performance, poses a considerable challenge.
Several recent studies have proposed the adaptation of 2D molecular graphs for molecular representation in SSR models [12; 13; 14; 15]. These graph-based models offer improved encoding of molecules, and facilitate the integration of production-reactant atom mapping alignment information. However, generating a graph remains a significant challenge. Prior approaches have employed a repeated graph edit strategy [12] for graph generation, wherein a graph is iteratively modified by predicted edit actions (such as adding, removing, or updating nodes or edges) until no further alterations are required. This method necessitates the advance planning of edit action routes, and the iterative process of predicting actions based on modified graphs results in a significant computational burden.
To improve the efficiency of graph generation, we propose an auto-regressive approach that generates graphs node-by-node, drawing inspiration from language generation techniques. However, unlike language, graphs lack a natural one-dimensional sequence order, which poses a challenge in determining a specific order for node generation. In the context of SSR, this issue can be addressed by utilizing node alignment. Given the small difference between the input (the production molecule's graph) and the output (the reactants' graphs), we enforce the node generation order to match the node order in the input graph, as illustrated in Figure 1.
Building on this concept, we introduce a novel graph-based SSR template-free model, based on Transformer encoder-decoder architecture [8], denoted as Node-Aligned Graph-to-Graph (NAG2G). In NAG2G, the production molecule's graph initially serves as input for the encoder, and subsequently, the decoder generates reactant molecule graphs. For each node, the model generates the atom type, associated hydrogens and charges, and edges connecting to existing nodes. This generation process proceeds node-by-node in an auto-regressive manner, employing the aforementioned node alignment strategy to determine the node generation order. Besides graph structure information, to encode the sequential generation order, the 1D positional encodings are incorporated into the Transformer encoder and decoder. Moreover, data augmentation, like shuffling the node order, is employed to enhance the overall performance. Finally, we propose an efficient method for integrating dynamic graph-level features into self-attention during graph generation.
We conducted experiments using two widely recognized datasets, USPTO-50k [16] and USPTO-Full [17; 4]. The experimental results unequivocally illustrate that the proposed NAG2G substantially surpasses the performance of all prior baseline models. Additionally, we carried out an ablation study to scrutinize the impact of individual components within the proposed methodology. The findings offer compelling evidence of the effectiveness of the proposed NAG2G.
## 2 Related Work
The single-step retrosynthesis (SSR) [1] is a crucial process in organic chemistry that involves breaking down a target molecule into simpler precursor molecules.
There are several different approaches to modeling retrosynthesis, which can be broadly classified into three categories: template-based, semi-template-based, and template-free.
### Template-free Methods
Template-free methods offer more flexibility than template-based and semi-template methods because they do not rely on pre-defined reaction templates and synthons. Instead, they use machine learning
models to directly infer reactants from given productions, allowing for greater adaptability as the model can learn from any available data without being restricted by pre-existing templates. This approach is well-suited for predicting retrosynthesis for complex or novel chemical structures that may not be compatible with existing templates. However, the template-free approach can also present challenges, as models may struggle to learn the transformation rules between productions and reactants, as well as the validness of molecule representations, leading to the generation of invalid reactants. Additionally, predicting the correct reactants in situations where there are multiple possible solutions may result in duplicate, mixed, or incorrect outcomes. Therefore, sophisticated machine learning algorithms and data representations are needed to address these problems.
In template-free methods, a popular approach is to use SMILES strings as a compact and standardized way of representing chemical structures. Models such as SCROP [2], Tied Transformer [3], Aug. Transformer [4], RetroDCVAE [5], and Retroformer [6] all consider retrosynthesis as a SMILES sequence-to-sequence prediction problem and use Transformer architecture to address it. Despite the progress made in recent years, SMILES representations suffer from a lack of explicit topological information, and the utilization of production-reactant atom mapping alignments remains difficult. To address these limitations, Graph2SMILES [9] leverages a Graph Neural Network (GNN) to extract topological features, while GET [10] combines graph and SMILES encoders. Additionally, GTA [11] incorporates topological information into attention bias. Notably, models such as GTA [11] and Retroformer [6] aim to fully exploit production-reactant atom alignments to enhance performance. Nevertheless, the complete utilization of topological information and production-reactant atom mapping alignments remains a formidable challenge.
Alternative template-free methods that employ graph-based representations include MEGAN [12] and G2GT [18]. MEGAN [12] utilizes graph edit strategies to generate graphs by iteratively modifying the graph until no further changes are necessary. On the other hand, G2GT [18] aimed to generate graphs in an autoregressive manner, generating nodes sequentially. However, G2GT neglected to consider atom mapping alignment, resulting in suboptimal performance due to the challenges associated with determining a specific order for node generation. Furthermore, the results of G2GT are not directly comparable to those of other methods, as several additional techniques, such as self-training, were employed to achieve high benchmark scores.
Figure 1: The illustration of the node-aligned graph-to-graph generation. Given the high similarity between the input (the production molecule’s graph) and the output (the reactants’ graphs) graphs, we ensure that the node generation order corresponds to the node order in the input graph. This approach effectively addresses the challenge of determining a specific order for nodes in the auto-regressive graph generation process.
### Template-based and Semi-template-based Methods
For template-based methods, the first step is to prepare a template library in advance. These templates are typically based on known chemical reactions and can be either manually curated or generated automatically from reaction databases. Then, a model is trained to learn which templates in the library can be used for synthesizing the given productions. This dependence on the template library poses some challenges. Firstly, the library may not contain all possible reactions. Secondly, the relationship between the productions and templates may not be easily learned, especially when dealing with complex productions. Therefore, they may struggle with complex or novel chemical structures that do not fit well with existing templates. Prior methodologies, exemplified by RetroSim [19] and NeuralSym [20], have employed conventional molecular similarity metrics, such as fingerprints and Tanimoto similarity, to match templates and productions. Nevertheless, contemporary approaches, including GLN [17] and RetroComposer [21], have surpassed their predecessors due to their utilization of graph neural networks (GNN) as a central framework for more efficient data representations.
Since inferring reactants directly from productions is challenging, semi-template-based methods divide the retrosynthesis prediction process into two simpler stages. The first stage involves identifying synthons by detecting reactive bonds or atoms in the production. The synthons are not usually included in the datasets and need to be pre-calculated before training. The second stage involves completing the synthons into reactants using either leaving groups selection [22], SMILES generation [23; 24] or graph generation [13; 14; 15]. Error propagation might be easier in a two-stage model compared to end-to-end models, which means that errors in the first stage may still affect the results of the second stage. However, through the implementation of a two-stage methodology, researchers can attain supplementary information pertaining to synthons, and broaden the searching scope allowing for the exploration of various possibilities from production to synthons and from synthons to reactants, ultimately leading to superior overall performance compared to template-free models. Models including G2G [13], RetroXpert [23], RetroPrime [24], GraphRetro [22] and SemiRetro [25], fall in this categories. In the second stage, promising models such as G2Retro [14] and MARS [15] employ graph edit strategies [12] for graph generation. It is worth noting that while these graph edit strategies have demonstrated their effectiveness in semi-template-based approaches, they generally do not exhibit superiority in template-free methods [15].
## 3 Approach
### Model Architecture
As shown in Figure 2, the NAG2G constitutes a Transformer-based encoder-decoder architecture, wherein the encoder's purpose is to learn the representation of target molecules. Several potent models, such as Graphormer [26] and Uni-Mol [27], already exist for effectively learning molecular representations. As our focus is not on proposing a new molecular representation model, we directly employ an existing model as the encoder. Specifically, we adopt the model backbone from [28] as the encoder for NAG2G, which is capable of learning molecular representations based on both 2D graph and 3D conformation. Furthermore, to encode the node order, an 1D positional encoding is additionally used. Formally, we denote the process of the encoder as:
\[\mathbf{O}^{\text{enc}}=f_{\text{enc}}(\mathbf{X},\mathbf{P}^{\text{enc}},\mathbf{E},\mathbf{R}; \mathbf{\theta}^{\text{enc}}), \tag{1}\]
where \(\mathbf{X}\) is the atom feature, \(\mathbf{P}^{\text{enc}}\) is 1D positional encoding which is additionally added to atom embeddings, \(\mathbf{E}\) is the edge feature of the 2D graph, \(\mathbf{R}\) is the atom coordinate of the 3D conformation, \(\mathbf{\theta}^{\text{enc}}\) is the learnable parameters of the encoder, and \(\mathbf{O}^{\text{enc}}\) is the learned representation of the encoder.
The primary function of the decoder is to generate the graph node-by-node through an auto-regressive approach. At the \(i\)-th time step, the decoder receives three inputs:
1) The output from the encoder. In line with most encoder-decoder Transformer models, the encoder's output serves as the Key and Value in the cross-attention layer between the encoder and decoder. This process enables a more effective information exchange between the encoding and decoding stages, ultimately improving the overall performance of the model.
2) The decoder's outputs from previous time steps, ranging from \(1\) to \(i-1\). This mirrors the approach of most auto-regressive generative models, which utilize outputs from earlier time steps as inputs. Additionally, the 1D positional encoding is added to the inputs as a standard practice in the majority of auto-regressive models. The inclusion of this encoding is crucial for NAG2G, since it facilitates the alignment of the atom order between the encoder inputs and the decoder outputs. During training,
the teacher-forcing technique is employed to enhance efficiency and stability, under the assumption that the outputs from previous time steps are 100% accurate.
3) The graph-level features, such as degrees and shortest paths, extracted from the existing predicted outcomes. Although these graph-level features hold the potential to enhance the generative performance of the decoder, incorporating them directly into the model presents an efficiency challenge, as the graph features vary across time steps. To address this issue, we propose an efficient method for integrating these graph-level features, with details provided in Section 3.3.
Employing the given inputs, the decoder generates a new node at the \(i\)-th time step, which comprises its atomic type and associated formal charge, the number of connected hydrogen atoms, and its edges linked to prior nodes. This procedure is inherently auto-regressive, meaning that the information for each node is produced sequentially. Formally, we can denote this process as
\[\begin{split} t_{i}&=f_{\text{dec}}(\mathbf{P}_{1:i}^ {\text{dec}},\mathbf{N}_{1:i-1},\mathbf{G}_{1:i-1},\mathbf{O}^{\text{enc}};\mathbf{\theta}^{ \text{dec}}),\\ c_{i}&=f_{\text{dec}}(t_{i},\mathbf{P}_{1:i}^{\text{ dec}},\mathbf{N}_{1:i-1},\mathbf{G}_{1:i-1},\mathbf{O}^{\text{enc}};\mathbf{\theta}^{\text{dec}}), \\ h_{i}&=f_{\text{dec}}(c_{i},t_{i},\mathbf{P}_{1:i}^{ \text{dec}},\mathbf{N}_{1:i-1},\mathbf{G}_{1:i-1},\mathbf{O}^{\text{enc}};\mathbf{\theta}^{ \text{dec}}),\\ e_{i,1}&=f_{\text{dec}}(h_{i},c_{i},t_{i},\mathbf{P}_{1: i}^{\text{dec}},\mathbf{N}_{1:i-1},\mathbf{G}_{1:i-1},\mathbf{O}^{\text{enc}};\mathbf{\theta}^{ \text{dec}}),\\...&\\ e_{i,k}&=f_{\text{dec}}(e_{i,k-1},...,e_{i,1},h_ {i},c_{i},t_{i},\mathbf{P}_{1:i}^{\text{dec}},\mathbf{N}_{1:i-1},\mathbf{G}_{1:i-1},\mathbf{O} ^{\text{enc}};\mathbf{\theta}^{\text{dec}}),\end{split} \tag{2}\]
where \(\mathbf{N}_{1:i-1}\) represents the set of nodes generated from the previous \(i-1\) time steps, \(\mathbf{P}_{1:i}^{\text{dec}}\) denotes the 1D positional encoding of the current \(i\) nodes, \(\mathbf{G}_{1:i-1}\) represents the graph feature extracted from previous outputs, and \(\mathbf{\theta}^{\text{dec}}\) denotes the learnable parameters of the decoder. The atomic type, associated formal charge, and the number of connected hydrogen atoms for the \(i\)-th node are represented by \(t_{i}\), \(c_{i}\), and \(h_{i}\), respectively. The \(d\)-th edge, denoted by \(e_{i,d}=(j,b)\), connects the \(i\)-th node and the \(j\)-th node with the bond type \(b\). In this context, only the bonds within the molecules are considered as edges. Moreover, to establish a specific generative order for edges, the edges connected to the newly generated nodes (with larger 1D positions) are produced first. To minimize the number
of generative steps, the generation of \(c_{i}\) (and \(h_{i}\)) will be omitted if the node has zero formal charges (or no connected hydrogen atoms).
### Node Alignment and Data Augmentation
The Transformer-based model exhibits permutation invariance for input nodes, necessitating the incorporation of 1D positional encodings to distinguish the sequential positions of these nodes. To specify an order, we employ the atom order in the SMILES sequence corresponding to a given production molecule as the 1D position in the encoder. Figure 4 shows the process of node alignment. The atom order of the decoder generations can be bifurcated into two parts. The first part pertains to the atoms that exist in both the productions and reactants, which are arranged in the same order as the production's SMILES. The second part comprises atoms that are present only in the reactants, which are placed at the end of the generations. By align the reactants' SMILES with that of the production based on RDKit [29], the atoms that are exclusively present in the reactants, are selected following the aligned reactants' SMILES orders and appended to the end of the decoder generations. Furthermore, during training, supplementary data augmentation techniques are applied to enhance the model's robustness. Specifically, as shown in Figure 4, RDKit is employed to randomly permute the production's SMILES, and the atom order in the permuted SMILES is used for the encoder. The reactants' SMILES are then aligned with the permuted SMILES. This data augmentation approach allows the model to be more resilient to variations in generation orders.
### Efficient Time-Varying Graph-Level Features
With the implementation of teacher-forcing technology during training, data at various time steps are processed in parallel to enhance efficiency. The interaction between the current time step and previous time steps is addressed within the decoder's attention layer. To prevent potential leakage from future time steps, the attention matrix is masked using an upper triangular matrix. Formally, we denote this process as:
\[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K }^{T}}{\sqrt{d_{h}}}+\mathbf{M}\right)\mathbf{V}, \tag{3}\]
where \(\mathbf{Q},\mathbf{K},\mathbf{V}\in\mathbb{R}^{n\times d_{h}}\) represent the query, key, and value matrices, respectively, and \(d_{h}\) is the dimension of one head, \(n\) is the number of time steps. \(\mathbf{M}\) is an additive mask matrix that ensures
Figure 4: An example to illustrate the process of data augmentation and production-reactant alignment. The red numbers indicate the atoms present in both the production and reactants, while the blue ones represent the atoms found only in the reactants.
that only the relevant information from the current and previous time steps is considered during the attention computation. For the sake of simplicity, we present the calculation for only one head. The multi-head attention process executes the above single-head calculation in parallel. During the calculation of one head, the computational complexity is \(\mathcal{O}(n\times n\times d_{h})\), and the peak memory consumption is \(\mathcal{O}(n\times n)\).
As previously mentioned, graph-level features vary across time steps, and their direct utilization poses an efficiency challenge during model training. Specifically, to maintain the time-varying graph features, a matrix with shape \(n\times n\times d_{h}\) is required1. These time-varying graph features are then employed as additive positional encodings. As a result, the attention layer can be represented as:
Footnote 1: Here, we consider node-wise graph features, such as node degrees. Pair-wise graph features, such as the shortest path, will consume significantly more memory.
\[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V},\mathbf{D})=\text{softmax}\left(\frac{\mathbf{ Q}(\mathbf{K}+\mathbf{D})^{T}}{\sqrt{d_{h}}}+\mathbf{M}\right)(\mathbf{V}+\mathbf{D}), \tag{4}\]
where \(\mathbf{D}\in\mathbb{R}^{n\times n\times d_{h}}\) denotes the time-varying graph features, and the shape of \(\mathbf{Q},\mathbf{K},\mathbf{V}\) is reshaped to \(n\times 1\times d_{h}\) for broadcasting. In this process, although the computational complexity remains unchanged, the peak memory consumption increases to \(\mathcal{O}(n\times n\times d_{h})\). Considering that \(d_{h}\) is typical \(32\) or even larger, this significant increase in peak memory consumption is considered impractical for real-world applications.
To reduce the cost, we can first remove \(\mathbf{D}\) added to \(\mathbf{V}\). Then, observe that \(\mathbf{Q}(\mathbf{K}+\mathbf{D})^{T}=\mathbf{Q}\mathbf{K}^{T}+\mathbf{Q}\mathbf{D}^{T}\), where the cost is bottlenecked at \(\mathbf{Q}\mathbf{D}^{T}\). Thus, we can reduce the size of the last dimension for this computation. Combining these observations, we obtain:
\[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V},\mathbf{D_{2}})=\text{softmax}\left(\frac{ \mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{h}}}+\frac{\mathbf{Q}\mathbf{U}\mathbf{D}_{2}^{T}}{\sqrt{d_{h2 }}}+\mathbf{M}\right)\mathbf{V}, \tag{5}\]
where \(\mathbf{U}\in\mathbb{R}^{d_{h}\times d_{h2}}\) is employed to reduce the dimension of \(\mathbf{Q}\), and \(\mathbf{D}_{2}\in\mathbb{R}^{n\times n\times d_{h2}}\) represents the time-varying graph features with a much smaller dimension \(d_{h2}\). With this configuration, the peak memory is reduced to \(\mathcal{O}(n\times n\times d_{h2})\). Figure 3 illustrates the design of a self-attention layer for time-varying graph features.
## 4 Experiment
### Setting
DataWe utilize two widely-accepted retrosynthesis benchmark datasets, USPTO-50k [16] and USPTO-Full [17; 4], as our benchmark datasets. USPTO-50k comprises 50,016 atom-mapped reactions, categorized into 10 reaction classes. We employ the same data split as in previous work [23], resulting in 40,008, 5,001, and 5,007 reactions for the training, validation, and test sets, respectively. Following previous studies, we evaluate model performance under two conditions: one with known reaction classes and another without.
The USPTO-Full dataset is a more extensive collection containing approximately 1 million atom-mapped reactions. In our study, we employed the filtered USPTO-Full dataset as described by Tetko et al. [4], instead of the original USPTO-Full dataset created by Dai et al. [17]. This filtered version eliminates incorrect reactions, leading to an approximate 4% reduction in the size of the training, validation, and test sets, which now comprise approximately 769,000, 96,000, and 96,000 reactions respectively. Consistent with previous works, we did not benchmark the results with reaction classes in USPTO-Full.
Model TrainingWe established the model using a 6-layer encoder and a 6-layer decoder. The input embedding dimension was set to 768, and the number of attention heads was set to \(24\). We employed the Adam optimizer [30] with \((\beta_{1},\beta_{2})\) = \((0.9,0.999)\), and utilized linear warmup and decay with a peak learning rate of 2.5e-4. For training on the USPTO-50k dataset, we employed a total of 12,000 training steps, a batch size of 16, and the process took approximately 6 hours on a single NVIDIA A100 GPU. For training on the USPTO-Full dataset, we employed a total of 48,000 training steps, a batch size of 64, and the process took approximately 30 hours on eight NVIDIA A100 GPUs.
Model Inference & EvaluationDuring the inference process, we employ the widely-used beam search technique to generate top candidate predictions. Specifically, we set the beam size to 10, with a length penalty of 0 and a temperature of 1. It is important to note that data augmentation is not applied during inference. Additionally, RDChiral [31] is used to address the stereochemistry of reactants based on the stereochemistry of the productions.
To evaluate prediction accuracy, we adopt the method proposed by Liu et al. [32], which considers a prediction accurate only if all reactants in a reaction are correctly predicted. We measure the top-\(k\) accuracy of predictions, defined as the proportion of test cases in which the correct answer appears among the top \(k\) candidates of the beam search results.
### Result
Uspto-50kWe evaluate our proposed method, NAG2G, by comparing it with recent baseline approaches, including template-based, semi-template-based, and template-free methods. The results are summarized in Table 1. Based on these findings, we draw the following conclusions. (1)
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{8}{c}{**Top-\(k\) Accuracy (\%)**} \\ \cline{2-10} & \multicolumn{8}{c}{**USPTO-50k**} \\ \cline{2-10} & \multicolumn{8}{c}{**Reaction Class Known**} & \multicolumn{8}{c}{**Reaction Class Unknown**} \\ \cline{2-10}
**Model** & 1 & 3 & 5 & 10 & 1 & 3 & 5 & 10 \\ \hline \multicolumn{10}{l}{**Template-Based**} \\ \hline RetroSim [19] & 52.9 & 73.8 & 81.2 & 88.1 & 37.3 & 54.7 & 63.3 & 74.1 \\ NeuralSym [20] & 55.3 & 76.0 & 81.4 & 85.1 & 44.4 & 65.3 & 72.4 & 78.9 \\ GLN [17] & 64.2 & 79.1 & 85.2 & 90.0 & 52.5 & 69.0 & 75.6 & 83.7 \\ MHNreact [33] & - & - & - & - & 50.5 & 73.9 & 81.0 & 87.9 \\ RetroComposer [21] & 65.9 & 85.8 & 89.5 & 91.5 & 54.5 & **77.2** & 83.2 & 87.7 \\ \hline \multicolumn{10}{l}{**Semi-Template-Based**} \\ \hline G2G [13] & 61.0 & 81.3 & 86.0 & 88.7 & 48.9 & 67.6 & 72.5 & 75.5 \\ RetroXpert [23] & 62.1 & 75.8 & 78.5 & 80.9 & 50.4 & 61.1 & 62.3 & 63.4 \\ RetroPrime [24] & 64.8 & 81.6 & 85.0 & 86.9 & 51.4 & 70.8 & 74.0 & 76.1 \\ GraphRetro [22] & 63.9 & 81.5 & 85.2 & 88.1 & 53.7 & 68.3 & 72.2 & 75.5 \\ SemiRetro [25] & 65.8 & 85.7 & 89.8 & 92.8 & 54.9 & 75.3 & 80.4 & 84.1 \\ G2Retro [14] & 63.6 & 83.6 & 88.4 & 91.5 & 54.1 & 74.1 & 81.2 & 86.7 \\ MARS [15] & 66.2 & 85.8 & 90.2 & 92.9 & 54.6 & 76.4 & 83.3 & 88.5 \\ \hline \multicolumn{10}{l}{**Template-Free**} \\ \hline LV-Transformer [34] & - & - & - & - & 40.5 & 65.1 & 72.8 & 79.4 \\ SCROP [2] & 59.0 & 74.8 & 78.1 & 81.1 & 43.7 & 60.0 & 65.2 & 68.7 \\ GET [10] & 57.4 & 71.3 & 74.8 & 77.4 & 44.9 & 58.8 & 62.4 & 65.9 \\ Tied Transformer [3] & - & - & - & - & 47.1 & 67.1 & 73.1 & 76.3 \\ MEGAN [12] & 60.7 & 82.0 & 87.5 & 91.6 & 48.1 & 70.7 & 78.4 & 86.1 \\ Aug. Transformer [4] & - & - & - & - & 48.3 & - & 73.4 & 77.4 \\ Aug. Transformer \(*\)[4] & - & - & - & - & 53.5 & 69.4 & 81 & 85.7 \\ GTA [11] & - & - & - & - & 51.1 & 67.6 & 74.8 & 81.6 \\ Graph2SMILES [9] & - & - & - & - & 52.9 & 66.5 & 70.0 & 72.9 \\ RetroDCVAE [5] & - & - & - & - & 53.1 & 68.1 & 71.6 & 74.3 \\ DualTF [35] & 65.7 & 81.9 & 84.7 & 85.9 & 53.6 & 70.7 & 74.6 & 77.0 \\ Retroformer [6] & 64.0 & 82.5 & 86.7 & 90.2 & 53.2 & 71.1 & 76.6 & 82.1 \\ G2GT [18] & - & - & - & - & 48.0 & 57.0 & 64.0 & 64.5 \\ G2GT \(*\)[18] & - & - & - & - & 54.1 & 69.9 & 74.5 & 77.7 \\ NAG2G (ours) & **67.2** & **86.4** & **90.5** & **93.8** & **55.1** & 76.9 & **83.4** & **89.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-\(k\) accuracy for retrosynthesis prediction on USPTO-50k. The best performance is in **bold**, and the best results in each method type are underlined. Models denoted by an asterisk (\(*\)) employed supplementary datasets for training or incorporated techniques to enhance the accuracy during inference. In order to maintain a fair comparison, we also present their results without the implementation of these additional techniques.
Within the template-free category, NAG2G significantly outperforms all previous baselines across all metrics. Although some baselines employ additional data or techniques to enhance the benchmark results (denoted with \(*\)), the proposed method still outperforms them substantially. (2) Despite the additional use of pre-defined rules in template-based and semi-template-based methods, NAG2G still surpasses them, demonstrating a considerable improvement. Notably, this is the first instance of a template-free model outperforming both template-based and semi-template-based methods, as previous template-free baselines have been unable to achieve this goal.
Conclusion
In this paper, we have introduced a novel graph-based SSR template-free model, Node-Aligned Graph-to-Graph (NAG2G), which leverages Transformer encoder-decoder architecture to generate reactant molecule graphs in an auto-regressive manner. By utilizing node alignment strategy, we effectively address the challenge of determining node generation order in molecular graphs. Experimental results on widely recognized datasets, USPTO-50k and USPTO-Full, demonstrate that NAG2G significantly outperforms previous state-of-the-art baseline models. Ablation studies provide insights into the impact of individual components within our methodology, further validating the effectiveness of the proposed NAG2G model.
Our work represents a significant advancement in the application of machine learning techniques for single-step retrosynthesis, with the potential to greatly expedite the retrosynthesis process and contribute to the broader fields of organic chemistry and _de novo_ drug design. Future research may explore additional enhancements to the proposed model, as well as the integration of our approach into multi-step synthesis planning for even more complex and diverse chemical synthesis tasks.
### Limitations
The proposed graph-to-graph generation method is primarily designed for single-step retrosynthesis prediction, as there is a small difference between the input and the output graphs. However, for general graph-to-graph generation problems, particularly those with significant differences between inputs and outputs, the proposed method may not perform optimally.
|
2309.14492 | AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers | To date, endovascular surgeries are performed using the golden standard of
Fluoroscopy, which uses ionising radiation to visualise catheters and
vasculature. Prolonged Fluoroscopic exposure is harmful for the patient and the
clinician, and may lead to severe post-operative sequlae such as the
development of cancer. Meanwhile, the use of interventional Ultrasound has
gained popularity, due to its well-known benefits of small spatial footprint,
fast data acquisition, and higher tissue contrast images. However, ultrasound
images are hard to interpret, and it is difficult to localise vessels,
catheters, and guidewires within them. This work proposes a solution using an
adaptation of a state-of-the-art machine learning transformer architecture to
detect and segment catheters in axial interventional Ultrasound image
sequences. The network architecture was inspired by the Attention in Attention
mechanism, temporal tracking networks, and introduced a novel 3D segmentation
head that performs 3D deconvolution across time. In order to facilitate
training of such deep learning networks, we introduce a new data synthesis
pipeline that used physics-based catheter insertion simulations, along with a
convolutional ray-casting ultrasound simulator to produce synthetic ultrasound
images of endovascular interventions. The proposed method is validated on a
hold-out validation dataset, thus demonstrated robustness to ultrasound noise
and a wide range of scanning angles. It was also tested on data collected from
silicon-based aorta phantoms, thus demonstrated its potential for translation
from sim-to-real. This work represents a significant step towards safer and
more efficient endovascular surgery using interventional ultrasound. | Alex Ranne, Yordanka Velikova, Nassir Navab, Ferdinando Rodriguez y Baena | 2023-09-25T19:34:12Z | http://arxiv.org/abs/2309.14492v1 | # AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers
###### Abstract
To date, endovascular surgeries are performed using the golden standard of Fluoroscopy, which uses ionising radiation to visualise catheters and vasculature. Prolonged Fluoroscopic exposure is harmful for the patient and the clinician, and may lead to severe post-operative sequelae such as the development of cancer. Meanwhile, the use of interventional Ultrasound has gained popularity, due to its well-known benefits of small spatial footprint, fast data acquisition, and higher tissue contrast images. However, ultrasound images are hard to interpret, and it is difficult to localise vessels, catheters, and guidewires within them. This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences. The network architecture was inspired by the Attention in Attention mechanism, temporal tracking networks, and introduced a novel 3D segmentation head that performs 3D deconvolution across time. In order to facilitate training of such deep learning networks, we introduce a new data synthesis pipeline that used physics-based catheter insertion simulations, along with a convolutional ray-casting ultrasound simulator to produce synthetic ultrasound images of endovascular interventions. The proposed method is validated on a hold-out validation dataset, thus demonstrated robustness to ultrasound noise and a wide range of scanning angles. It was also tested on data collected from silicon-based aorta phantoms, thus demonstrated its potential for translation from sim-to-real. This work represents a significant step towards safer and more efficient endovascular surgery using interventional ultrasound.
## I Introduction
Cardiovascular disease is the most common cause of death in the world, accounting for 17.9 million deaths per annum [1]. Traditionally, open surgery is performed to expose the diseased vasculature, which poses significant trauma for the patient. As an alternative, computer-assisted minimally invasive endovascular surgery has been widely adopted due to its benefits of reducing patient recovery time, and lower risk of infection, thus saving costs for healthcare providers, and more importantly saving lives.
In endovascular surgery, catheters and guidewires are steered, under Fluoroscopic guidance, through tortuous vessel trees to reach their desired destination [2]. During navigation, staff and patient are exposed to prolonged periods of ionising radiation, which increases the risk of developing cancer. During an intervention, in order to visualise the vessels, the patient is also injected with a radiopaque dye (Digital Subtractive Angiography, DSA), which is harmful for the kidneys. On the other hand, this system still lacks the ability to obtain feedback on real-time instrument positions relative to the vasculature. This may introduce additional risks for the patient, as there may be frequent and unintentional contacts between these instruments and the vessel wall, with the consequent risks of perforation, dissection, thrombosis and embolization [3].
Alternatively, intraoperative Ultrasound imaging (iUS) offers a non-ionising solution for visualisation. In comparison to Fluoroscopy, US imaging has been a popular tool in diagnosis and aneurysm screening [4], due to the high tissue contrast, temporal resolution, and efficacy [5]. In surgery, clinicians have applied it in conjunction with or completely replacing Fluoroscopy, in endovascular aneurysm repair [6, 7], Balloon Angioplasty [8], and Electrophysiology [9].
In order to monitor the instruments, the surgeon must detect the tip position of the catheter in US images, which poses a significant challenge for them. To begin with, the spatial resolution of an ultrasound image is limited by the number of elements in the transducer, and by the trade-off relationship with the penetration depth [10]. In order to examine deep into the target tissue, the clinician must lower the frequency as waves with higher wavelength experience less attenuation [11]. However, this is at the expense of spatial resolution. Secondly, the noisy nature of ultrasound makes interpretation of images difficult, since images contain clutter, shadowing, and reverberation artifacts. Consequently, labeling and interpreting the image requires expert knowledge in order to relate the physical anatomy with the image, which may vary in quality, resolution, intensity, and acquisition protocols, and are not standardized. The progress in deep learning for
Fig. 1: Main workflow pipeline of proposed system. Stage 1: Data synthesis via physics engine and ray casting. Stage 2: Detection of critical anatomy locations. Stage 3: Semantic segmentation
US instrument detection and segmentation in endovascular procedures offers an opportunity for the field to reform. With the development of object detection networks, researchers have identified their potential in finding objects of varying sizes, and streamlining their workflow. Designing a suitable architecture for this task, and acquiring sufficient data to do so are two challenges that need to be solved.
In this paper, we propose a novel three-step framework to overcome the lack of intraoperative ultrasound data of a catheterisation, required to train our network. In Sect. III, we propose to generate synthetic iUS data with instruments inside by fusing a physics engine into an existing CT-to-US simulator, thus generating mechanically realistic scans. Once generated, this data was then used to train a novel detection and segmentation architecture (AiAReSeg), which we propose in in Sect. III-C. Finally, in Sect. IV, we evaluate the trained model on a hold out validation set of simulated US data, but also with aortic phantom images, which resemble a true surgical environment to a closer degree.
## II Related works
### _State of the art deep learning architectures_
#### Ii-A1 Detection
In terms of architecture, most popular networks fall into two categories: convolution-based (CNN) or attention-based. In CNN-based systems, Faster-R-CNN [12] leveraged the power of its region proposal network (RPN) to select regions of interest, prior to passing such features into fully connected layers for bounding box prediction. The network achieved real-time performance since the need for hand-picked anchor points (found in Fast-R-CNN [13]) was removed. However, following the introduction of attention in the Transformer architecture [14], vision transformers became a strong contender for CNNs as they can learn global dependencies from across the image with a patch-based approach, then concatenating the attention maps together to form the final prediction [15]. Much more recently, researchers have continued to evolve the field by combining the benefits of both worlds, fusing a CNN feature extractor with the attention mechanism. From this idea emerged numerous variants of the transformer, such as the Detection Transformer (DETR) [16], which used a ResNet50 backbone for feature extraction, before feeding its output into a transformer that provided embeddings corresponding to various objects in the scene. Using the bipartite matching loss [17], the network minimised the difference between a prediction output and the ground truth in a class-specific manner.
#### Ii-A2 Semantic Segmentation
In the context of semantic segmentation, the CNN-based UNet [18] architecture and its adaptations have also performed exceptionally well in segmentation tasks. Following the introduction of the nnUNet pipeline [19], which proposed an all-in-one pre-processor, parameter and model selection pipeline, the performance of UNet has been further refined. With that said, nnUNet does not operate in real-time, making it not suitable for high-speed US. On the other hand, attention-based segmentation networks have also seen much success, such as with the DETR to perform panoptic segmentation tasks [16], or in the case of the Segmentation Transformer (SETR) [20], which removed the ResNet50 backbone, but used a Sigmoid activation function to generate segmentation masks.
#### Ii-A3 AiTrack - Learning with temporal features
Thus far, aforementioned models only use spatial features learned in a single frame to make the prediction. This may be sufficient for good-quality images, but may fail when there is occlusion due to shadows or artifacts. To solve this problem, we drew inspiration from clinicians, who rely on prior knowledge from the previous position of the aorta to reposition the probe and relocate the lost targets. This concept was previously captured in the AiATrack framework [21], where a ResNet50-Transformer framework was used together with a corner-predictor based bounding box head. However, the final box prediction still only draws information from the transformer decoder outputs, instead of across the entire sequence of data. We believe we can further improve this invention to operate on even more challenging tasks, such as locating a small catheter's cross-section in sequences of highly variable US images.
### _State of the art in US image simulation_
Despite the impressive results achieved by deep learning, the majority of network architectures are supervised, learning from an extensive number of labeled ground truths, which for Ultrasound is not readily available due to the difficulties faced in acquisition. However, there are publicly available large sets of pixel-level labeled CT volumes, which can be translated into simulated US image/label pairs [22] where further data augmentation can be added via applying rotation, brightness jitter, random shadowing, and artificial tissue deformations, etc. In this way, a large training dataset can be generated for the pretraining of deep learning architectures, allowing the networks to learn domain-specific feature extraction, before retraining on a significantly smaller, real dataset. Evidence of successful transfer shown previously in Velikova et al.'s work [23, 24] motivates this idea.
## III Methodology
### _CT Data selection_
In this work, labeled CT volumes of 8 men and women were acquired from the publicly available dataset Synapse 1. The labels of bone, fat, skin and lungs were added in the label map to allow for the simulation to function. The detailed process of generating the interventional US is detailed below.
Footnote 1: [https://www.synapse.org/#](https://www.synapse.org/#)!Synapse.syn3193805/wiki/89480
### _Physics-based catheterisation simulation_
Since obtaining a large dataset for the initial training of a deep learning architecture is time consuming, we are proposing a new US data simulation pipeline for generating interventional data, which is otherwise only attainable from the operating room environment or via a phantom.
In literature, there are two ways to generate US data: finite difference solutions of the wave equation [25, 26], or ray-casting through an image volume, semantically labelled with its acoustic attenuation properties [24]. Since
the former method requires solving a large system for each frame, it typically requires significantly longer computation time. Thus, the second method, albeit not as accurate, was selected. The simulator selected uses a hybrid ray-tracing convolutional method to define an anatomical representation that mimics the texture of real US images, define anisotropic properties, generate artifacts, and provide tissue contrast that allows regions of interest to be easily discernable.
In order to generate a dataset consisting of catheters, we repurposed an open source catheterisation simulator developed by Jianu et al. [27], which is able to recreate high fidelity catheter-aorta mechanical interaction simulations. CathSim is built in the MuJoCo physics engine (DeepMind, London, UK) [28, 27], which is a powerful package that can perform real time multi-joint dynamics computations with contacts using a C based API.
The final preprocessing pipeline is detailed in Fig. 2. CathSim renders the mesh models of the aorta and the catheter separately, while the aorta mesh was divided into 1024 convex hulls, decomposed using the V-HACD algorithm. The decomposed hulls were transformed into the same coordinate frames as the simulated environment via Blender v3.2.1 (Blender Foundation, Amsterdam, Netherlands) and imported into CathSim. The insertion simulation was performed with a linear translation speed of 0.1m/s, and inserted for 1,000 time increments, where each increment represents 1/60th of a second, and positions of the catheter were sampled at regular intervals along its body, and exported into a time series csv, which was transformed back into the CT's coordinate system. Finally, the simulator was initialised with multiple splines on the surface of the patient's torso, and the splines were tilted at angles of 0, \(\pm\) 30 and \(\pm\) 60 degrees, and a sweep of 1,000 images were generated for each angle.
### _AiAReSeg Architecture_
Attention in Attention + ResNet for Segmentation (AiAReSeg) is a novel segmentation architecture that is adapted from AiTrack [21], shown in Fig.4. The main architecture consists of three components, the attention-in-attention module, the transformer architecture, and the outer upconvolution-deconvolution layers.
Attention-in-attention (AiA) was first proposed by Gao et al.'s work [21], where the authors observed that each query-key pair generated an independent attention map, which ignored the features of other maps. The original attention mechanism used the following dot-product equation:
\[Attn(\mathbf{Q},\mathbf{K},\mathbf{V})=(\text{Softmax}\left(\frac{\bar{ \mathbf{Q}}\bar{\mathbf{K}}^{T}}{\sqrt{C}}\right)\bar{\mathbf{V}})\mathbf{W_{ o}} \tag{1}\]
Where \(\bar{Q}=QW_{q}\), \(\bar{K}=KW_{k}\), \(\bar{V}=VW_{v}\), where \(Q,K,V\) are the queries, keys and values, respectively, while \(W_{q},W_{k},W_{v}\) denotes the learnable weight arrays for the query, keys and values, and \(C\) is the channel size. In the case of a noisy dataset with distracting backgrounds, the model may become confused from clutter in the scene, leading to poor predictions. However, it was noticed that the attention weights near regions of interest were significantly higher, and pixels in such regions were of more interest than pixels with high attention weights further away. Thus, the designed AiA module applied attention once again on the attention map \(M\) to filter out distant weights, which can be represented as:
\[InnerAttn(\mathbf{M})=(\text{Softmax}\left(\frac{\bar{\mathbf{Q}}^{T}\bar{ \mathbf{K}}^{T}}{\sqrt{D}}\right)\bar{\mathbf{V}}^{\prime})(\mathbf{1}+ \mathbf{W^{\prime}_{o}}) \tag{2}\]
Fig. 3: Closeup of the 3D deconvolution pipeline. The three input feature maps are color coded in green, blue, and green for the initial, search and intermediate frames, respectively. The three frames are stacked with the output from the previous deconvolutional block (amber), then deconvolution is performed with a 3D 2x2x2 kernel.
Fig. 2: The pipeline for generating synthetic ultrasound images from open source CT volumes. The aorta model is initially extracted, imported into the catheterisation engine, with the catheter positions exported as a time series, then redrawn on the CT images, before passed into a ray-tracing based simulator to generate the finished image
Where \(\bar{Q}^{\prime},\bar{K}^{\prime},\bar{V}^{\prime}\) are intermediate weighted queries, keys and values, which were feature vectors taken from columns in \(M\), while \(D\) represents the intermediate channel size, defined in this case as the height of the attention map.
When combined, the AiA module computes the following:
\[AiA(\mathbf{Q},\mathbf{K},\mathbf{V})=(\text{Softmax}\,(\mathbf{M}+\text{ InnerAttn}(\mathbf{M})))\,\bar{\mathbf{V}})\mathbf{W_{o}} \tag{3}\]
AiATrack consists of three input branches, the initial frame, the current search frame, and selected intermediate frames. All frames pass through the ResNet50 feature extractor, before performing self-attention in the transformer encoder, which searches for correlation within the same feature array (\(Q=K=V\)). Thereafter, the feature maps from each branch were combined during the long-term (LT), and short-term (ST) cross-attention modules, thus enable learning across time (\(Q\neq K\neq V\)). During inferencing, the network incorporated an additional checking algorithm that stores high-performing prior examples (classified by the Dice metric) in its memory, and then call upon them when predicting the current frame by concatenating it to the value. Note that the ResNet50 and the transformer encoders on each branch share their weights.
Similiar to AiATrack, AiAReSeg (in Fig.4) consists of three branches, where each branch processes the image through ResNet50. At each feature level of the convolution, the feature channels were connected to the output deconvolution layers via skip connections. The ResNet50 feature maps were processed with intermediate convolutional layers to gradually reduce its channel size, such that it matches the deconvolution layer's intermediate outputs (similiar to UNet++ [29]). Then, as shown in Fig. 3, the feature maps were stacked along a new dimension, generating (H,W,T) sized images, where T represents the time, before it was passed into a 3D convolutional layer to reduce T to 1, prior to further stacking from the next skip connection branch.
In order to adapt AiAReSeg to both aorta and catheter segmentation, we had combined the following loss functions: Dice coefficient (DSC), Binary Cross Entropy (BCE), and Mean Squared Error (MSE), and weighted their importance with factors of 5, 2 and 2, respectively. The Dice loss measures the similarity between predicted and ground truth masks, and encourages models to produce more accurate segmentation results, and handles class imbalance. We also used the BCE loss to assign higher probabilities to the correct class and lower probabilities to the incorrect class, and the MSE to minimise the pixel-to-pixel distance between the ground truth and the prediction.
## IV Experimental evaluation
We divided the evaluation of this pipeline into two phases: evaluation on hold out simulated image set, and on unseen phantom data. This test examines the capability of the network in generalising to unseen datasets, with the latter being a closer representation of the real patient anatomy.
To evaluate the performance of our system, we selected a handful of common and top-performing detection and segmentation models in literature. Most notably, this includes the Faster-R-CNN and DETR for detection, compared against the performance of AiATrack, while for segmentation we selected the standard UNet, and a clustering based approach, which is explained in Sect. IV-C.
The models were trained and evaluated on both detection and segmentation of the aorta and catheter, evaluated
Fig. 4: The AiAReSeg architecture. Details of each module and its channel number is shown in the figure.
separately. The reason for this choice is that analysis of a larger feature such as the aorta is easier to perform, as it is unique in the input image (with only one duplicate of the cross section if scanned near the aortic arch). Catheters, on the other hand, are significantly more challenging to detect due to the noisy nature of the background, since their shape and intensity range can easily be confused with artifacts or other features, thus affecting performance. In addition, their small size also created significant class imbalance between the feature and the background, making some of the loss metrics highly volatile (such as the Dice loss).
We evaluated the tracking models using the average precision (AP) metric, defined as the area under the precision-recall curves, evaluated at different intersections over union (IOU) thresholds between 50 and 95\(\%\), including their average to form the mean AP score. On the segmentation side, we used the Dice metric (DSC), which indicates the degree of overlap, and the mean absolute error (MAE), which represents the distance from each pixel to its ground truth.
### _Training details_
Experiments were conducted on a workstation with NVIDIA GeForce RTX3080, 32GB RAM, Intel core i7 (10700K). The physics-based simulations were performed on MuJoCo 2.10, where mesh models were decomposed into convex hulls using V-HACD [30]. The US simulations were generated on the ImFusion Suite (ImFusion GmbH, Munich, Germany), where the ray-casting algorithm [24] was implemented. 8 torso CT images of men and women were selected from the Synapse dataset, and used for catheterisation. Catheterisation simulation was performed for a total duration of 60 seconds, where a data recording of the catheter positions was performed at 4mm intervals along its body to provide a reasonable spatial resolution for reconstruction. During US simulation, the transducer was programmed to follow a predefined spline, and performed a ray-casting simulation at 0.1mm increments along the line. To increase data variability, this spline was also rotated by \(\pm 30\) degrees, and re-projected onto the volume surface, creating different viewing angles of the anatomy. Finally, images were divided by sequences into folders, where image/mask pairs that do not contain a catheter were filtered out.
### _Phantom data collection details_
A small set of 2D testing images were collected manually in a free-hand manner from a ZONAE Z.One Ultra-Ultrasound machine, using a C8-3 (3D) transducer at a scanning depth of 10cm. Axial view scanning was performed on an Elastrat silicon-based aortic arch phantom, immersed in lukewarm saline solution. To mimic a catheterisation procedure, we inserted a Merit Medical 5F vertical catheter at the distal end of the phantom, then followed the tip of the catheter with the US probe. We collected 5 US sequences with varying lengths, ranging between 400 - 700 frames.
### _Model specific details_
**AiATrack:** A patch of size \(5^{2}\) was cropped from the frame, and resized into a common dimension of 320 x 320 pixels. A ResNet-50 backbone was used [31] for feature extraction, downsampling the input until a size of 20x20 was achieved. Each feature map was flattened and passed into the transformer. A 4-head attention module was used, with the inner AiA module reducing the channel dimension of queries and keys to 64. The final prediction head used 3 Conv-BN-ReLU layers, a PrPooling layer and 2 fully connected layers. The model was pretrained for 300 epochs on the LaSOT dataset [32], then for an additional 200 epochs on the synthetic US dataset, both at a learning rate of \(10^{-4}\).
**DETR:** A standard DETR, with weights pretrained for 500 epochs on the COCO2017 dataset was used in this application [33]. The COCO dataset consists of more than 200,000 images consisting of over 80 categories of objects, thus equipping the model with the necessary feature extraction filters. The model is retrained on ultrasound data for 100 epochs. The learning rate in both cases is \(10^{-5}\).
**Faster-R-CNN:** An implementation of Faster-R-CNN from the Detectron2 library was used [34]. A ResNet50 backbone was used, together with a feature pyramid network. A COCO2017 pre-trained model was retrained on our own dataset for 100 epochs, with a learning rate of \(10^{-5}\).
**AiAReSeg:** Our proposed AiAReSeg framework offers training in an end-to-end manner. However, in order to accelerate the process of training, model weights prior to the final segmentation head were initialized with weights from an AiATrack model, pretrained with 300 epochs on LaSOT at a learning rate of \(10^{-5}\).
**UNet:** A standard UNet from Ronneberger et al.'s work [18] was implemented with MONAI [35] and trained for 100 epochs with a learning rate of \(10^{-3}\).
**Clustering Based evaluation:** We also designed a partially unsupervised workflow to extract the catheter given a valid aorta segmentation. In this case, an US image was first filtered with the proposed aorta segmentation mask from AiAReSeg, extracting the aorta and its embedded catheter. This patch was then thresholded at a 70\(\%\) intensity level, before a K-means clustering was performed (K=2). The final cluster selection was done based on the root mean square variance of each cluster (computed via Eq.4, where the cluster with the smallest RMS variance was selected).
\[VAR_{rms}=\sqrt{(var_{x})^{2}+(var_{y})^{2}}\]
## V Results
Results from tracking experiments are shown in Tab. I, which presents the findings for simulations, whereas results from benchtop phantom trials are presented in Tab.II. In all cases, AiATrack demonstrated the highest level of mean AP, with a score of 94.77 for aorta and 22.86 for catheter tracking. While AiATrack surpassed all other models across all AP IOU thresholds for aorta tracking, it fell short of the DETR's AP50 of 77.10 (vs 70.99). Nevertheless, it still outperformed the DETR on average. When applied to phantom trials, the DETR and Faster-R-CNN struggled to generalise its performance to these images, with DETR yielding an especially poor mAP performance of 1.4. The same observation was made with catheter detection, where the DETR did not yield any metric for AP, while the Faster-R-CNN's performance was also poor. The AiATrack model's performance far exceeded both cases, at 45.7 and 14.3 for aorta and catheter detection respectively.
Similarly for segmentation, Tab. III presents testing of the model on simulation, and Tab. IV is for phantoms trials. We found that AiAReSeg's performance surpassed UNet in both aorta and catheter segmentation, in simulated (aorta: 91.92 vs 88.95, catheter: 83.10 vs 80.06) and phantom trials (aorta: 34.11 vs 32.30, catheter: 62.51 vs 20.39), indicating that the model was able to generalize to some degree from simulation to reality, without needing to retrain.
## VI Discussions
From these results, we first observe that AiA-based systems yielded the highest level of performance across nearly all detection and segmentation tasks. With simulations, where the texture of generated images were similar to the training data, the detection model performed better on average and at the 50\(\%\) and 75\(\%\) thresholds. This indicated that within the same image domain, the model surpassed a selection of existing frameworks. This finding is within our expectations, as the model draws upon temporal information from across the sequences, effectively supplying the knowledge about how this feature is changing over time.
Furthermore, in neighbouring but different image domains (such as the phantom image domain), although the performance was severely impacted due to lack of retraining, AiATrack still surpassed its competitors, especially in the case of aorta detection, yielding an AP of 100 at 50\(\%\) and 82.62 at \(75\%\) thresholds. For the more challenging catheter detection task, AiATrack's performance was still higher, in the case that the DETR and the Faster-R-CNN completely failed to generalise. These results indicated the robustness of the AiA framework in adapting to new domains. In the case that the model is provided with a small subset of images from this new domain, it is reasonable to assume that AiATrack will start training with more adapted weights (transferred from previous training examples) to this domain, and require less data to achieve similar levels of performance as Tab. I.
With segmentation, AiAReSeg used temporal features at the attention and reconstruction level as prior knowledge at different spatial scales to aid it in mask generation. As a result, the AiAReSeg architecture surpassed its UNet competitor in both aorta and catheter segmentation tasks in simulation, and in phantom trials. We recognise that due to the challenging nature of catheter semantic segmentation, where the mask label for each frame typically only consists of 20-100 pixels, the Dice metric is rather harsh in penalising the model, even where the absolute error between the model output and the ground truth is very low. Thus, when we examine the MAE metric, it was also noted that AiAReSeg was significantly better at minimising its distance with the ground truth in the phantom case (0.00018 for AiAReSeg vs 0.00068 for UNet). Considering that catheter localisation in a clinical environment demands high accuracy, we believe that these results demonstrate the potential for our system to perform well when it is sufficiently retrained.
Finally, the poor performance for phantom aorta segmentation from both models was also investigated, and the main reason found was the significant difference in appearance of the tubular structure in simulation vs in phantom. While our chosen phantom mimics the mechanical properties of an aorta, and its aesthetic appearance, the acoustic behaviour of silicone is very different from reality. As a result, a phantom axial image has high intensity on the top surface of the aorta (indicating high reflectivity), while the lower surface is shadowed, creating a discontinuous tubular shape. This shape was not observed by the model during training using simulated data, hence confusing the networks.
## VII Conclusions
In this paper, we presented a solution to the data shortage problem in the field of interventional ultrasound, by presenting a bespoke data synthesis pipeline. Through experimentation, we have demonstrated that the dataset step towards bridging the gap between simulation and reality. Deep learning models trained with this dataset were able to exhibit satisfactory preliminary results on silicon phantoms without needing to retrain. These results pave the way for future works which verifies such models on real patient anatomy. We also present our innovation, the AiAReSeg architecture, which combines temporal information both when attention is applied, and during reconstruction in the 3D deconvolution layers. Injection of temporal information enhanced the model to become a competitive option for catheter segmentation tasks among its rivals. |
2307.16729 | The Chemical Inventory of the Inner Regions of Planet-forming Disks --
The JWST/MINDS Program | The understanding of planet formation has changed recently, embracing the new
idea of pebble accretion. This means that the influx of pebbles from the outer
regions of planet-forming disks to their inner zones could determine the
composition of planets and their atmospheres. The solid and molecular
components delivered to the planet-forming region can be best characterized by
mid-infrared spectroscopy. With Spitzer low-resolution (R=100, 600)
spectroscopy, this approach was limited to the detection of abundant molecules
such as H2O, C2H2, HCN and CO2. This contribution will present first results of
the MINDS (MIRI mid-IR Disk Survey, PI: Th. Henning) project. Due do the
sensitivity and spectral resolution (R~1500-3500) provided by JWST we now have
a unique tool to obtain the full inventory of chemistry in the inner disks of
solar-types stars and brown dwarfs, including also less abundant hydrocarbons
and isotopologues. The Integral Field Unit (IFU) capabilities enable at the
same time spatial studies of the continuum and line emission in extended
sources such as debris disks, the flying saucer and also the search for mid-IR
signatures of forming planets in systems such as PDS70. These JWST observations
are complementary to ALMA and NOEMA observations of the outer disk chemistry;
together these datasets provide an integral view of the processes occurring
during the planet formation phase. | Inga Kamp, Thomas Henning, Aditya M. Arabhavi, Giulio Bettoni, Valentin Christiaens, Danny Gasman, Sierra L. Grant, Maria Morales-Calderón, Benoît Tabone, Alain Abergel, Olivier Absil, Ioannis Argyriou, David Barrado, Anthony Boccaletti, Jeroen Bouwman, Alessio Caratti o Garatti, Ewine F. van Dishoeck, Vincent Geers, Adrian M. Glauser, Manuel Güdel, Rodrigo Guadarrama, Hyerin Jang, Jayatee Kanwar, Pierre-Olivier Lagage, Fred Lahuis, Michael Mueller, Cyrine Nehmé, Göran Olofsson, Eric Pantin, Nicole Pawellek, Giulia Perotti, Tom P. Ray, Donna Rodgers-Lee, Matthias Samland, Silvia Scheithauer, Jürgen Schreiber, Kamber Schwarz, Milou Temmink, Bart Vandenbussche, Marissa Vlasblom, Christoffel Waelkens, L. B. F. M. Waters, Gillian Wright | 2023-07-31T14:51:52Z | http://arxiv.org/abs/2307.16729v1 | # The Chemical Inventory of the Inner Regions of Planet-forming Disks - The JWST/MINOS Program1
###### Abstract
The understanding of planet formation has changed recently, embracing the new idea of pebble accretion. This means that the influx of pebbles from the outer regions of planet-forming disks to their inner zones could determine the composition of planets and their atmospheres. The solid and molecular components delivered to the planet-forming region can be best characterized by mid-infrared spectroscopy. With Spitzer low-resolution (\(R\) = 100, 600) spectroscopy, this approach was limited to the detection of abundant molecules such as H\({}_{2}\)O, C\({}_{2}\)H\({}_{2}\), HCN and CO\({}_{2}\). This contribution will present first results of the MINDS (MIRI mid-IR Disk Survey, PI: Th. Henning) project. Due do the sensitivity and spectral resolution (\(R\) \(\sim\) 1500 - 3500) provided by JWST we now have a unique tool to obtain the full inventory of chemistry in the inner disks of solar-types stars and brown dwarfs, including also less abundant hydrocarbons and isotopologues. The Integral Field Unit (IFU) capabilities enable at the same time spatial studies of the continuum and line emission in extended sources such as debris disks, the flying saucer and also the search for mid-IR signatures of forming planets in systems such as PDS 70. These JWST observations are complementary to ALMA and NOEMA observations of the outer disk chemistry; together these datasets provide an integral view of the processes occurring during the planet formation phase.
## 1 Introduction
Much of the exoplanet population studied so far resides inside 10 au from their host star. The composition of these planets should carry traces of the chemical composition of the inner regions of planet-forming disks. This can manifest itself in the bulk C/O ratio of gas giant planet atmospheres [1, 2, 3, 4], but also affect the bulk interior composition of terrestrial planets (e.g. the sulphur content) [5] and the delivery of water to them [6, 7].
Planet-forming disks are expected to be layered in their chemical content, with the surface layers being ionized/atomic and deeper layers being molecular [8, 9]. More recently, thermochemical models also showed that the spatial distribution of molecules comes in layers, with OH being closest to the surface and CO, H\({}_{2}\)O, CO\({}_{2}\), HCN and C\({}_{2}\)H\({}_{2}\) residing ever closer to the midplane [10] (Fig. 1). In addition, theory and observations have shown that dust grains can radially migrate in the disk if they reach sizes that allow them to dynamically de-couple from the gas [11, 12]. As a consequence, volatile ices carried along and sublimating at the respective iceline locations could alter the elemental composition of the gas in the disk [13]. The inner disk regions are generally highly optically thick (unless some processes have removed material and carved gaps/holes [14]), preventing us from probing down to the midplane. However turbulent mixing timescales are less than 10000 yr [15], thus ensuring that any change of element abundance due to radial transport in the mid |
2309.10192 | Geometric Ramsey Interferometry with a Tripod Scheme | Ramsey interferometry is a key technique for precision spectroscopy and to
probe the coherence of quantum systems. Typically, an interferometer is
constructed using two quantum states and involves a time-dependent interaction
with two short resonant electromagnetic pulses. Here, we explore a different
type of Ramsey interferometer where we perform quantum state manipulations by
geometrical means, eliminating the temporal dependence of the interaction. We
use a resonant tripod scheme in ultracold strontium atoms where the
interferometric operation is restricted to a two-dimensional dark-state
subspace in the dressed-state picture. The observed interferometric phase
accumulation is due to an effective geometric scalar term in the dark-state
subspace, which remarkably does not vanish during the free evolution time when
the light-matter interaction is turned off. This study opens the door for more
robust interferometers operating on multiple input-output ports. | Chetan Sriram Madasu, Ketan Damji Rathod, Chang Chi Kwong, David Wilkowski | 2023-09-18T22:53:15Z | http://arxiv.org/abs/2309.10192v2 | # Geometric Ramsey Interferometry with a Tripod Scheme
###### Abstract
Ramsey interferometry is a key technique for precision spectroscopy and to probe the coherence of quantum systems. Typically, an interferometer is constructed using two quantum states and involves a time-dependent interaction with two short resonant electromagnetic pulses. Here, we explore a different type of Ramsey interferometer where we perform quantum state manipulations by geometrical means, eliminating the temporal dependence of the interaction. We use a resonant tripod scheme in ultracold strontium atoms where the interferometric operation is restricted to a two-dimensional dark-state subspace in the dressed-state picture. The observed interferometric phase accumulation is due to an effective geometric scalar term in the dark-state subspace, which remarkably does not vanish during the free evolution time when the light-matter interaction is turned off. This study opens the door for more robust interferometers operating on multiple input-output ports.
Ramsey interferometry employs temporally separated electromagnetic pulses to probe the energy difference and the coherence between two quantum states [1; 2]. Ramsey interferometers, whether employing internal, external or both states of atoms, have become essential tools in probing quantum states in quantum simulations [3; 4], quantum computing [5], interband spectroscopy [6] and in atomic clocks at or below the quantum projection noise limit [7; 8] to name a few.
In contrast to the majority of Ramsey interferometers that rely on the dynamical evolution of the system mediated through light-matter interaction, we explore here a geometric Ramsey interferometer governed by adiabatic evolution in the degenerate dark-state subspace of a tripod scheme. We found that the phase accumulation during the free evolution time arises from a geometric scalar potential. Surprisingly, this potential retains its physical significance even when the pulses are turned off as long as the dressed states of interest remain adiabatically connected to the bare states. Geometric scalar potentials are at the origin of the so-called dark optical lattices [9; 10], and have been employed to create sub-wavelength barriers in an effective spin [11] or spinless [12] configuration. Though the geometric scalar potential plays a role in shaping periodic potential, they are essentially overlooked in the bulk because of their moderate strength in comparison to commonly-used optical potentials [13; 14].
The interferometer operates on an ultracold gas of \({}^{87}\)Sr atoms. The gas is prepared using a two-stage magneto-optical trap [15; 16], followed by evaporative cooling in a crossed-beams optical-dipole trap [17]. We then obtain a quantum degenerate Fermi gas comprised of \(N=4.5(2)\times 10^{4}\) atoms in the \(m_{F}=9/2\) stretched Zeeman substate at a temperature of \(T_{0}=50(3)\,\mathrm{nK}\). This temperature corresponds to \(T_{0}/T_{F}=0.25(2)\) where \(T_{F}\) is the Fermi temperature. Additionally, \(T_{0}/T_{R}=0.21(2)\), where \(T_{R}\) is the recoil temperature associated with the tripod transitions. After evaporative cooling, the optical trap is switched off, and a magnetic field bias is turned on to isolate a tripod scheme on the \({}^{1}S_{0},F_{g}=9/2\to{}^{3}P_{1},F_{e}=9/2\) hyperfine multiplet of the intercombination line at \(689\,\mathrm{nm}\)[18]. Three laser beams resonantly couple the three internal ground states \(|a\rangle\equiv|F_{g},m_{F}\rangle\), with \(a=\{1,2,3\}\) and \(m_{F}=\{5/2,7/2,9/2\}\), respectively, to a common excited state \(|e\rangle\equiv|F_{e},m_{F}=7/2\rangle\), as shown in Figs. 1a&b. The light-matter interaction is characterized by three complex Rabi frequencies \(\Omega_{a}\), associated with the \(|a\rangle\rightarrow|e\rangle\) transitions.
Our geometric Ramsey interferometric sequence consists of a \(\pi/2\)-pulse and a \(-\pi/2\)-pulse, temporally separated by a free evolution time \(T\) as sketched in Fig. 1c. The first \(\pi/2\)-pulse, composed of three Gaussian pulses, puts the atoms initially in the \(|3\rangle\) state into a coherent superposition of \(|3\rangle\) and \(|1\rangle\) states, ideally with equal probabilities. The relative population of the output states does not depend on the pulse duration, due to its geometrical nature. It is rather controlled by the relative peak Rabi frequency amplitude \(|\Omega_{03}|\) of beam 3 with respect to the peak Rabi frequency amplitudes of beams 1 and 2, which are set equal, namely \(|\Omega_{01}|=|\Omega_{02}|\)[19]. The second pulse, closing the interferometer, is a \(-\pi/2\)-pulse, meaning that, without any further phase accumulation, the second pulse shall bring back the atom into its initial state, namely \(|3\rangle\). Here, an extra phase accumulation between the two arms occurs reducing the population of \(|3\rangle\) at the interferometer output, as shown in Fig. 2 (red squares). Importantly, we note that the remaining population, instead of going to the state \(|1\rangle\) (blue circles) as expected for a standard two-level Ramsey interferometer, is now transferred to state \(|2\rangle\) (green triangles). This un
usual behavior originates from the order of the Gaussian pulses acting on the tripod scheme. As shown in Fig. 1c, the second pulse sequence is a temporal mirror image of the first one, so the beam 1 pulse, which is finishing the sequence, prevents population in state \(|1\rangle\) as expected for any STIRAP scheme [20].
To confirm the phase-sensitive nature of the experiment, we purposely apply a phase jump \(\Phi\) to the beam 3 at the center of the free evolution sequence when the laser is considered to be turned off. As expected, the interferometric readout from the atomic bare state populations after the second pulse shows a sinusoidal evolution as a function of the introduced phase jump, with a half period of \(\pi\) (see Fig. 3a).
During the free evolution time \(T\), a phase accumulation occurs which has a simple physical origin in the bare-state picture. The coherent transfer between the state \(|3\rangle\) and the state \(|1\rangle\) redistributes a photon between the beams 3 and 1 which leads to a momentum kick of \(2p_{r}\hat{y}\) on the atom, as shown in Fig. 2b. \(p_{r}=\hbar k\) is the momentum recoil associated to a \(\lambda=689\,\mathrm{nm}\) photon, \(\hbar\) is the reduced Planck constant, and \(k=2\pi/\lambda\) is the wave number of the light field. Therefore, the phase accumulation corresponds to \(\Delta E_{k}T/\hbar\), where \(m\) is the atomic mass, and \(\Delta E_{k}=2p_{r}^{2}/m\), the kinetic energy difference between the two bare states. We fit the Ramsey interferometer output evolution as a function of \(T\) with a damped oscillation and found a frequency of \(2\pi\times 19.8(16)\,\mathrm{kHz}\) (see dashed black curves in Fig. 3b), in agreement with the theoretical prediction of \(\Delta E_{k}/\hbar=2\pi\times 19.2\,\mathrm{kHz}\).
The damping time, extracted from the fit, is found to be \(\tau_{\mathrm{fit}}=23(4)\,\mathrm{\mu s}\). The damping of the fringe contrast is due to the finite temperature of the gas reducing the wavepacket coherence length to the thermal de Broglie wavelength \(\lambda_{th}=h/\sqrt{2\pi mk_{B}T_{0}}\), where \(h\) is the Planck constant and \(k_{B}\) is the Boltzmann constant. As the two bare-state wavepackets have different momenta, complete loss of interferometric signal occurs when the wavepacket separation equals the thermal de Broglie wavelength, namely \(\tau_{\mathrm{typ}}=\sqrt{m\pi/(2k^{2}k_{B}T_{0})}\approx 63\,\mathrm{\mu s}\), in qualitative agreement with the fitted value \(\tau_{\mathrm{fit}}\).
A rigorous theoretical treatment of the geometric Ramsey interferometer can be done with a brute-force diagonalization of the time-dependent Hamiltonian of the sys
Figure 2: (a-c) Fluorescence images of the ultracold gas after \(9\,\mathrm{ms}\) of time of flight. The images are taken before the first pulse (\(t=0\)), during the free evolution time (\(t=18\,\mathrm{\mu s}\)), and after the second pulse (\(t=48\,\mathrm{\mu s}\)), respectively. Each peak in the momentum distribution is associated with a bare state as indicated in each panel. We extract the bare state populations by fitting each peak to a 2D-Gaussian distribution. (d) Populations of the bare states during the interferometric sequence, with \(\sigma_{t}=2.5\,\mathrm{\mu s}\) and \(T=6\,\mathrm{\mu s}\). The experimental data points are plotted with markers with the error bars representing one standard deviation confidence. The plain and dashed curves represent the numerical integration for temperatures of 0 and 50 nK, respectively. The dotted curve represents the zero temperature theoretical expectations, without the scalar term \(\hat{Q}\).
Figure 1: Schematic showing the implementation of Ramsey interferometry pulse sequence. (a) Energy levels of \({}^{87}\)Sr atoms involved in the tripod scheme. A bias magnetic field of \(67\,\mathrm{G}\) shifts adjacent excited magnetic states by \(\sim 930\Gamma\), where \(\Gamma/2\pi=7.5\,\mathrm{kHz}\) is the linewidth of the intercombination line. (b) The spatial configuration of the tripod beams. (c) Relative Rabi frequencies of the tripod beams as a function of time. The Gaussian pulses are parameterized as \(\Omega_{a}(t)=|\Omega_{0a}|e^{-(t-t_{a}^{(j)})/4\sigma_{t}^{2}}\) where \(|\Omega_{0a}|\) is the peak Rabi frequency with \(a=1,2,3\), \(\sqrt{2}\sigma_{t}\) is the temporal standard deviation and \(t_{a}^{(j)}\) are the centers of the Gaussian pulses for the \(\pi/2\)-pulse (\(j=1\)) and \(-\pi/2\)-pulse (\(j=2\)). The pulse sequence corresponds to \(t_{1}^{(j)}=t_{3}^{(j)}-\eta\sigma_{t}\), \(t_{2}^{(j)}=t_{3}^{(j)}+\eta\sigma_{t}\), \(t_{3}^{(1)}=4\sigma_{t}\) and \(t_{2}^{(2)}=t_{3}^{(1)}+8\sigma_{t}+T\), with \(|\Omega_{01}|=|\Omega_{02}|=2|\Omega_{03}|\approx 2\pi\times 260\,\mathrm{kHz}\) and \(\eta\) is the separation parameter with a value of 1.8. Here, the length of \(\pi/2\)-pulses is defined as duration of the \(\sigma^{-}\) Gaussian _i.e., \(8\sigma_{t}\)_. Therefore, the delay between two \(\pi/2\)-pulses T, is defined as the free evolution time.
tem. However, physical interpretation together with significant simplifications are possible by changing the original bare-state basis to the dressed-state basis of the internal Hamiltonian, defined by two long-lived zero-energy dark states
\[|D_{1}(\mathbf{r},t)\rangle = \sin\varphi(t)e^{2iky}|1\rangle-\cos\varphi(t)e^{ik(y-x)}|2\rangle\] \[|D_{2}(\mathbf{r},t)\rangle = \cos\vartheta(t)(\cos\varphi(t)e^{2iky}|1\rangle+\sin\varphi(t)e ^{ik(y-x)}|2\rangle) \tag{1}\] \[- \sin\vartheta(t)|3\rangle,\]
and two bright states that contain the bare excited state, so subject to a fast decay by photon spontaneous emission. Moreover, the bright states are light shifted by \(\pm\hbar\Omega\), where \(\vartheta=\cos^{-1}\left(|\Omega_{3}|/\Omega\rangle\right)\), \(\varphi=\tan^{-1}\left(|\Omega_{2}|/|\Omega_{1}|\right)\), and \(\Omega=\sqrt{|\Omega_{1}|^{2}+|\Omega_{2}|^{2}+|\Omega_{3}|^{2}}\)[13].
A first simplification occurs because the internal state evolution can be restricted to the dark-state subspace. Indeed, the bright state light shift corresponds to the highest energy scale of the problem (\(\Omega\simeq 2\pi\times 410\,\mathrm{kHz}\)), and the initial bare state \(|3\rangle\) is adiabatically connected to \(|D_{2}\rangle\)[21]. Overall, the populations of the bright states remain negligible during the Ramsey sequence. This point is experimentally checked noticing that there is no significant heating of the gas after the Ramsey sequence. Limiting ourselves now to the dark-state subspace, the effective Hamiltonian reads [21; 22].
\[\hat{H}=\frac{\hat{\mathbf{p}}^{2}\otimes\mathds{1}}{2m}-\frac{\hat{\mathbf{ A}}\cdot\hat{\mathbf{p}}}{m}+\hat{Q}+\hat{w}, \tag{2}\]
where \(\mathds{1}\) is a two-dimensional identity operator defined in the dark-state subspace, and the operators \(\hat{\mathbf{A}}\), \(\hat{Q}\), and \(\hat{w}\) have the respective matrix entries
\[\mathbf{A}_{\mu\nu} = i\hbar\langle D_{\mu}|\mathbf{\nabla}D_{\nu}\rangle\] \[Q_{\mu\nu} = \frac{\hbar^{2}}{2m}\langle\mathbf{\nabla}D_{\mu}|\mathbf{\nabla}D_{\nu}\rangle\] \[w_{\mu\nu} = -i\hbar\langle D_{\mu}|\frac{\partial}{\partial t}D_{\nu}\rangle. \tag{3}\]
Since \(|\mathbf{\nabla}|\sim k\) and the size of the momentum distribution is smaller than the recoil momentum \(p_{r}\), as a second simplification we neglect the kinetic and spin-orbit coupling contributions with respect to the scalar term \(\hat{Q}\), _i.e._ the first and second right-hand side terms of the Hamiltonian in Eq. (2), respectively. The state evolution in the dark-state subspace is then given by the unitary transformation
\[\hat{U}(t)=\mathcal{T}\exp\left[-i\int_{0}^{t}\left(\hat{Q}(t^{\prime})+\hat{ w}(t^{\prime})\right)\mathrm{d}t^{\prime}\right]. \tag{4}\]
where, \(\mathcal{T}\) is the time-ordering operator. From the spatial configuration of our tripod beams (see Fig. 1b), we derive the following expression for the scalar term
\[\hat{Q}=-\frac{p_{r}^{2}}{2m}\left(\begin{array}{cc}2(1+\sin^{2}\varphi)& \cos\vartheta\sin 2\varphi\\ \cos\vartheta\sin 2\varphi&2\cos^{2}\vartheta(1+\cos^{2}\varphi)\end{array}\right) \tag{5}\]
and the final term on the right-hand side Eq. (2) reads
\[\hat{w}=\hbar\cos\vartheta\frac{\partial\varphi}{\partial t}\hat{\sigma}_{y}, \tag{6}\]
where \(\hat{\sigma}_{y}\) is the \(y\)-component Pauli matrix. The operator \(\hat{w}\) plays a key role since it is responsible for the geometric atomic beam splitting [14; 19; 23]. We also note that this term has no specific energy scale since it depends on the temporal profile of the Gaussian pulse. The latter has to be slow enough to fulfill the adiabatic condition, namely \(\langle\hat{w}\rangle\ll\hbar\Omega\), at all times.
The plain curves in Fig. 1d&e and Fig. 3 are obtained with numerical integrations of Eq. (4), whereas the projections onto the bare states are extracted from Eq. (1). The dashed curves are obtained by averaging over the momentum distribution of our thermal sample, with a temperature of \(T_{0}=50\,\mathrm{nK}\). Here the momentum-dependence is obtained in the semi-classical limit by reintroducing the previously overlooked first and second terms in the right-hand side of Eq. (2) [17].
Figure 3: (a) Interference fringes generated with an abrupt phase change of the beam 3 coupling the \(|3\rangle\rightarrow|e\rangle\) transition. This phase jump \(\Phi\) is introduced at the middle of the free evolution time, \(T=6\,\mathrm{\mu s}\). The plain and dashed curves represent the numerical integration for temperatures of 0 and 50 nK, respectively. (b) Populations of bare states after the Ramsey pulse sequence as a function of free evolution time \(T\). The black-dashed curves represent a fit using an exponentially damped oscillation.
Our model, together with the damping due to the finite temperature, captures the main experimental features well, opening the door for insightful physical interpretations of this geometric Ramsey interferometer. As we have already mentioned, the initial state \(|3\rangle\) is connected to \(|D_{2}\rangle\) dark state [21]. For a complete description, we shall highlight that the dark states at the end of \(\pi/2\)-pulse are asymptotically connected to the bare states as \(|D_{1}\rangle\rightarrow|1\rangle\) and \(|D_{2}\rangle\rightarrow|3\rangle\)[19]. This point can be easily verified, using Eq. (1) and noticing that at the end of the \(\pi/2\)-pulse \(\varphi\rightarrow\pi/2\) and \(\vartheta\rightarrow\pi/2\). Similarly, at the end of \(-\pi/2\)-pulse, the dark states are connected to the bare states as \(|D_{1}\rangle\rightarrow|2\rangle\) and \(|D_{2}\rangle\rightarrow|3\rangle\). Hence, we understand that even if the geometric Ramsey interferometer is fundamentally a two-level interferometer in the dark-state subspace, we still need the three bare-ground states for a complete description. It leads to a multiple input-output port device, where the matter-wave propagation direction can be controlled by the pulse ordering sequence and a phase-sensitive signal (compare the bare-state population distribution locations in Fig. 2a-c). This principle can be utilized for implementing an atomtronic bilateral switch where either the phase jump \(\Phi\) or the free evolution time \(T\) can be used as the switching control parameter.
Another insightful interpretation of our model concerns the nature of the phase accumulation during the free evolution time, which can be clearly associated to the scalar term \(\hat{Q}\). Indeed, during the free evolution time \(\partial\varphi/\partial t\to 0\), so \(\hat{w}\rightarrow\hat{0}\), where \(\hat{0}\) is the zero operator.The last remaining term, which is the scalar potential, takes the asymptotic expression
\[\lim_{\varphi\rightarrow\pi/2,\vartheta\rightarrow\pi/2}\hat{Q}=-\frac{p_{r}^ {2}}{m}\left(\begin{array}{cc}2&0\\ 0&0\end{array}\right). \tag{7}\]
Moreover, the dotted curves in Fig. 2d correspond to numerical integrations of Eq. (4) setting \(\hat{Q}=\hat{0}\) at all times. Here, no phase shift is observed as the population transfers back to \(|3\rangle\) at the output of the interferometer. A similar situation occurs with trapped ions in Lamb-Dicke regime [14]. We note that the presence of a non-zero \(\hat{Q}\) term leads to a non-intuitive situation where the dressed-state picture remains meaningful even if the tripod beams are turned off, provided that the adiabatic asymptotic connection, depicted by Eq. (7), is fulfilled. In addition, the energy difference between the states \(|D_{1}\rangle\) and \(|D_{2}\rangle\) leads to a phase accumulation during the free evolution time of \(|\hat{Q}_{11}-\hat{Q}_{22}|T/\hbar=2p_{r}^{2}T/\hbar m\) in agreement with the previously discussed bare state approach.
Finally, we check the geometrical nature of the matter-wave splitter, searching for time-independent behavior by either compressing or inflating the temporal sequence of the matter-wave splitter. We show in Fig. 4 the deviation \(\Delta\theta\equiv\theta_{exp}-\pi/2\) of the polar angle of the dark-state coherent superposition after the first \(\pi/2\)-pulse, as a function of the temporal standard deviation of the Gaussian pulses \(\sigma_{t}\). When the duration of the pulse sequence is within \(3~{}\mu\)s \(<\sigma_{t}<15~{}\mu\)s, the deviation is in agreement with a null value, indicating a time-independent geometric matter-wave splitter. For \(\sigma_{t}<3~{}\mu\)s, the non-zero deviation indicates that the pulse sequence is not fully adiabatic. For \(\sigma_{t}>15~{}\mu\)s, the adiabatic approximation is fulfilled, but \(\hat{w}\) becomes too small with respect to the spin-orbit and kinetic terms of Eq. (2), leading to a breakdown of the approximation of our model given by Eq. (4) [19]. We used \(\sigma_{t}=2.5~{}\mu\)s in the experiment in order to reduce the length of the pulse sequence as a trade-off between non-adiabaticity and effects of thermal dispersion.
In conclusion, we have explored a geometric Ramsey interferometer based on a tripod scheme. This interferometer reduces to a two-level system in the dark-state subspace but can also be viewed as connecting the three internal ground-bare states in a configuration with multi-input-output ports. We showed that the phase accumulation during the free-evolution time is due to a geometric scalar potential that encapsulates the kinetic energy difference of the bare states. Because these states are time-independent, geometric manipulations of quantum states are generally more robust than their dynamical counterparts. This robustness can be translated here to an interferometer that is insensitive to the mean velocity of the atomic ensemble, making it suitable for possible applications in quantum simulations and computing, and atomtronics circuits [24; 25; 26; 27].
In the future, other types of interferometers, such as Ramsey-Borde interferometers [28; 29] or Mach-Zehnder interferometers [30] can be envisioned using similar geometric approaches. The former can be utilised for precision measurements of the photon recoil shift to determine the fine-structure constant [31; 32], while the latter can serve for inertial sensing applications such as gravimetry [33], gradiometry [34], or tests of the equivalence prin
Figure 4: Deviation of the polar angle of the dark-state coherent superposition after the first \(\pi/2\)-pulse as a function of \(\sigma_{t}\).
ciple [35], to name a few. Finally, the inherent slow response time of adiabatic transformation can be addressed using shortcuts to adiabaticity schemes [36], enabling the implementation of large-area interferometry [37].
The authors thank Du Jinyi and Lucas Gabardos for careful reading of the manuscript. This work was supported by the CQT/MoE funding Grant No. R-710-002-016-271, the Singapore Ministry of Education Academic Research Fund Tier2 Grant No. MOE-T2EP50220-0008, and the Temasek Laboratories Grant No. TLSP23-08.
|
2308.16812 | Tail estimates for the stationary stochastic six vertex model and ASEP | This work studies the tail exponents for the height function of the
stationary stochastic six vertex model in the moderate deviations regime. For
the upper tail of the height function we find upper and lower bounds of
matching order, with a tail exponent of $\frac{3}{2}$, characteristic of KPZ
distributions. We also obtain an upper bound for the lower tail of the same
order.
Our results for the stochastic six vertex model hold under a restriction on
the model parameters for which a certain "microscopic concavity" condition
holds. Nevertheless, our estimates are sufficiently strong to pass through the
degeneration of the stochastic six vertex model to the ASEP. We therefore
obtain tail estimates for both the current as well as the location of a second
class particle in the ASEP with stationary (Bernoulli) initial data. Our
estimates complement the variance bounds obtained in the seminal work of
Bal\'azs and Sepp\"al\"ainen.} | Benjamin Landon, Philippe Sosoe | 2023-08-31T15:40:57Z | http://arxiv.org/abs/2308.16812v4 | # Tail estimates for the stationary
# Tail estimates for the stationary
stochastic six vertex model and ASEP
Benjamin Landon
University of Toronto
Department of Mathematics
[email protected]
Philippe Sosoe
Cornell University
Department of Mathematics
[email protected]
**Abstract:** This work studies the tail exponents for the height function of the stationary stochastic six vertex model in the moderate deviations regime. For the upper tail of the height function we find upper and lower bounds of matching order, with a tail exponent of \(\frac{3}{2}\), characteristic of KPZ distributions. We also obtain an upper bound for the lower tail of the same order.
Our results for the stochastic six vertex model hold under a restriction on the model parameters for which a certain "microscopic concavity" condition holds. Nevertheless, our estimates are sufficiently strong to pass through the degeneration of the stochastic six vertex model to the ASEP. We therefore obtain tail estimates for both the current as well as the location of a second class particle in the ASEP with stationary (Bernoulli) initial data. Our estimates complement the variance bounds obtained in the seminal work of Balazs and Seppalainen.
## 1 Introduction
The stochastic six vertex model (S6V) is a specialization of the classical six vertex model, and has been studied extensively since its introduction by Gwa and Spohn in [30]. There has been a renewed interest in the S6V model in recent years, starting with the work [14], due to its belonging to the Kardar-Parisi-Zhang (KPZ) class of stochastic growth models. Advances in the theory of integrable probability have led to the discovery of exact formulas for observables in this model. These have allowed a rigorous confirmation of previously predicted asymptotic properties of this model and its variants, including the identification of limiting distributions of the height function with those coming from random matrix theory that are characteristic of the KPZ universality class [2, 14]. Higher-spin and colored vertex models have been introduced as generalizations of the S6V model that retain a high degree of integrability [16].
In this paper, we derive tail estimates of the correct order in the moderate deviations regime for the height function of the stationary S6V model. It is well known that under a certain degeneration of the parameters, the S6V model converges to the asymmetric simple exclusion process (ASEP) [1]. While our results for the S6V model require certain conditions on the parameters to hold (in order to construct a certain "microscopic concavity" coupling of the S6V model), they are otherwise uniform in the parameters and survive degeneration to the ASEP. We consequently deduce estimates both for the current fluctuations as well as for the location of a unique second class particle in the ASEP at equilibrium.
The ASEP is another classical model of mathematical physics in the KPZ class, the understanding of which has progressed greatly over the past two decades. Our results for ASEP supplement those of Balazs-Seppalainen [12] in their breakthrough paper on cube-root fluc
tuations in ASEP. In the spirit of [9, 11, 12], our work relies on certain couplings of the S6V dynamics for different initial data to control the fluctuations (see also [3, 37] for other works exploiting couplings in the S6V model). We thus forego contour integral representations and other exact formulas obtained by transfer matrix methods or Yang-Baxter relations.
Gwa and Spohn [30] noted that, for special values of the weights, the classical six vertex model satisfies a certain spatial Markovian property that implies that configurations can be sampled sequentially in subdomains. This observation enabled them to study the model by transfer matrix methods, and identify fluctuation exponents characteristic of the Kardar-Parisi-Zhang universality class. Later, Borodin, Corwin and Gorin [14] performed a detailed spectral analysis of the transfer matrix, providing contour integral representations amenable to asymptotic analysis. They proved that the height function of the model, after centering by its limit shape, exhibits KPZ fluctuations for step initial condition. These authors also explained how the ASEP could be realized as a limit of the stochastic six vertex model, so that the latter model appears as a natural two-dimensional generalization of ASEP. Indeed, considering one of the two coordinate directions as time, the stochastic six vertex model can be interpreted as a discrete-time version of a simple exclusion process, with the continuous time ASEP appearing after taking a suitable limit. This approximation of ASEP by the stochastic six vertex model was rigorously proved by Aggarwal in [1].
Aggarwal then exploited approximation by the six vertex model in his study of fluctuations of the stationary ASEP [2] (see also Aggarwal-Borodin [4]), using a stationary variant of the S6V model, whose spectral theory had been developed in great generality by Borodin and Petrov in [15]. Aggarwal showed that the rescaled fluctuations of the current of these models in equilibrium (with Bernoulli initial data) are asymptotically described by the Baik-Rains distribution. This distribution is characteristic of KPZ models at equilibrium. The methods in both [1] and [11] play important roles in the present work.
Here, we obtain tail estimates for KPZ quantities in the S6V and ASEP models, including the location of a second class particle started at the origin in a model with equilibrium initial data. Our upper and lower bounds for the upper tail of the height function and current are of optimal order, in the sense that the upper tail exponents match those of the asymptotic distribution found by Aggarwal. We also obtain upper bounds for the lower tail of the height function, but with a tail exponent \(\frac{3}{2}\); the optimal exponent is likely \(3\).
For the ASEP, our main results, Theorems 2.4 and 2.5 should be compared with the corresponding results in the seminal work by Balazs and Seppalainen [12], in which cube root fluctuations were first obtained for the stationary model, based on an argument originally developed by these authors and Cator in [9]. Our results can be seen as estimates for the tail on an exponential scale, whereas [11] estimates the fluctuations at the level of low moments.
### Moderate deviations of KPZ models
In recent years there has been significant interest in obtaining tail estimates or moderate deviations results for observables of models in the KPZ universality class. One motivation is that tail estimates are often required as inputs in studying more detailed properties of KPZ models beyond convergence of the one-point distributions: i.e., weak convergence of the height function may not be sufficient by itself to take union bounds over a growing number of events. See, for example, the work [25] on the annealed path measure of the continuum directed random polymer, the work [5] on the construction of the ASEP speed process, and
recent works on fluctuations of lozenge tilings [6, 31, 32], among many others, all requiring inputs beyond the one-point convergence of KPZ quantities.
One straightforward application (pointed out to us by Aggarwal, and mentioned in his work [2]) is weak convergence of the two-point function of the stationary ASEP and six vertex model, defined by \(S(y,x):=\mathrm{Cov}(\eta_{y}(x),\eta_{0}(0))\) where \(\eta_{y}(x)\) is the indicator function of there being a particle in the ASEP at site \(x\) at time \(y\), or a, say, vertical arrow outgoing from the vertex \((x,y)\) in the S6V, for \(y\to\infty\) and \(x\) near the characteristic line. Due to the fact that \(S(y,x)\) can be seen as the discrete Laplacian (in variable \(x\)) of the height function, this essentially follows from convergence in distribution of the height function (proved by Aggarwal [1]) and sufficient tightness (the result of our work), allowing one to deduce convergence of the variance. Indeed, this argument was carried out by Baik, Ferrari and Peche in the case of the TASEP [8], by proving tightness on an exponential scale and relying on the distributional convergence proven in [28], and the same argument applies here. We remark that two point function for the ASEP was considered in the 1985 work [45] of van Beijeren, Kutner and Spohn who predicted that \(S(x,y)\) would be of order \(y^{-2/3}\) near the characteristic line. This application of our work confirms this prediction.
A second motivation lies in the fact that the variety of approaches available for tail estimates offers hope of understanding models that are not exactly solvable or integrable. Tail exponents seem to be more universal than the one-point distributions themselves: for example, the exponents of \(3/2\) and \(3\) govern the upper and lower tails of both the Baik-Rains and Tracy-Widom distributions, and have been shown to govern the solution of the KPZ equation with general initial data [20]. In the work [35] we exhibited an upper tail exponent of \(\frac{3}{2}\) for a class of diffusions with asymmetric interaction that are not expected to be integrable.
#### 1.1.1 Integrable approaches to tail estimates
Methods on the integrable side should be distinguished both between approaches that work for zero-temperature models admitting determinantal descriptions and positive temperature models with more involved formulas, and as well as between methods that work for the upper and lower tails, with the lower tail typically being more challenging.
Due to the nature of their determinantal representation, upper tail estimates for zero-temperature models such as exponential last passage percolation (LPP) can be proved directly via upper bounds on the operator kernel that appears in the determinant. This fails in general for the lower tail. Approaches to the lower tail include Riemann-Hilbert methods for LPP [7] as well as the application of random matrix methods via distributional identities between KPZ models and random matrices [36].
Upper tail estimates have also been achieved for positive temperature models via determinantal formulas, see e.g., [13] for the case of the log gamma polymer. Exact formulas for moments were exploited in [20] to obtain an estimate for the upper tail of the KPZ equation with wedge initial data. This work also obtained estimates for general initial data.
An impressive array of integrable approaches for the lower tail of positive temperature models are actively being developed. Exact formulas for Laplace transforms were exploited in the work [21] on the KPZ equation. The work [18] also studies the lower tail of the KPZ equation via a Riemann-Hilbert approach. The recent work [22] exploited connections to a periodic LPP model to establish lower tail estimates for the \(q\)-pushTASEP.
Sub-optimal tail estimates for the ASEP with step-Bernoulli initial data were obtained
in [5] via an identity for the \(q\)-Laplace transform obtained via degeneration from the S6V model. In contrast, we obtain tail estimates for the stationary S6V model itself, before the degeneration.
#### 1.1.2 Probabilistic and geometric methods
Approaches based on probabilistic couplings have their roots in the seminal works of Cator, Balazs and Seppalainen finding scaling exponents for the current in exclusion processes [9, 11]. These methods were further extended to directed polymer models [19, 38, 43, 44]. However, these methods could only access low moments of the KPZ observables of interest.
Works of the second author with Noack [39, 40] extended these estimates for polymer models to arbitrary moments with only a small polynomial error deficiency. Around the same time, Emrah, Janjigian and Seppalainen [27] gave probabilistic proofs of tail estimates for exponential LPP using couplings and an additional input of a certain identity for the moment generating function of a two-parameter version of LPP, originally due to Rains [42] (hereafter, the _Rains-EJS identity_).
In our previous work [35], we established upper and lower bounds for the upper tail for the stationary versions of the four integrable polymers as well as a non-integrable model of diffusions, extending the methodology of [27] and finding a Rains-EJS identity for polymer models (see also [46] for some related results obtained simultaneously). As a byproduct, this approach also gives a sub-optimal estimate (but nonetheless subexponential) for the lower tail, a result typically delicate from the point of view of integrable probability. This method relies on the fact that the models are formulated in a quadrant, and expressing the KPZ observable of interest as a sum of boundary increments along two boundary components.
Particle systems such as the ASEP are defined on the entire line \(\mathbb{Z}\), and do not naturally fit into this framework, and so it is unclear how exactly the method could be extended to such processes. However, the stationary S6V model does lie in a quadrant, and so our contribution is to extend for the first time coupling arguments based on [27, 35] to this model. Care must be taken to obtain estimates that survive degeneration to the ASEP. Further discussion of our methodological novelties is deferred to Section 2.5. Our work thus fulfills a goal of producing upper _and_ lower tail estimates in two positive temperature models, the ASEP and the S6V, delicate results even from the point of view of integrable probability.
We also briefly discuss approaches based on geometry to moderate deviations. The work [29] shows that in LPP, non-optimal tail estimates can be used as input to derive tail estimates with the optimal exponent. Remarkably, this works for general weight distributions (however the required input estimates for general weights are beyond current techniques). In [34] we extended this approach to positive temperature polymer models, and used the sub-optimal lower tail estimate from our prior work [35] to establish the optimal tail exponent of \(3\) for the lower tail of the O'Connell-Yor polymer.
At the moment it is not clear how these geometric considerations can be applied to particle systems or the S6V model. It would be interesting if the \(3/2\) exponent we obtain for the lower tail of the ASEP can be improved to any \(3/2+\varepsilon\) via geometric considerations.
Finally, we also note some recent works on the large deviations of the ASEP and related models. A large deviations upper bound in some regimes for the ASEP was obtained in [23] via Fredholm determinants and applied to their study of the coarsening model in \(\mathbb{Z}^{d}\). The full large deviations principle was then obtained in [26]. Recent work on the lower large deviations
tail for the \(q\)-deformed polynuclear growth was obtained in [24].
## 2 Definitions and results
### Definition of stochastic six vertex model
A six vertex directed path ensemble is a collection of up-right non-crossing directed paths connecting vertices of the quadrant
\[\mathbb{T}:=\{(x,y)\in\mathbb{Z}^{2}:x\geq 0,y\geq 0\}\backslash\{(0,0)\} \tag{2.1}\]
such that:
* Every path begins from either the \(x\)-axis or the \(y\)-axis, and the first segment of each path leaves the coordinate axes and immediately enters the _bulk_\(\{(x,y)\in\mathbb{Z}^{2}:x,y>0\}\).
* Paths do not share edges, but they may share vertices.
An example of a six vertex directed path ensemble (restricted to a finite box) appears in Figure 1. As a consequence of the definition, each vertex in \(\mathbb{Z}_{>}^{2}\) (here \(\mathbb{Z}_{>}:=\{n\in\mathbb{Z}:n>0\}\)) has six possible configurations which are displayed in Figure 2 (along with weights which will be used to define the random model momentarily). We view the second vertex configuration in Figure 2 as two paths bouncing off each other as opposed to crossing.
The stochastic six vertex (S6V) model is defined as follows. First, it requires the specification of _boundary_ or _initial data_. That is, a specification of which of the vertices \(\{(1,y):y\geq 1\}\) contain incoming horizontal arrows from the left and which of the vertices \(\{(x,1):x\geq 1\}\) contain incoming vertical arrows from below. One example commonly considered in the literature is step initial data, in which all of the vertices along either the \(x\)-axis or \(y\)-axis contain incoming arrows but the other axis contains none. However, the main case of interest in the present work are _two-sided Bernoulli initial data_, in which incoming arrows along the \(y\)-axis appear independently with probability \(b_{1}\) and along the \(x\)-axis with probability \(b_{2}\). We call this \((b_{1},b_{2})\) two-sided Bernoulli initial data.
Given the boundary data and parameters \(\delta_{1},\delta_{2}\in[0,1]\) sampling of the S6V model proceeds in the following Markovian manner. Let \(\mathbb{T}_{n}:=\{(x,y)\in\mathbb{Z}_{>}^{2}:x+y\leq n\}\). Beginning with \(n=2\), we sample all the vertices in \(\mathbb{T}_{n}\backslash\mathbb{T}_{n-1}\), conditional on the incoming arrows from \(\mathbb{T}_{n-1}\) to \(\mathbb{T}_{n}\) independently according to the probabilities in Figure 2. Note that the configurations of the vertices in \(\mathbb{T}_{n-1}\) specify all of the incoming arrows to vertices in \(\mathbb{T}_{n}\backslash\mathbb{T}_{n-1}\) and so the order of sampling the new vertices does not matter. The stochastic vertex model is then defined as the limit of these measures as \(n\to\infty\). However, in our work we will not need to make any use of the infinite volume measure, as the observables we consider will depend only on finitely many vertices, typically the configuration in a box \(\{(x,y):x\leq X,y\leq Y\}\).
It is helpful to think of the arrows as particle trajectories, especially in the context of the scaling limit to the ASEP. We will often refer to the arrows as particles and use these terms interchangeably.
The main observable of interest for the stochastic six vertex model is the height function \(H(x,y)\). We define it as the net flux of particles/arrows crossing the straight line segment connecting \((0,0)\) to \((x,y)\), with an arrow crossing from left to right contributing \(+1\) to the flux and \(-1\) if it crosses from right to left (an arrow crossing this line segment at the point
\((x,y)\) counts - i.e., \(H(x,y)\) coincides with the net flux across the line segment connecting \((0,0)\) to \((x+\varepsilon,y+\varepsilon)\) for all small \(\varepsilon>0\)).
If we want to emphasize the dependence of the height function on the boundary configuration of incoming arrows \(\xi_{0}\), then we will use the notation \(H(x,y;\xi_{0})\) to indicate the dependence. We introduce the following notation for the case of two-sided Bernoulli initial data.
**Definition 2.1**.: _Consider a stochastic six vertex model with \((b_{1},b_{2})\) two-sided Bernoulli initial data. The height function as defined above at the point \((x,y)\) will be denoted by \(H^{(b_{1},b_{2})}(x,y)\)._
Another definition of the height function given in [2] is as follows. Give a path the color red if it originates from the \(x\) axis, and blue if it originates from the \(y\) axis. The height function is then the number of blue paths that intersect the line \(\{(i,j):j=y\}\) to the right of \((x,y)\) minus the number of red paths that intersect the line \(\{(i,j):j=y\}\) at or to the left of \((x,y)\). Note that at most one of these numbers may be non-zero, due to the non-intersection property. This definition coincides with ours up to \(\pm 1\), in the case that a red arrow intersects the line \(\{(i,j):j=y\}\) to the left of \((x,y)\) but then proceeds horizontally to the right before eventually turning upwards at a point to the right of \((x,y)\). The \(\pm 1\) discrepancy makes no difference for any of our main results. We remark that our flux definition does not require the non-crossing interpretation of the final vertex configuration of Figure 2, nor does it require the blue/red color labelling, so it has some advantages. For most purposes the difference is immaterial.
Figure 1: An example of a six vertex directed path ensemble.
Figure 2: Weights of the possible vertex configurations of the stochastic six vertex model. The final configuration on the right is viewed as two paths bouncing off each other instead of crossing.
### Results for stochastic six vertex model
Consider the stochastic six vertex model with \((b_{1},b_{2})\) two-sided Bernoulli initial data. Assume that \(1>\delta_{1}>\delta_{2}>0\) and introduce
\[\kappa:=\frac{1-\delta_{1}}{1-\delta_{2}}. \tag{2.2}\]
As observed in [2], in the case that
\[\frac{b_{1}}{1-b_{1}}=\kappa\frac{b_{2}}{1-b_{2}}, \tag{2.3}\]
the S6V model is stationary or translation-invariant in a sense that will be made precise in Section 3 below. In brief, this stationarity is the statement that, for any \((x,y)\), whether there is a horizontal arrow incoming to the vertices \(\{(x,i):i\geq y\}\) or a vertical arrow incoming to the vertices \(\{(i,y):i\geq x\}\) are Bernoulli random variables with probabilities \(b_{1},b_{2}\), respectively. In particular, this is the same as the initial boundary data.
Due to our uses of various couplings in the stochastic six vertex model, our main results do not hold for all choices of parameters. Essentially, we require that at least one of the parameters \(\delta_{1},\delta_{2}\) is strictly less than \(\frac{1}{2}\). Our methods work as long as this holds.
However, due to the fact that we wish to study the ASEP via degeneration, we need to allow \(\delta_{1},\delta_{2}\) to tend to \(0\) as the coordinates \((x,y)\) at which we measure the height function tend to infinity. We therefore allow \(\delta_{1},\delta_{2}\) to depend on \(x\) and \(y\), but make some minimal quantitative assumptions that simplify the analysis. For this reason, the below assumptions look slightly more complicated than simply that \(\delta_{2}<\frac{1}{2}\), but nonetheless are satisfied if this holds and the parameters are fixed, independent of \(x\) and \(y\). Additionally, the assumptions below allow degeneration to the ASEP.
**Assumption 2.2**.: _Let \(1>\delta_{1}>\delta_{2}>0\) be the parameters of the six vertex model and \(\kappa\) be as above. Let \(\theta\) be defined by_
\[\theta:=\frac{\delta_{1}\wedge 0.5-\delta_{2}}{\delta_{1}\wedge 0.5+\delta_{2}}. \tag{2.4}\]
_We assume there is a constant \(\mathfrak{a}>0\) so that:_
* \(\theta\geq\mathfrak{a}\)_._
* \(1-\delta_{1}\geq\mathfrak{a}\)_._
* \(\mathfrak{a}\delta_{1}\leq 1-\kappa\leq\mathfrak{a}^{-1}\delta_{1}\)__
_Note that the first assumption implies \(\frac{1}{2}>\delta_{2}\)._
**Remark.** If we regard \(\delta_{1}\) and \(\delta_{2}\) as fixed parameters, then the assumptions hold under the qualitative assumptions \(1>\delta_{1}>\delta_{2}>0\) and \(\delta_{2}<\frac{1}{2}\).
In the more general case where \(\delta_{1}\) and \(\delta_{2}\) vary, then the above assumptions are quantitative restrictions on the nature in which they can vary. Essentially, they require that \(\delta_{1}\) and \(\delta_{2}\) cannot degenerate to \(1\) and \(\frac{1}{2}\), respectively, and that \(\delta_{1}-\delta_{2}\) is not too small compared to \(\delta_{1}\).
In the degeneration to the ASEP, one takes \(\delta_{1}=\varepsilon L\) and \(\delta_{2}=\varepsilon R\) for fixed \(L\neq R\) and \(\varepsilon\downarrow 0\). In particular, the above assumptions hold for this degeneration.
Our main result on the tails of the height function of the stationary stochastic six vertex model is as follows.
**Theorem 2.3**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and assume they satisfy (i) and (ii) of Assumption 2.2 for some \(\mathfrak{a}>0\). Let \(b_{1}\) and \(b_{2}\) satisfy (2.3) and that \(\mathfrak{a}\leq b_{i}\leq 1-\mathfrak{a}\) for \(i=1,2\). Let \(y\) satisfy, \(y(1-\kappa)\geq 1.\) Let_
\[x_{0}:=y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2}. \tag{2.5}\]
_There are constants \(c,C>0\) depending only on \(\mathfrak{a}\) so that for, \(1\leq u\leq(y(1-\kappa))^{2/3}\) we have,_
\[\mathbb{P}\left[\left|\frac{H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_{1},b_{2} )}(x,y)}{(y(1-\kappa))^{1/3}}\right|>u\right]\leq C\mathrm{e}^{-cu^{3/2}}+C \mathrm{e}^{-cu^{2}(y(1-\kappa))^{2/3}/|x-x_{0}|}. \tag{2.6}\]
_If there is an \(A>0\) so that \(|x-x_{0}|\leq A(y(1-\kappa))^{2/3}\), and additionally, (iii) of Assumption 2.2 holds, then,_
\[c\mathrm{e}^{-Cu^{3/2}}\leq\mathbb{P}\left[\frac{H^{(b_{1},b_{2})}(x,y)- \mathbb{E}[H^{(b_{1},b_{2})}(x,y)}{(y(1-\kappa))^{1/3}}>u\right]\leq C\mathrm{ e}^{-cu^{3/2}} \tag{2.7}\]
_for \(1\leq u\leq c(y(1-\kappa))^{2/3}\), where the constants depend on \(A\)._
_If there is a \(C_{1}>0\) so that \(|x-x_{0}|\leq C_{1}y(1-\kappa)\) then there is a \(C_{2}>0\) so that for \(u>C_{2}(y(1-\kappa))^{2/3}\),_
\[\mathbb{P}\left[\left|H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_{1},b_{2})}(x,y) ]\right|>u(y(1-\kappa))^{1/3}\right]\leq C\mathrm{e}^{-cu(y(1-\kappa))^{1/3}}, \tag{2.8}\]
_if all three items of Assumption 2.2 hold._
We now discuss the above results. It is important to examine the role of all of the parameters in order to appreciate the stated estimates; in particular, one would like to understand the various appearances of the parameter \((1-\kappa)\), to see whether or not the above estimates contain the optimal scaling in this parameter.
The main context in which to understand the above results is the work of [2] which shows that the recentered height function, when rescaled by \((y(1-\kappa))^{1/3}\), converges to (a slight rescaling of) the Baik-Rains distribution, when \(y\to\infty\) and when \(x\) satisfies the characteristic direction assumption,
\[|x-x_{0}|\leq A(y(1-\kappa))^{2/3}. \tag{2.9}\]
It is therefore natural to take the main large asymptotic parameter in the above statements to be \((y(1-\kappa))\). In particular, we need to obtain the correct dependence in \((1-\kappa)\) as this parameter tends to \(0\) in the degeneration to the ASEP.
The upper tail of the Baik-Rains distribution decays as \(\mathrm{e}^{-cs^{3/2}}\). Therefore, under (2.9), the estimates (2.7) are expected to be optimal up to the value of the constant in the exponential, and moreover the scaling in the parameter \(y(1-\kappa)\) is of the correct order. Note that under the assumption (2.9) the additional Gaussian term on the RHS of (2.6) is dominated by the \(\mathrm{e}^{-cu^{3/2}}\) term. Under the characteristic directon assumption, we therefore obtain an upper bound for the lower tail of the height function of the same order as the upper tail. This estimate for the lower tail is not expected to be optimal (we expect \(\mathrm{e}^{-c|s|^{3}}\), but the dependence on \((1-\kappa)\) is still correct), but is nonetheless useful, as obtaining decay of the lower tail is in general a delicate problem.
When \(|x-x_{0}|\gg(y(1-\kappa))^{2/3}\) one instead expects the height function to have Gaussian fluctuations to leading order; this is reflected in the sub-Gaussian tail on the RHS of (2.6). The dependence on \(|x-x_{0}|\) is correct. In fact, given the above estimates and the stationarity discussed in Section 3 below it is straightforward to prove convergence to Gaussian fluctuations. See, e.g., the proof of Corollary 2.4 of [33]. Of course, this would also follow from [2].
Finally, the restriction to \(u\leq c(y(1-\kappa))^{2/3}\) is expected; the range \(u\geq(y(1-\kappa))^{2/3}\) is in the macroscopic large deviations regime, and here one expects a model-dependent rate function to arise. In this regime we have stated for possible further use the estimate (2.8) which is likely non-optimal (but matches the tail \(\mathrm{e}^{-cu^{3/2}}\) in the cross-over regime \(u\approx(y(1-\kappa))^{2/3}\)).
### Results for the ASEP
The asymmetric exclusion process (ASEP) is an interacting particle system on \(\mathbb{Z}\), where each site \(i\in\mathbb{Z}\) can contain at most one particle. The ASEP evolves as follows. Given two rates \(L,R>0\), we place two independent exponential clocks at each site \(i\in\mathbb{Z}\) with rates \(L\) and \(R\), respectively. Whenever a clock rings at an occupied site \(i\), the particle attempts to jump to the right or left if the appropriate adjacent site is unoccupied. If the target site is occupied, then the particle does nothing.
The main initial condition we consider for the ASEP is \(b\)-Bernoulli initial data, in which initially each site is occupied with probability \(b\) independently. Note that these are invariant measures for the ASEP.
The height function or current \(J_{t}(x)\) of the ASEP is the net flux of particles across the line connecting \((0,1/2)\) to \((t,x+1/2)\) in the space-time plane. In particular \(J_{t}(0)\) is the net flux of particles crossing the edge \((0,1)\). We have the decomposition of \(J_{t}(x)\) as
\[J_{t}(x)=J_{t}(0)-\sum_{j=1}^{x}\eta_{j}(t) \tag{2.10}\]
where \(\eta_{j}(t)\) is the indicator function of whether or not there is a particle at site \(j\) at time \(t\).
**Theorem 2.4**.: _Let \(b\in(0,1)\) and consider the height function of the ASEP \(J_{t}(x)\) with \(b\)-Bernoulli initial data. Assume the rates satisfy \(L>R\geq 0\). Let \(T\geq 1\) and define,_
\[x_{0}:=(L-R)(2b-1)T. \tag{2.11}\]
_Then for all \(1\leq u\leq(T(L-R))^{2/3}\) we have,_
\[\mathbb{P}\left[\left|\frac{J_{T}(x)-\mathbb{E}[J_{T}(x)]}{(T(L-R))^{1/3}} \right|>u\right]\leq C\mathrm{e}^{-cu^{3/2}}+C\mathrm{e}^{-cu^{2}(T(L-R))^{2/3 }/(1+|x-x_{0}|)}, \tag{2.12}\]
_for some \(C,c>0\). If there is an \(A>0\) so that \(|x-x_{0}|\leq A(T(L-R))^{2/3}\) then,_
\[c\mathrm{e}^{-Cu^{3/2}}\leq\mathbb{P}\left[\frac{J_{T}(x)-\mathbb{E}[J_{T}(x) ]}{(T(L-R))^{1/3}}>u\right]\leq C\mathrm{e}^{-cu^{3/2}} \tag{2.13}\]
For the ASEP with stationary initial data, [1] shows convergence of the current rescaled by \(T^{1/3}\) to the Baik-Rains distribution under the condition \(|x-x_{0}|\leq AT^{2/3}\). As discussed in the previous section, the upper and lower bounds above are therefore of optimal order.
#### 2.3.1 Second class particles
Second class particles arise naturally in the study of the ASEP. They can be defined for this model in a few equivalent ways. A first definition is obtained by designating the particles of the ASEP whose dynamics was described in the previous section as "first class." First class particles are subject to exclusion by other first class particles. One then adds second class particles, subject to exclusion by both first and second class particles. Their dynamics is as follows: when a clock rings at a site occupied by a second class particle, it performs the corresponding jump if there is no first or second class particle at the target site. However, if a first class particle jumps to a site occupied by a second class particle, the two particles swap locations.
A second definition is obtained by considering the difference between two ordered occupation processes \(\eta\) and \(\zeta\). The set-up is as follows. We assume that for the initial data we have \(\eta_{0}\leq\zeta_{0}\) (i.e., wherever \(\eta_{0}\) has a particle, so does \(\zeta_{0}\)), and that \(\eta_{t}\) and \(\zeta_{t}\) evolve under the _basic coupling_; that is, we place independent exponential clocks of rates \(L\), \(R\) at all sites \(i\in\mathbb{Z}\). Whenever the clocks ring, jumps are attempted by _both_ the particles of \(\eta\) and \(\zeta\) (subject to exclusion only by particles in their corresponding occupation process, so that marginally both \(\eta\) and \(\zeta\) evolve as the ASEP). By the attractivity of the ASEP process, we have that \(\eta_{t}\leq\zeta_{t}\) for all \(t\). Furthermore, if one considers the joint process of \(\eta_{t}\) and the discrepancies between \(\zeta_{t}\) and \(\eta_{t}\) (which we denote \(\zeta_{t}-\eta_{t}\)) then \((\eta_{t},\zeta_{t}-\eta_{t})\) evolves as the occupation process of first and second class particles.
For the position of the second class particle we obtain the following.
**Theorem 2.5**.: _Let \(Q(t)\) denote the position of a second class particle in the ASEP started from the origin with \(b\)-Bernoulli initial data elsewhere (that is, the origin is empty but the other sites are occupied independently with probability \(b\)). Let \(x_{0}\) be as above, i.e., \(x_{0}=(L-R)(2b-1)T\). There are constants \(C,c>0\) so that for \(0\leq u\leq T(L-R)\) we have,_
\[\mathbb{P}\left[|Q(T)-x_{0}|>u\right]\leq C\mathrm{e}^{-cu^{3}(T(L-R))^{-2}}. \tag{2.14}\]
We also obtain analogous results for second class particles in the S6V; for brevity we omit any such statement here. The interested reader is referred to Section 5.
### Two-point function
As discussed above, our results, together with the convergence proved by Aggarwal [2], imply the following for the two-point function of the stochastic six vertex model and the ASEP. The two-point function \(S(T,x)\) is defined by \(S(T,x)=\mathrm{Cov}(\eta_{x}(T),\eta_{0}(0))\) in the ASEP; for the six-vertex model one can replace \(\eta_{x}(T)\) by the event there is an outgoing vertical arrow from vertex \((x,T)\). One has that \(2S(T,x)=\Delta_{x}\mathrm{Var}(J_{T}(x))\) where \(\Delta_{x}f(x)=f(x+1)+f(x-1)-2f(x)\) is the discrete Laplacian, and a similar identity for the S6V [41]. From this, our tightness result, and the convergence of the current and height function to the Baik-Rains distribution [2], we deduce the following. The proof is similar to [8] (where it was proven for the TASEP) and is therefore omitted.
**Corollary 2.6**.: _For the ASEP with Bernoulli \(b\) initial data and \(R>L>0\) we have that the function \(w\to 2T^{2/3}\chi^{1/3}S(\delta^{-1}T,(1-2b)T+2\chi^{1/3}T^{2/3}w)\) converges to \(\frac{\chi}{4}g^{\prime\prime}_{BR}(w)\) as \(T\to\infty\)
_when integrated against smooth, compactly supported functions. Here, \(\delta=R-L\), \(\chi=b(1-b)\) and \(g_{BR}(w)\) is the variance of the Baik-Rains distribution \(F_{BR;w}\) as defined in, e.g., [2]._
_If \(S(y,x)\) instead denotes the two point function of the stochastic six vertex model (with \((b_{1},b_{2})\) Bernoulli initial data obeying (2.3)) and \(\delta_{2}>\delta_{1}\) with \(\delta_{1}<\frac{1}{2}\) then the function \(w\to 2T^{2/3}S(yT,x(T+\zeta wT^{2/3}))\) converges to \(\frac{\mathcal{F}^{2}}{\zeta^{2}}g_{BR}^{\prime\prime}(w)\) as \(T\to\infty\) in the same sense. Here, \(x=(1-\delta_{2})(b_{1}+\kappa(1-b_{1}))\), \(y=(b_{2}+\kappa^{-1}(1-b_{2}))(1-\delta_{1})\), \(\chi_{i}=b_{i}(1-b_{i})\) and_
\[\zeta=\frac{2(\delta_{2}-\delta_{1})^{2/3}\chi_{1}^{1/6}\chi_{2}^{1/6}}{(1- \delta_{1})^{1/2}(1-\delta_{2})^{1/2}},\qquad\mathcal{F}=(\delta_{1}-\delta_{2 })^{1/3}\chi_{1}^{1/3}\chi_{2}^{1/3}. \tag{2.15}\]
### Methods and outline
In [9, 11], Cator, Balasz and Seppalainen introduced a general methodology to bound the fluctuations of ASEP, which has since been extended and applied to many interacting particle systems, random growth models and polymers. The Cator-Balasz-Seppalainen method jointly estimates the fluctuations of the height function/passage time/polymer partition function \(H\), and a second quantity \(Q\) which represents the derivative of the first object with respect to a parameter in the initial data. The key point is that this object itself can in turn be bounded by differences of the partition function \(H\) (in our context this method is encapsulated in Propositions 5.1 and 6.1.) This second step is a type of "convexity", and exhibiting it in a given model is non-trivial (we use "convexity" as suggestive of the simple fact that a derivative of a convex function can be controlled by its difference quotients).
As explained in our previous paper [35], in the case of integrable polymer models and certain diffusion models with asymmetric interaction, the previous sketch can be implemented almost literally. The general formulation in [35] has its origins in an observation in the breakthrough paper [27]. In the latter paper, the authors exhibit an exponential identity for the passage time in stationary exponential last passage percolation. Our work [35] presents a way to obtain tail bounds for stationary models formulated in a quadrant, provided one can express the quantity of interest \(H\) as a sum of boundary increments along two boundaries. This method applies even to models which, unlike LPP and the four integrable polymer models, do not possess a straightforward interpretation as a sum or maximization over paths.
Hereafter, we will refer to the type of exponential identity central to [27, 35] as the _Rains-EJS formula_ - see Lemma 3.2 for the Rains-EJS formula in the six-vertex model. Interacting particle systems on the whole line \(\mathbb{Z}\), such as the ASEP, do not naturally fit in this framework, but the stationary stochastic six vertex model in a rectangular domain does. This opens the door to applying the tools in [9, 11, 27, 35] to S6V.
For the S6V model, it is unclear how to exhibit a meaningful coupling of the models that is a.s. convex with respect to parameters in the initial data. Instead, we consider second class paths in the vertex model, whose positions play the role of \(Q\). An adaptation of Balasz and Seppalainen's _microscopic concavity coupling_[10, 11], see Section 4.4, allows us to construct couplings for which the second class particles in models with ordered initial data are also ordered; see Proposition 4.1. Generating this microscopic concavity coupling is one of the main sources of the restriction on our parameters \(\delta_{1},\delta_{2}\). We exploit this to relate the position of second class particles to differences of height functions. Given this comparison, we implement an exact moment generating function (mgf) computation, in the spirit of the Rains-EJS formula of [27]; see Lemma 3.2. We estimate differences in height functions for
different values of the parameters through comparison of the mgfs by Taylor expansion, see Lemma 5.2.
Prior works have also considered probabilistic arguments exploiting basic couplings (under which the second class particles arise) in the S6V. Similar couplings arise in the colored six vertex models which were studied algebraically in depth by Borodin and Wheeler in [17] (in which different arrows are given different colors, indicating their priority over other arrows). Basic couplings leading to multi-class S6V models were introduced by Aggarwal in [3] (and one of the couplings we use falls into these), and were used by Lin to classify stationary distributions of the S6V [37]. Our methods thus add to a growing body of probabilistic approaches to the S6V.
Finally, a careful examination of Aggarwal's results [1] on convergence of the six vertex model to ASEP reveals that the stated results in that paper can be strengthened to estimates on the tail of the current distribution and second class particle in ASEP from the corresponding bounds we obtain for the S6V model.
### Step initial data
It is also possible to derive an upper bound for the upper tail for the case of step initial data by a simple monotonicity argument combined with the Rains-EJS formula. That is, there is a simple coupling in which the height function of any step-Bernoulli initial condition dominates the height function of step initial condition. Optimization over the Bernoulli parameters then gives an upper bound which in fact recovers the correct constant infront of the \(u^{3/2}\) term in the tail estimate.
By step initial data for the S6V model we mean that every vertex along the \(x\)-axis has an incoming arrow, and there are no incoming arrows along the \(y\)-axis. We will denote this height function by \(H^{(s)}(x,y)\). Similarly, the current of the ASEP with step initial condition (i.e., particles starting at every site \(i\) with \(i>0\)) will be denoted by \(J_{t}^{(s)}(x)\). Denote,
\[\mathcal{H}(x,y) :=-\frac{1}{\delta_{1}-\delta_{2}}\left(\sqrt{y(1-\delta_{1})}- \sqrt{x(1-\delta_{2})}\right)^{2}\] \[\sigma(x,y)^{3} :=\frac{\sqrt{xy}}{\kappa(\kappa^{-1/2}-\kappa^{1/2})^{3}}\left( 1-\sqrt{\frac{y\kappa}{x}}\right)^{2}\left(1-\sqrt{\frac{x\kappa}{y}}\right) ^{2}. \tag{2.16}\]
as well as
\[\mathcal{J}_{t}(x,y) :=-\frac{t}{4(L-R)}\left(\frac{x}{t}-(L-R)\right)^{2}\] \[\nu_{t}(x)^{3} :=\frac{t}{16(L-R)^{3}}\left((L-R)^{2}-\frac{x^{2}}{t^{2}}\right) ^{2}. \tag{2.17}\]
**Theorem 2.7**.: _Let \(1>\delta_{1}>\delta_{2}>0\). Suppose there is an \(\mathfrak{a}>0\) so that \(\kappa>\mathfrak{a}\) and_
\[\kappa+a\mathfrak{a}(1-\kappa)\leq\frac{y}{x}\leq\frac{1-(1-\kappa)\mathfrak{ a}}{\kappa}. \tag{2.18}\]
_Assume \(y(1-\kappa)\geq 10\). Then there are constants \(C,c>0\) depending only on \(\mathfrak{a}>0\) so that for any \(0<u<c(y(1-\kappa))^{2/3}\) we have,_
\[\mathbb{P}\left[\frac{H(x,y)-\mathcal{H}(x,y)}{\sigma(x,y)}>u\right]\leq\exp \left(-\frac{4}{3}u^{3/2}+C\frac{u^{2}}{(y(1-\kappa))^{1/3}}\right) \tag{2.19}\]
_For the ASEP, fix \(L>R>0\). Assume there is an \(\mathfrak{a}>0\) so that \(|x|\leq(1-\mathfrak{a})(L-R)t\). There are constants \(C,c>0\) depending only on \(L,R\) and \(\mathfrak{a}\) so that for \(0\leq u\leq ct^{2/3}\) we have_
\[\mathbb{P}\left[\frac{J_{t}(x)-\mathcal{J}_{t}(x)}{\nu_{t}(x)}>u\right]\leq \exp\left(-\frac{4}{3}u^{3/2}+C\frac{u^{2}}{t^{1/3}}\right) \tag{2.20}\]
**Remark.** The quantities in the probabilities on the LHS of (2.19) and (2.20) are known to converge to the Tracy-Widom distribution which has a tail behaving like \(\mathrm{e}^{-\frac{4}{3}u^{3/2}}\) and so the above estimates recover the optimal constant.
### Other notation and terminology
For two positive quantities \(f,g\) depending on some parameters \(i\in\mathcal{I}\), where \(\mathcal{I}\) is some abstract index set, we say that \(f(i)\asymp g(i)\) if there is a constant \(c_{1}>0\) so that \(c_{1}f(i)\leq g(i)\leq c_{1}^{-1}f(i)\) for all \(i\in\mathcal{I}\).
We will use the notation \(c,C\) to denote small or large constants whose value can change from line to line.
For two stochastic six vertex models \(\xi\) and \(\eta\) we use the notation \(\xi\geq\eta\) if at every vertex, whenever \(\eta\) contains an incoming horizontal and/or vertical arrow, so does \(\xi\). We use a similar notational convention \(\xi_{0}\geq\eta_{0}\) if the wherever there is an incoming arrow on the boundary in \(\eta_{0}\), there is also one in \(\xi_{0}\).
We also use the notation \(\mathbb{Z}_{>}:=\{n\in\mathbb{Z}:n>0\}\), \(\mathbb{Z}_{\geq}:=\{n\in\mathbb{Z}:n\geq 0\}\) as well as
\[\Delta_{xy}:=\{(i,j)\in\mathbb{Z}_{>}^{2}:i\leq x,j\leq y\}. \tag{2.21}\]
Labelling points in \(\Delta_{xy}\) by \((i,j)\) we will refer to the edge \(\{(i,j):i=1\}\) (resp., \(\{i=x\}\), \(\{j=1\}\), \(\{j=y\}\)) as the western (resp., eastern, southern, northern) boundaries of \(\Delta_{xy}\). We say that a directed path of a six vertex model exits out the northern boundary of the box \(\Delta_{xy}\) if it crosses the horizontal line \(\{(i,j):j=y+\frac{1}{2}\}\) to the left of the vertical line \(\{(i,j):i=x+\frac{1}{2}\}\). Similarly, we say that it exits out the eastern boundary if it crosses the vertical line \(\{(i,j):i=x+\frac{1}{2}\}\) below the horizontal line \(\{(i,j):j=y+\frac{1}{2}\}\). Note that with this convention paths exit \(\Delta_{xy}\) out either the northern or eastern boundaries.
We will often consider the stochastic six vertex model with initial data given by independent Bernoulli random variables (we have already introduced \((b_{1},b_{2})\) two-sided Bernoulli initial data). The parameters \(a_{1},a_{2}\) and \(b_{1},b_{2}\) will always denote the parameters of Bernoulli initial data, with \(a_{1},b_{1}\) for \(y\)-axis and \(a_{2},b_{2}\) for \(x\)-axis. Sometimes not all of the boundary data will be Bernoulli with one parameter; sometimes an axis might have a mixture (still denoted using \(a_{i}\) or \(b_{i}\)) or a few vertices may deterministically always have or have no incoming arrow. For \(a_{i},b_{i}\) we introduce
\[\alpha_{i}:=\frac{a_{i}}{1-a_{i}},\qquad\beta_{i}:=\frac{b_{i}}{1-b_{i}}. \tag{2.22}\]
We always assume \(a_{i},b_{i}\in(0,1)\). Whenever we write \(a_{i},b_{i}\) anywhere, it is always assumed that \(\alpha_{i},\beta_{i}\) have been introduced as above. This notation is natural due to the stationarity condition (2.3) and will simplify various expressions arising in calculations below.
### Organization of remainder of paper
In Section 3 we summarize the equilibrium properties of the S6V model. That is, the stationarity properties observed in [2] as well as the version of the EJS-Rains identity applicable to the S6V model. In Section 4 we introduce the various couplings between S6V models with different initial data. We also introduce second class particles or arrows in the context of the S6V which play a key role in our arguments, as well as various couplings representing equivalent ways of generating the second class particles. In Section 5 we prove tail estimates for second class particles, and then in Section 6, use these estimates to prove tail estimates for height functions. Our main results for the S6V, Theorem 2.3, is proven in Section 6.6. Section 7 contains the proofs of our results for the ASEP, Theorems 2.4 and 2.5, which are obtained via degeneration of the S6V model. Theorem 2.7 on step initial data is proven in Section 8. Appendices collect various auxiliary results.
## 3 Equilibrium properties of the six vertex model
As mentioned above, under the condition (2.3) the stochastic six vertex model enjoys various translation invariance properties. To state these, for any \((x,y)\in\mathbb{Z}_{>}^{2}\) introduce the random variables \(\varphi^{(h)}(x,y)\) and \(\varphi^{(v)}(x,y)\) that are the indicator functions of whether the vertex \((x,y)\) contains an incoming horizontal or vertical arrow, respectively.
The following is Lemma A.2 of [2].
**Lemma 3.1**.: _Consider the stochastic six vertex model with \(\delta_{1},\delta_{2}\in(0,1)\) and \(b_{1},b_{2}\in(0,1)\) satisfying (2.3). For any \((x,y)\) the random variables_
\[\{\varphi^{(h)}(x,i):i\geq y\}\cup\{\varphi^{(v)}(i,y):i\geq x\} \tag{3.1}\]
_are mutually independent. Moreover, the \(\varphi^{(h)}\) are Bernoulli with probability \(b_{1}\) and the \(\varphi^{(v)}\) are Bernoulli with probability \(b_{2}\)._
Consider now the box \(\Delta_{xy}=\{(i,j)\in\mathbb{Z}_{>}^{2}:i\leq x,j\leq y\}\). We define the number of arrows entering the box \(\Delta_{xy}\) along the west and south boundaries by,
\[W(x,y):=\sum_{j=1}^{y}\varphi^{(h)}(1,j),\qquad S(x,y):=\sum_{i=1}^{x}\varphi^{ (v)}(i,1). \tag{3.2}\]
We define the number of arrows exiting the box \(\Delta_{xy}\) along the east and north boundaries by,
\[E(x,y):=\sum_{j=1}^{y}\varphi^{(h)}(x+1,j),\qquad N(x,y):=\sum_{i=1}^{x}\varphi ^{(v)}(i,y+1). \tag{3.3}\]
With the terminology introduced in Section 2.7, any path exiting \(\Delta_{xy}\) along the eastern or northern boundary contributes to \(E(x,y)\) or \(N(x,y)\), respectively.
For the height function we have that,
\[H(x,y)=E(x,y)-S(x,y)=W(x,y)-N(x,y). \tag{3.4}\]
Using the above representation, one can derive the following lemma. It is the analogue of a similar formula first derived for exponential last passage percolation by Rains [42] and re-introduced by Emrah, Janjigian and Seppalainen [27].
**Lemma 3.2**.: _Let \(\delta_{1},\delta_{2}\in(0,1)\) and let \(a_{1},a_{2}\in(0,1)\) and \(\varepsilon\in\mathbb{R}\) satisfy,_
\[\mathrm{e}^{\varepsilon}\frac{a_{1}}{1-a_{1}}=\frac{1-\delta_{1}}{1-\delta_{2} }\frac{a_{2}}{1-a_{2}}. \tag{3.5}\]
_Then,_
\[\mathbb{E}\left[\exp\left(\varepsilon H^{(a_{1},a_{2})}(x,y)\right)\right]=( \mathrm{e}^{\varepsilon}a_{1}+(1-a_{1}))^{y}(\mathrm{e}^{-\varepsilon}a_{2}+(1 -a_{2}))^{x}. \tag{3.6}\]
**Proof.** We write \(H^{(a_{1},a_{2})}(x,y)=W(x,y)-N(x,y)\). Let \(X_{p}\) denote a Bernoulli random variable with \(\mathbb{P}[X_{p}=1]=p\). Clearly for \(f:\{0,1\}\to\mathbb{C}\),
\[\mathbb{E}[\mathrm{e}^{\varepsilon X_{p}}f(X_{p})]=(\mathrm{e}^{\varepsilon}p +(1-p))\mathbb{E}[f(X_{\hat{p}})] \tag{3.7}\]
where \(\hat{p}=\mathrm{e}^{\varepsilon}p/(1-p+\mathrm{e}^{\varepsilon}p)\). With \(\mathbb{E}^{(a,b)}\) denoting expectation of the six vertex model wrt Bernoulli \((a,b)\) initial data we see that,
\[\mathbb{E}\left[\exp\left(\varepsilon H^{(a_{1},a_{2})}(x,y) \right)\right] =\mathbb{E}^{(a_{1},a_{2})}\left[\exp\left(\varepsilon W(x,y)- \varepsilon N(x,y)\right)\right]\] \[=(\mathrm{e}^{\varepsilon}a_{1}+(1-a_{1}))^{y}\mathbb{E}^{(\hat{ a}_{1},a_{2})}\left[\exp\left(-\varepsilon N(x,y)\right)\right], \tag{3.8}\]
where \(\hat{a}_{1}=\mathrm{e}^{\varepsilon}a_{1}/(1-a_{1}+\mathrm{e}^{\varepsilon}a_ {1})\). A short calculation shows that,
\[\frac{\hat{a}_{1}}{1-\hat{a}_{1}}=\frac{1-\delta_{1}}{1-\delta_{2}}\frac{a_{2 }}{1-a_{2}}. \tag{3.9}\]
By Lemma 3.1, under \(\mathbb{E}^{(\hat{a}_{1},a_{2})}\), \(N(x,y)\) is a sum of \(x\) iid Bernoulli \(a_{2}\) random variables. Therefore,
\[\mathbb{E}^{(\hat{a}_{1},a_{2})}\left[\exp\left(-\varepsilon N(x,y)\right) \right]=(\mathrm{e}^{-\varepsilon}a_{2}+(1-a_{2}))^{x} \tag{3.10}\]
and the claim follows.
**Remark.** Formally taking the limit to the ASEP using the scaling in [1] gives,
\[\mathbb{E}\left[\exp\left(\eta J_{t}(x)^{(a_{1},a_{2})}\right)\right]=\exp \left(x(\log(1-a_{2})-\log(1-a_{1}))+t(R-L)(a_{2}-a_{1})\right) \tag{3.11}\]
where \(\mathrm{e}^{\eta}\frac{a_{1}}{1-a_{1}}=\frac{a_{2}}{1-a_{2}}\), and \(J_{t}(x)^{(a_{1},a_{2})}\) denotes the current in the ASEP with step Bernoulli initial data (i.e., Bernoulli \(a_{1}\) for sites \(x\leq 0\) and \(a_{2}\) for sites \(x>0\)). In fact, we expect that this formula can be justified rigorously using the exponential estimates derived in the proof of Corollary 4 of [1].
A straightforward corollary of Lemma 3.1 is,
**Corollary 3.3**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and \(0<b_{i}<1\) satisfy (2.3). Then,_
\[\mathbb{E}\left[H^{(b_{1},b_{2})}(x,y)\right]=yb_{1}-b_{2}x. \tag{3.12}\]
## 4 Couplings and second class particles
In this section, we collect some couplings between stochastic six vertex models with different boundary data. We then introduce the notion of second class particles/arrows in the stochastic six vertex model. These are natural analogs of the second class particles in interacting particle systems, such as the ASEP, in that they are discrepancies between two coupled samples of the stochastic six vertex model with different boundary data.
### Basic coupling
The basic coupling is a natural coupling between samples of interacting particle systems where the initial data of one dominates the other. In this section we review basic couplings for the stochastic six vertex model. Such couplings have been considered before, see, e.g., [3].
Let us suppose we have two boundary data \(\xi_{0}\) and \(\eta_{0}\). The basic coupling is constructed under the assumption that one of the boundary data dominates the other, i.e. \(\eta_{0}\leq\xi_{0}\). That is, we assume that if \(\eta_{0}\) contains a horizontal or vertical incoming arrow to some vertex \(\{(i,j)\in\mathbb{Z}_{>}^{2}:i=1\text{ or }j=1\}\), then so does \(\xi_{0}\). In the basic coupling, this domination is preserved at all other vertices in the quadrant.
The boundary data \(\xi_{0}\) and \(\eta_{0}\) are extended to six vertex path ensembles on the full plane, \(\xi\) and \(\eta\), as follows. For each vertex \((i,j)\in\mathbb{Z}_{>}^{2}\), sample Bernoulli random variables \(v_{ij}\) and \(h_{ij}\) with probabilities \(\delta_{1}\) and \(\delta_{2}\), respectively. For a given fixed vertex \((i,j)\), \(v_{ij}\) and \(h_{ij}\) can be correlated, but the pairs of Bernoullis should be independent between different vertices.
Then, assign configurations to each of the vertices in the order as described above in Section 2.1, i.e., to each \(\mathbb{T}_{n}\) successively using the results of \(h_{ij}\) and \(v_{ij}\) as appropriate (i.e., use them to decide the direction of an outgoing arrow in the case that there is only a single incoming horizontal or vertical arrow to \((i,j)\)). The point is to use the same \((h_{ij},v_{ij})\) to generate both \(\xi\) and \(\eta\). That is, if an ensemble has been constructed up to some vertex \((i,j)\) and contains only an incoming vertical arrow to this vertex, we allow that arrow to exit vertically iff \(v_{ij}=1\) and to the right otherwise. For arrows incoming horizontally, use \(h_{ij}\) instead.
By induction, one sees that under the assumption that \(\xi_{0}\) dominates \(\eta_{0}\), so do the full quadrant configurations \(\xi\) and \(\eta\). That is, any edge that appears in \(\xi\) appears also in \(\eta\), i.e., \(\xi\leq\eta\).
### Second class particles
Second class particles in the S6V model are constructed as follows. Suppose we have two S6V models \(\xi\) and \(\eta\) that are coupled together in the basic coupling described in the previous section; in particular, \(\xi\geq\eta\) for all realizations. Then, color each edge present in \(\xi\) but not in \(\eta\) grey. The grey arrows now form an ensemble of non-crossing directed paths themselves, due to the fact that the particle number is conserved at each vertex. These are the second class particles. An example appears in Figure 3. For the height functions we have,
\[H(x,y;\xi)=H(x,y;\eta)+H(x,y;\xi-\eta) \tag{4.1}\]
where \(H(x,y;\xi-\eta)\) refers to the net flux of the grey arrows across the line segment connecting \((0,0)\) to \((x,y)\).
A common situation we will consider is the following. Let \(\eta_{0}^{+}\) be \((b_{1},b_{2})\) two-sided Bernoulli initial data except that we guarantee an incoming arrow from \((1,0)\) to \((1,1)\). Let \(\eta_{0}^{-}\) denote the same boundary data except that there is no incoming arrow from \((1,0)\) to \((1,1)\). Using the basic coupling described above we extend these to ensembles \(\eta^{+}\geq\eta^{-}\). There will be a single second class particle entering the quadrant from \((1,0)\) to \((1,1)\) and we will be interested in the behavior of its exit point from the box \(\Delta_{xy}\). We will also consider this construction where the single second class particle enters from a different location on the boundary. We discuss couplings in this case in more detail in the next section.
Due to the Markovian nature of the update rules of the stochastic six vertex model, there are other distributionally equivalent ways of constructing second class particles. Here is one such example, others will be introduced later where needed. This mirrors the discussion of the ASEP in Section 2.3.1.
First, given a boundary condition \(\eta_{0}\), we choose some empty locations for grey arrows, \(\chi_{0}\). Generate a stochastic six vertex model \(\eta\) from the boundary conditions for \(\eta_{0}\). Now, let \(\chi\) be a path ensemble of second class particles generated according to the rules outlined in Figure 4. That is, if a grey arrow enters a vertex with no other arrows, then it evolves as if it were a black arrow, passing straight through horizontally with probability \(\delta_{2}\), and passing straight through vertically with probability \(\delta_{1}\) (and turning upwards, resp., rightwards on the complementary events). On the other hand, if a grey arrow enters a vertex that has already a black arrow, then it must take the remaining available path, i.e., the edge that the black path doesn't take. If two grey arrows enter, then two grey arrows leave along different edges.
The path ensemble \(\xi\) generated by taking the union of the black and grey arrows (and forgetting the colors) is also a stochastic six vertex model with initial condition the union of \(\chi_{0}\) and \(\eta_{0}\). Moreover, it dominates \(\eta\).
### Stochastic six vertex model with a single second class particle
We single out some special classes of S6V models with a single second class particle. Let \(v_{0}\) be a boundary vertex \(v_{0}\in\{(i,j)\in\mathbb{Z}_{\geq}^{2}:i=0\text{ or }j=0\}\backslash\{(0,0)\}\). Let \(\xi_{0}\) be a boundary data configuration \(\xi_{0}\) (deterministic or random) that almost surely has no incoming arrow emanating from \(v_{0}\). The S6V model with second class particle starting at \(v_{0}\) and boundary data \(\xi_{0}\) is the S6V model \(\xi\) together with a grey directed path \(Q\) starting at \(v_{0}\) that is generated as in the previous section. For example, one can couple the S6V models for boundary data \(\xi_{0}\) to \(\xi_{0}^{+}\) (where \(\xi_{0}^{+}\) is the same as \(\xi_{0}\) but now we add an arrow incoming from \(v_{0}\)) through the basic coupling, and then \(Q\) is the discrepancy between the two path ensembles. Alternatively, one can sample \(\xi\), and then generate \(Q\) by allowing it to evolve as described in the previous
Figure 3: An example of a six vertex directed path ensemble with second class particles. The boundary data \(\eta_{0}\) with less particles has incoming arrows to the vertices \((2,1)\) and \((4,1)\). The boundary data \(\xi_{0}\) includes also an incoming arrow at \((1,2)\). The ensemble \(\eta\) is given by only the black arrows while \(\xi\) is the union of the black and grey arrows.
section using the rules of Figure 4 (i.e., when it encounters a vertex where only it is present, it evolves as a black arrow would, and when it encounters a vertex with an incoming (and necessarily outgoing) black arrow, it takes the remaining outgoing edge).
We single out one final useful construction of a S6V model with second class particle starting at \(v_{0}\) and boundary data \(\xi_{0}\). First, let \(\xi_{0}^{+}\) be the augmented boundary data as above, with an incoming arrow emanating from \(v_{0}\). Sample a stochastic six vertex model \(\xi^{+}\) with this boundary data. Now, beginning at \(v_{0}\), we sample an _anti-particle_ random walk on the black arrows of \(\xi^{+}\). That is, we will construct a path \(Q\) that follows along an existing sequence of the black arrows in \(\xi^{+}\).
The path \(Q\) begins by following the directed path starting at \(v_{0}\), coloring it grey along the way. When it encounters a vertex where it is the only incoming arrow (and it necessarily runs along this arrow), it takes the only outgoing arrow available to it. When it encounters a vertex where the other incoming edge is occupied by a black arrow, it has the option to switch, as there are necessarily two outgoing black arrows. If the grey arrow \(Q\) arrives from the left, it exits out horizontally to the right with probability \(\delta_{1}\) and exits upwards with probability \(1-\delta_{1}\). If it enters from the bottom, it exits upwards vertically with probability \(\delta_{2}\) and exits out the right with probability \(1-\delta_{2}\). These transitions are depicted in Figure 5.
The resulting ensemble formed from the remaining black edges of \(\xi^{+}\) and the grey path \(Q\) has the same distribution as the stochastic six vertex model with boundary data \(\xi_{0}\) and second class particle originating at \(v_{0}\).
### Microscopic concavity coupling
In this section we introduce a final important coupling, which we call the microscopic concavity coupling. In this construction, we have two boundary configurations, one denser than the other, with each having a second class particle starting at the same initial vertex. In the microscopic concavity coupling, the second class particle of the denser system will stay to the right of the second class particle in the sparser one. As will be seen in the construction, we will require \(1>\delta_{1}>\delta_{2}\geq 0\) and, most critically, \(\delta_{2}\leq\frac{1}{2}\).
Figure 4: Top row: weights associated to evolution of second class particle in absence of incoming black arrows at same vertex. Second row: deterministic evolution of second class arrow encountering vertex with an incoming black arrow. The grey arrow deterministically fills the empty outgoing edge. The cases where the grey arrow is incoming vertically from the south are similar.
A construction of a microscopic concavity coupling for the ASEP was first produced by Balazs and Seppalainen in [11]. The importance of this construction was then realized in a follow-up work also with Komjathy [10], in which the construction was also generalized to other interacting particle systems. Our construction uses some of the main ideas of [11].
Let us fix two boundary configurations \(\xi_{0}\) and \(\eta_{0}\) and a distinguished boundary vertex \(v_{0}=(i_{0},j_{0})\) such that either \(i_{0}=0\) or \(j_{0}=0\) (but not both). We will assume that \(\xi_{0}\) contains an incoming arrow emanating from \(v_{0}\) but \(\eta_{0}\) does not. Moreover, we assume that \(\xi_{0}\geq\eta_{0}\). Let \(\xi_{0}^{-}\) be the boundary condition \(\xi_{0}\) except we remove the particle entering from \(v_{0}\).
In this section we construct a coupling of the stochastic six vertex models \(\xi^{-}\) and \(\eta\) together with second class particles starting at \(v_{0}\) such that the second class particle of the denser system \(\xi^{-}\) stays to the right of the second class particle of \(\eta\). That is, for each \(n\), the second class particle of \(\xi^{-}\) intersects the line \(\{(i,j):i+j=n\}\) weakly to the southeast of \(\eta\)'s second class particle (weakly just means they may intersect at the same vertex).
First, construct the stochastic six vertex models \(\xi\) and \(\eta\) through the basic coupling as described above in Section 4.1. Consider the ensemble of non-colliding paths resulting from the edges that are in \(\xi\) but not in \(\eta\). Note that one path starts from the vertex \(v_{0}\). Label these non-colliding paths consecutively by integers \(\mathbb{Z}\) with the path starting at \(v_{0}\) labelled by \(0\), with negative paths to the left or northwest of the positive paths.
The second class particles for \(\xi^{-}\) and \(\eta\) will be constructed by performing a random walk on these particle labels. The result will be two directed paths traced out along the grey edges that will give the required paths of the second class particles. We will denote these labels by \(a(n)\) and \(b(n)\), so that \(a\) corresponds to \(\xi^{-}\) and \(b\) to \(\eta\). Each of these labels indicates which path of the grey paths the second class particle uses to get from the line \(\{(x,y):x+y=n\}\) to the line \(\{(x,y):x+y=n+1\}\). We need to ensure that \(a(n)\geq b(n)\) for all \(n\), and that \(a(n)\) traces out an _anti-particle random walk_ and that \(b(n)\) traces out a _particle random walk_.
Let \(n_{0}\) be the line such that \(v_{0}\in\{(x,y):x+y=n_{0}\}\). By convention, \(a(n_{0})=b(n_{0})=0\). The labels are constructed as follows. First, if the vertex that the arrow specified by \(a(n)\)
Figure 5: Antiparticle construction of second class particle. Top row indicates probabilities of exiting eastwards or upwards when the second class particle encounters a vertex with an additional incoming black arrow and, necessarily, two outgoing black arrows. Bottom row: deterministic evolution when second class particle encounters a vertex with no additional incoming arrow; it simply follows the outgoing arrow. Cases where second class particle enters from the south are similar.
arrives at in the line \(\{(x,y):x+y=n+1\}\) contains only a single incoming grey arrow (and one or zero incoming black arrows), then we set \(a(n+1)=a(n)\). Similarly, if the vertex that the arrow specified by \(b(n)\) arrives at in the line \(\{(x,y):x+y=n+1\}\) contains only a single incoming grey arrow (and one or zero incoming black arrows), then we set \(b(n+1)=b(n)\).
Next, if the vertex that the arrow specified by \(a(n)\) arrives at in the line \(\{(x,y):x+y=n+1\}\) contains two grey arrows and the other arrow is not specified by \(b(n)\), then there are two cases, whether \(a(n)\) arrives from the left or from below. If it arrives from the left, then we set \(a(n+1)=a(n)+1\) with probability \(\delta_{1}\) and \(a(n+1)=a(n)\) otherwise. If it arrives from the bottom, then we set \(a(n+1)=a(n)-1\) with probability \(\delta_{2}\) and \(a(n+1)=a(n)\) otherwise. The weights are depicted in Figure 6.
If the vertex that the arrow specified by \(b(n)\) arrives at in the line \(\{(x,y):x+y=n+1\}\) contains two grey arrows and the other arrow is not specified by \(a(n)\), then there are two cases, whether \(b(n)\) arrives from the left or from below. If it arrives from the left, then we set \(b(n+1)=b(n)+1\) with probability \(\delta_{2}\) and \(b(n+1)=b(n)\) otherwise. If it arrives from the bottom, then we set \(b(n+1)=b(n)-1\) with probability \(\delta_{1}\) and \(b(n+1)=b(n)\) otherwise.
The last case is when \(a(n)\) and \(b(n)\) enter the same vertex. If there is only one entering grey arrow, then set \(a(n+1)=b(n+1)=a(n)=b(n)\). If there are two grey arrows, then there are three subcases (by induction we may assume \(a(n)\geq b(n)\)):
1. They both come from the left. With probability \(1-\delta_{1}\) send them both up. With probability \(\delta_{1}-\delta_{2}\) send \(a(n+1)\) to the right and \(b(n+1)\) up. With probability \(\delta_{2}\) send them both to the right.
2. They both come from the bottom. With probability \(\delta_{2}\), send them both up. With probability \(\delta_{2}\) send them both up. With probability \(\delta_{2}\) send them both to the right.
Figure 6: Random evolution of the walk \(a(n)\) if it arrives at a vertex containing an additional grey arrow that is not \(b(n)\). In all diagrams, the path coming from the left and exiting north is labelled \(k\) and the path coming from the bottom and exiting to the right is labelled by \(k+1\). Top row contains evolution when \(a(n)\) arrives from the left, corresponding to weights \(\delta_{1},1-\delta_{1}\). Bottom row contains the case when \(a(n)\) arrives from the south, corresponding to weights \(\delta_{2},1-\delta_{2}\). The evolution for \(b(n)\) is identical, except we switch \(\delta_{1}\) and \(\delta_{2}\) (resp., \(1-\delta_{1}\), \(1-\delta_{2}\)).
probability \(\delta_{1}-\delta_{2}\) send \(a(n+1)\) to the right and \(b(n+1)\) up. With probability \(1-\delta_{1}\) send them both to the right.
3. Now \(b\) comes from the left, \(a\) comes from the bottom. With probability \(\delta_{2}\) send them both up. With probability \(1-2\delta_{2}\) send \(b(n+1)\) up and \(a(n+1)\) to the right. With probability \(\delta_{2}\) send them both to the right.
The above update rules are summarized in graphical form in Figure 7. We now verify that the above coupling gives the correct distribution. First, consider \(b(n)\). By marginalizing out \(a(n)\), we see that the path traced by \(b(n)\) indeed evolves as a second-class particle: when it encounters no black arrow at a vertex, its path passes straight through the vertex with probability \(\delta_{1}\) or \(\delta_{2}\) depending on whether it came from the bottom or the left, respectively. Otherwise, it avoids the black arrow.
It is easiest to check that \(a(n)\) generates a second class particle for \(\xi\) by verifying that it produces an _anti-particle random walk_ amongst the black paths of \(\xi\). That is, label the non-intersecting paths of \(\xi\) by integers \(\mathbb{Z}\) with the convention that the path starting at \(v_{0}\) is \(0\). Let \(A(n)\) denote the label of the path in \(\xi\) that the path specified by \(a(n)\) uses to move from the line \(\{(x,y):x+y=n\}\) to \(\{(x,y):x+y=n+1\}\). We claim that \(A(n)\) evolves as follows.
First, if \(A(n)\) enters a vertex with only one path, then \(A(n+1)=A(n)\) (i.e., it is labelling
Figure 7: Weights associated to case when particles labelled by \(a(n)\) and \(b(n)\) meet at the same vertex. Top row describes the case when they both arrive from the left; the middle when they both arrive from the south; and the final row where they arrive from different directions. We have omitted the arguments and wrote \(a=a(n)\), etc., in the interest of brevity.
the only path entering the vertex). If enters a vertex with two arrows from the left, then \(A(n+1)=A(n)+1\) with probability \(\delta_{1}\) and \(A(n+1)=A(n)\) with probability \(1-\delta_{1}\) (i.e., it passes straight through the vertex with probability \(\delta_{1}\) and turns with the complementary probability).
If it enters a vertex with two arrows from the bottom, then \(A(n+1)=A(n)-1\) with probability \(\delta_{2}\) and \(A(n+1)=A(n)\) with probability \(1-\delta_{2}\) (i.e., it passes straight through the vertex with probability \(\delta_{2}\) and turns with the complementary probability). These dynamics describe the anti-particle random walk, as discussed in Section 4.3 and graphically described in Figure 5. Therefore, once we show that \(A(n)\) evolves in this way, the ensemble \(\xi_{-}\) together with the grey path formed by \(A(n)\) specifies a stochastic six vertex model with a second class particle starting from \(v_{0}\).
We finally now check that \(a(n)\) indeed generates the required anti-particle random walk. First, if \(a(n)\) encounters another grey arrow (in the ensemble \(\eta\) together with its second class particles), then clearly by considering the cases above, it switches labels with the required probability. If it encounters another black arrow (that is, an arrow present in \(\eta\) and not in \(\xi\)), then probability that it switches paths in \(\xi\) is determined only by whether or not the single incoming arrow in \(\eta\) switches directions or not. This verifies that \(a(n)\) indeed is a second class particle for \(\xi^{-}\), and we conclude the following.
**Proposition 4.1**.: _Let \(\xi_{0}^{-}\) and \(\eta_{0}\) be two boundary configurations for the stochastic six vertex model. Assume \(1>\delta_{1}>\delta_{2}\geq 0\) and \(\delta_{2}\leq\frac{1}{2}\). Let \(v_{0}\) be a boundary vertex such that neither \(\xi_{0}^{-}\) or \(\eta_{0}\) contain an arrow incoming from \(v_{0}\). Assume \(\xi_{0}^{-}\geq\eta_{0}\). Then there is a coupling between the stochastic six vertex models \(\xi^{-}\) and \(\eta\) such that the second class particles \(Q_{\xi^{-}}\) and \(Q_{\eta}\) for these ensembles have the property that \(Q_{\xi^{-}}\) intersects the line \(\{(x,y):x+y=n\}\) weakly to the southeast of \(Q_{\eta}\) for every \(n\), almost surely._
## 5 Tail estimates for second class particles
In this section we prove various tail estimates for second class particles for stochastic six vertex models with two-sided Bernoulli initial data. Section 5.1 contains estimates in the moderate deviations regime. Section 5.2 contains the large deviations regime.
### Estimates for second class particles starting from origin
In this section we will consider the S6V model with two-sided Bernoulli data. As indicated in Section 2.7, we will use the notation \(a_{1},a_{2},b_{1},b_{2}\) to denote the parameters in the boundary data, and introduce \(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\) through (2.22).
Our strategy on bounding the second class particle position is as follows. We compare the event that the second class particle exits the box \(\Delta_{xy}\) along, say, the north boundary, to the height functions of some coupled S6V models. This is Proposition 5.1. We then optimize over the parameters of the coupled S6V models. This takes places in Lemma 5.2 and Proposition 5.3. We then repeat the arguments for the eastern boundary instead of the northern boundary.
As stated, the following compares exit points of second class particles to differences of height functions. Throughout this section we will introduce the parameter,
\[\theta:=\frac{\delta_{1}\wedge 0.5-\delta_{2}}{\delta_{1}\wedge 0.5+\delta_{2}}. \tag{5.1}\]
**Proposition 5.1**.: _Let \(b_{1},b_{2}\in(0,1)\) and let \(1>\delta_{1}>\delta_{2}>0\). Let \(\delta_{2}<\frac{1}{2}\). Let \(v_{0}\) be either the vertex \((1,0)\) or \((0,1)\). Let \(\xi_{0}\) be \((b_{1},b_{2})\) two-sided Bernoulli initial data, except that there is no arrow incoming from \(v_{0}\). Let \(Q\) be the second class particle emanating from \(v_{0}\), and let \(\mathcal{N}\) be the event it exits the box \(\Delta_{xy}\) along its northern boundary._
_Let \(a_{1},a_{2}\) satisfy \(0<a_{1}<b_{1}\) and \(0<a_{2}<b_{2}\). Let \(1\geq\varepsilon>0\) and \(k\geq 0\) be an integer. Let \(\theta\) be as in (5.1). Then,_
\[\mathbb{P}[\mathcal{N}]\leq\mathrm{e}^{-\theta k}+\mathrm{e}^{2}\mathrm{e}^{ \varepsilon k}\mathbb{E}\left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2})}(x,y)} \right]^{1/2}\mathbb{E}\left[\mathrm{e}^{-\varepsilon H^{(b_{1},a_{2})}(x,y)} \right]^{1/2} \tag{5.2}\]
**Proof.** Let \(\eta_{0}\) be \((b_{1},a_{2})\) two-sided Bernoulli initial data except that there is no arrow incoming from \(v_{0}\). We can couple \(\eta_{0}\) and \(\xi_{0}\) so that \(\xi_{0}\) dominates \(\eta_{0}\).
Thanks to Proposition 4.1, we can couple the stochastic six vertex models \(\xi\) and \(\eta\) in such a way that second class particle \(Q_{\xi}\) starting at \(v_{0}\) is always to the right of the second class particle \(Q_{\eta}\). Therefore,
\[\mathbb{P}[Q_{\xi}\text{ exits along north boundary}]\leq\mathbb{P}[Q_{\eta} \text{ exits along north boundary}]. \tag{5.3}\]
We now proceed with a construction that is analogous to that given in the proof of Lemma 5.1 of [11].
Let now \(a_{1}<b_{1}\), and let \(\zeta_{0}\) be \((a_{1},a_{2})\) two-sided Bernoulli initial data except there is no arrow incoming from \(v_{0}\). Let \(\eta_{0}^{+}\) be \((b_{1},a_{2})\) two-sided Bernoulli initial data, except there is always an arrow incoming from \(v_{0}\). We may couple \(\eta_{0}^{+}\) and \(\zeta_{0}\) so that \(\eta_{0}^{+}\geq\zeta_{0}\). Note that along the southern boundary, the arrows incoming for \(\eta_{0}^{+}\) and \(\zeta_{0}\) coincide, except at \(v_{0}\) if it is along the southern boundary.
Couple the S6V models \(\zeta\) and \(\eta^{+}\) in the basic coupling so that \(\zeta\leq\eta^{+}\) and color the resulting discrepancies grey. In the argument that follows, we use these discrepancies to generate a S6V model with boundary data \(\eta_{0}\) and second class particle starting from \(v_{0}\). Note that this quantity appears in the RHS of (5.3). The construction given here will enable us to estimate this probability.
Label the non-intersecting grey paths by consecutive integers \(i\in\mathbb{Z}\). Use the convention that the 0th path starts from \(v_{0}\), and that paths to the right or southeast of this path are labelled by positive integers. We denote by \(X_{i}(n)\) the coordinate of the \(i\)th path along the line \(\{(k,j):k+j=n\}\). Then, paths \(X_{i}(n)\) and \(X_{j}(n)\) have the property that \(X_{i}(n)\) is to the northwest of \(X_{j}(n)\) if \(i<j\). We now generate a random walk \(a(n)\) on the labels of the \(X_{i}\) such that if one takes the ensemble \(\eta^{+}\) and colors grey the path traced out by \(a(n)\), then one obtains a stochastic six vertex model with boundary data \(\eta_{0}\) and second class particle starting at \(v_{0}\). Here, \(a(n)\) will denote the label of the path used to get from the line \(\{(i,j):i+j=n\}\) to the line \(\{(i,j):i+j=n+1\}\), so that the path is traced out by the edges \(\{(X_{a(n)}(n),X_{a(n)}(n+1)\}_{n}\).
Let \(n_{0}\) be such that \(v_{0}\in\{(i,j):i+j=n_{0}\}\) and set \(a(n_{0})=0\). Now, given the label \(a(n)\) we describe how to generate \(a(n+1)\). First, if there is only one incoming grey arrow to the vertex indicated by \(a(n)\), then set \(a(n+1)=a(n)\). If there are two incoming grey arrows to this vertex, then there are two cases:
* If the arrow indicated by \(a(n)\) enters from the left, then set \(a(n+1)=a(n)+1\) with probability \(\delta_{1}\) and \(a(n+1)=a(n)\) otherwise.
2. If the arrow indicated by \(a(n)\) enters from the bottom, then set \(a(n+1)=a(n)-1\) with probability \(\delta_{2}\) and \(a(n+1)=a(n)\) otherwise.
As in the proof of Proposition 4.1, this produces the desired distribution, a S6V model with boundary data \(\eta_{0}\) and second class particle starting at \(v_{0}\). This is due to the fact that the path traced out is an anti-particle random walk on the black paths of \(\eta^{+}\).
Now, the random walk \(a(n)\) described above is in the setting of Lemma A.1. Let \(n_{1}=x+y\). Therefore, \(\mathbb{P}[a(n_{1}-1)\leq-k]\leq\mathrm{e}^{-\theta k}\) and so,
\[\mathbb{P}\left[Q_{\eta}\text{ exits along north boundary}\right]\] \[\leq \mathrm{e}^{-\theta k}+\mathbb{P}\left[\left\{Q_{\eta}\text{ exits along north boundary}\right\}\cap\{a(n_{1}-1)\geq-k\}\right] \tag{5.4}\]
Let \(\hat{H}^{(a_{1},a_{2})}(x,y)\) and \(\hat{H}^{(b_{1},a_{2})}(x,y)\) denote the height functions for \(\zeta\) and \(\eta^{+}\), respectively (note that they are not exactly the height functions \(H^{(a_{1},a_{2})}\) and \(H^{(b_{1},a_{2})}\) of Definition 2.1 due the deterministic absence or presence of a particle at \(v_{0}\), so we use the \(\hat{H}\) notation). We have,
\[\hat{H}^{(b_{1},a_{2})}(x,y)=\hat{H}^{(a_{1},a_{2})}(x,y)+\Phi_{G}(x,y) \tag{5.5}\]
where \(\Phi_{G}(x,y)\) is the net flux of grey arrows across the line connecting \((0,0)\) to \((x,y)\). We now seek an upper bound for \(\Phi_{G}(x,y)\) on the event \(\{a(n_{1}-1)\geq-k\}\).
If the second class particle \(Q_{\eta}\) exits along the north boundary, then it intersects the line \(\{(i,j):i+j=n_{1}\}\) (recall that \(n_{1}\) is chosen so that \((x,y)\) lies along this line) to the left of the point \((x,y)\). Therefore, the path traced out by \(n\to X_{a(n_{1}-1)}(n)\) traces out a path amongst the non-colliding ensemble of grey arrows that crosses the line \(\{(i,j):i+j=n_{1}\}\) to the left/northwest of \((x,y)\). Every path \(X_{i}\) with index \(i<a(n_{1}-1)\) begins on the vertical axis and must also intersect the line \(\{(i,j):i+j=n_{1}\}\) to the left of \((x,y)\). Therefore, such paths cannot contribute to the flux \(\Phi_{G}(x,y)\). Therefore,
\[\Phi_{G}(x,y)\leq-a(n_{1}-1). \tag{5.6}\]
Therefore,
\[\mathbb{P}\left[Q_{\eta}\text{ exits along north boundary}\right]\leq \mathrm{e}^{-\theta k}+\mathbb{P}\left[\hat{H}^{(a_{1},a_{2})}(x,y)-\hat{H}^{ (b_{1},a_{2})}(x,y)\geq-k\right]. \tag{5.7}\]
The estimate of the proposition follows now by Markov's inequality, Cauchy-Schwarz and the fact that there are couplings of the \(\hat{H}^{(a,b)}\) to the equilibrium height functions \(H^{(a,b)}\) such that
\[|\hat{H}^{(a,b)}(x,y)-H^{(a,b)}(x,y)|\leq 1 \tag{5.8}\]
almost surely.
Let \(f(\beta)\) be defined by
\[f(\beta):=\frac{1}{24}\left(\frac{\beta-2(\beta)^{2}}{(1+\beta)^{3}}-3\frac{ \beta^{2}-\beta^{3}}{(1+\beta)^{4}}\right). \tag{5.9}\]
By Taylor expansion, we have for \(s>0\) the following two equalities:
\[\log(1+\mathrm{e}^{s}\beta)= \log(1+\beta)+s\frac{\beta}{1+\beta}+\frac{s^{2}}{2}\frac{\beta}{(1+ \beta)^{2}}+\frac{s^{3}}{6}\frac{\beta(1-\beta)}{(1+\beta)^{3}}+s^{4}f(\mathrm{ e}^{s*}\beta),\] \[c_{1}\log(1+\mathrm{e}^{s}\beta)+ c_{2}\log(1+\mathrm{e}^{s}\alpha)=c_{1}\log(1+\beta)+c_{2}\log(1+ \alpha)+s\left(c_{1}\frac{\beta}{1+\beta}+c_{2}\frac{\alpha}{1+\alpha}\right)\] \[+\frac{s^{2}}{2}\left(c_{1}\frac{\beta}{(1+\beta)^{2}}+c_{2}\frac {\alpha}{(1+\alpha)^{2}}\right)\] \[+\frac{s^{3}}{6}\left(c_{1}\frac{\beta(1-\beta)}{(1+\beta)^{3}}+ c_{2}\frac{\alpha(1-\alpha)}{(1+\alpha)^{3}}\right)+s^{4}\left(c_{1}f(\mathrm{e}^{s**} \beta)+c_{2}f(\mathrm{e}^{s**}\alpha)\right) \tag{5.10}\]
for some \(s_{*},s_{**}\in(0,s)\). Here \(c_{1},c_{2},\alpha,\beta\) are constants.
**Lemma 5.2**.: _Let \(\varepsilon>0\) and \(0<b_{1}<1\). Let,_
\[\alpha_{2}:=\mathrm{e}^{-\varepsilon}\beta_{1}\kappa^{-1},\qquad\alpha_{1}:= \mathrm{e}^{-2\varepsilon}\beta_{1}=\mathrm{e}^{-\varepsilon}\kappa\alpha_{2}. \tag{5.11}\]
_Then, recalling that \(a_{i},b_{i}\) and \(\alpha_{i},\beta_{i}\) are related via (2.22), we have,_
\[\log\left(\mathbb{E}\left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2}) }(x,y)}\right]\mathbb{E}\left[\mathrm{e}^{-\varepsilon H^{(b_{1},a_{2})}(x,y) }\right]\right)\] \[= \varepsilon^{2}\left(x\frac{\kappa^{-1}\alpha_{1}}{(1+\kappa^{-1 }\alpha_{1})^{2}}-y\frac{\alpha_{1}}{(1+\alpha_{1})^{2}}\right) \tag{5.12}\] \[+ \varepsilon^{3}\left(x\frac{\kappa^{-1}\alpha_{1}(1-\kappa^{-1} \alpha_{1})}{(1+\kappa^{-1}\alpha_{1})^{3}}-y\frac{\alpha_{1}(1-\alpha_{1})}{( 1+\alpha_{1})^{3}}\right)\] (5.13) \[+ \varepsilon^{4}2(yf(\alpha_{*})-xf(\kappa^{-1}\alpha_{*}))+ \varepsilon^{4}16(xf(\kappa^{-1}\alpha_{**})-yf(\alpha_{**})) \tag{5.14}\]
_for some \(\alpha_{*},\alpha_{**}\in(\alpha_{1},\beta_{1})\)._
**Proof.** This is a direct application of Lemma 3.2 and Taylor's theorem. We have,
\[\mathbb{E}\left[\mathrm{e}^{-\varepsilon H^{(b_{1},a_{2})}(x,y)}\right] =\left(\mathrm{e}^{-\varepsilon}b_{1}+(1-b_{1})\right)^{y}( \mathrm{e}^{\varepsilon}a_{2}+(1-a_{2}))^{x}\] \[=\left(\frac{1+\mathrm{e}^{-\varepsilon}\beta_{1}}{1+\beta_{1}} \right)^{y}\left(\frac{1+\mathrm{e}^{\varepsilon}\alpha_{2}}{1+\alpha_{2}} \right)^{x}. \tag{5.15}\]
Secondly,
\[\mathbb{E}\left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2})}(x,y)}\right] =(\mathrm{e}^{\varepsilon}a_{1}+(1-a_{1}))^{y}\left(\mathrm{e}^{ -\varepsilon}a_{2}+(1-a_{2})\right)^{x}\] \[=\left(\frac{1+\mathrm{e}^{\varepsilon}\alpha_{1}}{1+\alpha_{1}} \right)^{y}\left(\frac{\mathrm{e}^{-\varepsilon}\alpha_{2}+1}{1+\alpha_{2}} \right)^{x}. \tag{5.16}\]
Let,
\[F_{1}(\varepsilon):=2y\log(1+\mathrm{e}^{\varepsilon}\alpha_{1} )-2x\log(1+\mathrm{e}^{\varepsilon}\kappa^{-1}\alpha_{1})\] \[F_{2}(\varepsilon):=-y\log(1+\mathrm{e}^{2\varepsilon}\alpha_{1} )+x\log(1+\mathrm{e}^{2\varepsilon}\kappa^{-1}\alpha_{1}) \tag{5.17}\]
Expressing all variables in terms of \(\alpha_{1}\) and applying (5.10), we have,
\[\log\left(\mathbb{E}\left[\mathrm{e}^{-\varepsilon H^{(b_{1},a_{2}) (x,y)}}\right]\mathbb{E}\left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2})}(x,y)} \right]\right)\] \[= y\left(2\log(1+\mathrm{e}^{\varepsilon}\alpha_{1})-\log(1+ \mathrm{e}^{2\varepsilon}\alpha_{1})-\log(1+\alpha_{1})\right)\] \[+ x\left(\log(1+\mathrm{e}^{2\varepsilon}\alpha_{1}\kappa^{-1})+ \log(1+\kappa^{-1}\alpha_{1})-2\log(1+\mathrm{e}^{\varepsilon}\kappa^{-1} \alpha_{1})\right)\] \[= F_{1}(\varepsilon)+F_{2}(\varepsilon)-y\log(1+\alpha_{1})+x\log (1+\kappa^{-1}\alpha_{1})\] \[= y\left(-\varepsilon^{2}\frac{\alpha_{1}}{(1+\alpha_{1})^{2}}- \varepsilon^{3}\frac{\alpha_{1}(1-\alpha_{1})}{(1+\alpha_{1})^{3}}\right)\] \[+ x\left(\varepsilon^{2}\frac{\kappa^{-1}\alpha_{1}}{(1+\kappa^{- 1}\alpha_{1})^{2}}+\varepsilon^{3}\frac{\kappa^{-1}\alpha_{1}(1-\kappa^{-1} \alpha_{1})}{(1+\kappa^{-1}\alpha_{1})^{3}}\right)\] \[+ \frac{\varepsilon^{4}}{24}F_{1}^{(iv)}(\varepsilon_{*})+\frac{ \varepsilon^{4}}{24}F_{2}^{(iv)}(\varepsilon_{**}) \tag{5.18}\]
for some \(\varepsilon_{*},\varepsilon_{**}\in(0,\varepsilon)\). We have by direct calculation,
\[F_{1}^{(iv)}(\varepsilon_{*}) =48(yf(\mathrm{e}^{\varepsilon_{*}}\alpha_{1})-xf(\mathrm{e}^{ \varepsilon_{*}}\kappa^{-1}\alpha_{1}))\] \[F_{2}^{(iv)}(\varepsilon_{**}) =16\times 24\left(-yf(\mathrm{e}^{2\varepsilon**}\alpha_{1})+xf( \mathrm{e}^{2\varepsilon**}\kappa^{-1}\alpha_{1})\right) \tag{5.19}\]
and the claim follows.
The previous two results may now be combined to give an estimate on the tail of a second class particle.
**Proposition 5.3**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and let \(\delta_{2}<0.5\). Let \(y\geq 10\) and \(b_{1},b_{2}\in(0,1)\) satisfy (2.3). Assume there is a constant \(\mathfrak{a}>0\) so that \(\mathfrak{a}<b_{i}<1-\mathfrak{a}\), \(i=1,2\) and \(\kappa>\mathfrak{a}\). Let \(v_{0}\) be the vertex \((1,0)\) or \((0,1)\) and let \(\eta_{0}\) be \((b_{1},b_{2})\) two-sided Bernoulli initial data, except that there is no incoming arrow from \(v_{0}\). Consider the stochastic six vertex model with boundary data \(\eta_{0}\) and second-class particle starting at \(v_{0}\). Let \(X_{\eta}\) denote the \(x\)-coordinate of where the second class particle crosses the line \(\{(i,j):j=y+\frac{1}{2}\}\). Let_
\[x_{0}:=y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2}, \tag{5.20}\]
_and let \(x_{1}<x_{0}\), with \(x_{1}\in\mathbb{Z}\). Then, there are constants \(C,c,c_{1}>0\), depending only on \(\mathfrak{a}>0\) so that if,_
\[10\leq x_{0}-x_{1}\leq c_{1}y(1-\kappa) \tag{5.21}\]
_we have,_
\[\mathbb{P}\left[X_{\eta}<x_{1}\right]\leq C\left(\exp\left(-c\frac{\theta(x_{0 }-x_{1})^{2}}{y(1-\kappa)}\right)+\exp\left(-c\frac{(x_{0}-x_{1})^{3}}{y^{2}(1 -\kappa)^{2}}\right)\right) \tag{5.22}\]
_where \(\theta\) is as in (5.1)._
**Proof.** Let \(c_{1}>0\) be as in Lemma C.1 and \(x_{1}\) satisfy (5.21). Let \(0<\hat{\beta}_{1}<\beta_{1}\) solve the equation,
\[x_{1}=y\kappa\left(\frac{1+\kappa^{-1}\hat{\beta}_{1}}{1+\hat{\beta}_{1}} \right)^{2}. \tag{5.23}\]
By Lemma C.1 we have that if \(\varepsilon>0\) satisfies \(\hat{\beta}_{1}=\mathrm{e}^{-2\varepsilon}\beta_{1}\) then,
\[\varepsilon\asymp\frac{x_{0}-x_{1}}{y(1-\kappa)}. \tag{5.24}\]
The event that \(X_{\eta}<x_{1}\) is the same as the second class particle exiting out of the north boundary of the rectangle with vertex \((x_{1},y)\). The probability of this event is bounded by Proposition 5.1. We apply this proposition with \(x\) there being \(x_{1}\) here, and with \(\alpha_{1}=\hat{\beta}_{1}\) and \(\alpha_{2}=\mathrm{e}^{-\varepsilon}\beta_{1}\kappa^{-1}\). Then, we apply the expansion in Lemma 5.2 to estimate the expectations on the RHS of (5.2).
With this choice of \(\alpha_{1}=\hat{\beta}_{1}\), the term on the line (5.12) vanishes and the factor multiplying \(\varepsilon^{3}\) on the line (5.13) equals,
\[x_{1}\frac{\kappa^{-1}\hat{\beta}_{1}(1-\kappa^{-1}\hat{\beta}_ {1})}{(1+\kappa^{-1}\hat{\beta}_{1})^{3}}-y\frac{\hat{\beta}_{1}(1-\hat{\beta} _{1})}{(1+\hat{\beta}_{1})^{3}} =\frac{y\hat{\beta}_{1}}{(1+\hat{\beta}_{1})^{2}}\left(\frac{1- \kappa^{-1}\hat{\beta}_{1}}{1+\kappa^{-1}\hat{\beta}_{1}}-\frac{1-\hat{\beta} _{1}}{1+\hat{\beta}_{1}}\right)\] \[=\frac{2y\hat{\beta}_{1}^{2}(1-\kappa^{-1})}{(1+\hat{\beta}_{1}) ^{3}(1+\kappa^{-1}\hat{\beta}_{1})}\asymp-y(1-\kappa) \tag{5.25}\]
where the last inequalities hold by the assumption \(\kappa\geq\mathfrak{a}\). We consider now the error term on the line (5.14). Since \(f(\kappa^{-1}\alpha)=f(\alpha)+\mathcal{O}(1-\kappa)\) and \(x_{1}=y(1+\mathcal{O}(1-\kappa))\) we see that,
\[\left|2(yf(\alpha_{*})-x_{1}f(\kappa^{-1}\alpha_{*}))+16(x_{1}f(\kappa^{-1} \alpha_{**})-yf(\alpha_{**}))\right|\leq Cy(1-\kappa) \tag{5.26}\]
By taking \(c_{1}>0\) smaller if necessary we see that we have,
\[\mathbb{P}\left[X_{\eta}<x_{1}\right]\leq\mathrm{e}^{-\theta k}+C\mathrm{e}^{ \varepsilon k}\mathrm{e}^{-\varepsilon^{3}cy(1-\kappa)} \tag{5.27}\]
for any \(k>0\). The claim follows after choosing \(k=\lceil\varepsilon^{2}(1-\kappa)yc^{\prime}\rceil\) for some small \(c^{\prime}>0\).
We now briefly sketch the analogous argument that gives an estimate for the right tail of the exit point of a second class particle. The first result is the analog of Proposition 5.1.
**Proposition 5.4**.: _Let \(b_{1},b_{2}\in(0,1)\) satisfy (2.3) and let \(1>\delta_{1}>\delta_{2}>0\) and let \(\delta_{2}<\frac{1}{2}\). Let \(v_{0}\) be either the vertex \((1,0)\) or \((0,1)\). Let \(\xi_{0}\) be \((b_{1},b_{2})\) two-sided Bernoulli initial data, except that there is no arrow incoming from \(v_{0}\). Let \(Q_{\xi}\) be the second class particle emanating from \(v_{0}\). Let \(\mathcal{E}\) be the event it exits the box \(\Delta_{xy}\) along the eastern boundary. Let \(a_{1}>b_{1}\) and \(a_{2}>b_{2}\). Then, for any positive integer \(k\),_
\[\mathbb{P}[\mathcal{E}]\leq\mathrm{e}^{-\theta k}+\mathrm{e}^{2}\mathrm{e}^{ \varepsilon k}\mathbb{E}\left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2})}(x,y) }\right]^{1/2}\mathbb{E}\left[\mathrm{e}^{-\varepsilon H^{(a_{1},b_{2})}(x,y) }\right]^{1/2} \tag{5.28}\]
**Proof.** Let \(\eta_{0}\) be \((a_{1},b_{2})\) two-sided Bernoulli initial data, except with no arrow incoming from \(v_{0}\). As in the proof of Proposition 5.1, we can apply Proposition 4.1 to obtain a coupling between stochastic six vertex models \(\eta\) and \(\xi\) so that the second class particle \(Q_{\eta}\) stays to the right of the second class particle \(Q_{\xi}\), as \(\eta\) corresponds to the denser system. Therefore,
\[\mathbb{P}\left[Q_{\xi}\text{ exits along east boundary}\right]\leq\mathbb{P} \left[Q_{\eta}\text{ exits along east boundary}\right] \tag{5.29}\]
Let now \(\zeta_{0}\) be \((a_{1},a_{2})\) two-sided Bernoulli initial data except with an arrow incoming from \(v_{0}\) and \(\eta_{0}\) as before. We couple \(\zeta_{0}\) and \(\eta_{0}\) so that \(\zeta_{0}\geq\eta_{0}\). Now, couple the associated stochastic
six vertex models \(\zeta\) and \(\eta\) so that \(\zeta\geq\eta\) and color the resulting discrepancies grey. We now use the discrepancies to generate a second class particle for \(\eta_{0}\). Label the non-intersecting grey paths by \(X_{i}(n)\) as in the proof of Proposition 5.1. We generate a random walk \(b(n)\) on the labels of the \(X_{i}\) such that the union of the paths of \(\eta\) and the grey path traced out by \(b(n)\) gives a second class particle for this ensemble.
Let \(n_{0}\) be such that \(v_{0}\in\{(i,j):i+j=n_{0}\}\) and set \(b(n_{0}+1)=0\). Given \(b(n)\), we describe how to generate \(b(n+1)\). If there is only one incoming grey arrow to the vertex indicated by \(b(n)\), set \(b(n+1)=b(n)\). If there are two incoming grey arrows to this vertex, then there are two cases:
1. If the arrow indicated by \(b(n)\) enters from the left, then set \(b(n+1)=b(n)+1\) with probability \(\delta_{2}\) and \(b(n+1)=b(n)\) otherwise.
2. If the arrow indicated by \(b(n)\) enters from the bottom, then set \(b(n+1)=b(n)-1\) with probability \(\delta_{1}\) and \(b(n+1)=b(n)\) otherwise.
The path traced out by \(b(n)\) is a second class particle for \(\eta_{0}\). The random walk \(Z(n)=-b(n)\) falls into the setting of Lemma A.1. Let \(n_{1}=x+y\). Therefore,
\[\mathbb{P}[b(n_{1})\geq k]\leq\mathrm{e}^{-\theta k}. \tag{5.30}\]
Therefore,
\[\mathbb{P}\left[Q_{\eta}\text{ exits along east boundary}\right] \tag{5.31}\] \[\leq\ \mathbb{P}\left[Q_{\eta}\text{ exits along east boundary} \cap\{b(n_{1})\leq k\}\right]+\mathrm{e}^{-\theta k}.\]
Let \(\hat{H}^{(a_{1},a_{2})}(x,y)\) and \(\hat{H}^{(a_{1},b_{2})}(x,y)\) denote the height functions for \(\zeta\) and \(\eta^{+}\), respectively. We have,
\[\hat{H}^{(a_{1},a_{2})}(x,y)=\hat{H}^{(a_{1},b_{2})}(x,y)+\Phi_{G}(x,y) \tag{5.32}\]
where \(\Phi_{G}\) is the net flux of grey arrows across the line connecting \((0,0)\) to \((x,y)\). Note that the flux \(\Phi_{G}\) is a negative quantity, or at least is at most \(1\) (the grey arrows all start along the \(x\) axis, except for possibly the one started from \(v_{0}\)). If the second class particle \(Q_{\eta}\) exits along the east boundary, then it intersects the line \(\{(i,j):i+j=n_{1}\}\) to the right of the point \((x,y)\). Therefore, the path \(X_{b(n_{1})}\) traces out a path amongst the non-colliding ensemble of grey paths that crosses the line \(\{(i,j):i+j=n_{1}\}\) to the right of \((x,y)\). Every path \(X_{i}\) with index \(i>b(n_{1})\) begins on the horizontal axis and must also intersect the line \(\{(i,j):i+j=n_{1}\}\) to the right of \((x,y)\). These paths cannot contribute to the flux \(\Phi_{G}(x,y)\). Therefore,
\[\Phi_{G}(x,y)\geq-b(n_{1}). \tag{5.33}\]
Therefore,
\[\mathbb{P}\left[Q_{\eta}\text{ exits along east boundary}\right]\leq \mathrm{e}^{-\theta k}+\mathbb{P}\left[\hat{H}^{(a_{1},a_{2})}(x,y)-\hat{H}^{ (a_{1},b_{2})}(x,y)\geq-k\right] \tag{5.34}\]
and we finish via Markov's inequality and Cauchy-Schwarz similarly to Proposition 5.1.
Given the above, the following two results are analogous to Lemma 5.2 and Proposition 5.3. We omit the proofs.
**Lemma 5.5**.: _Let \(\varepsilon>0\) and \(0<b_{2}<1\). Let,_
\[\alpha_{2}=\mathrm{e}^{2\varepsilon}\beta_{2},\qquad\alpha_{1}=\mathrm{e}^{- \varepsilon}\kappa\alpha_{2}=\mathrm{e}^{\varepsilon}\kappa\beta_{2} \tag{5.35}\]
_Then,_
\[\log\left(\mathbb{E}\left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2}) }(x,y)}\right]\mathbb{E}\left[\mathrm{e}^{-\varepsilon H^{(a_{1},b_{2})}(x,y)} \right]\right)\] \[= \varepsilon^{2}\left(y\frac{\kappa\alpha_{2}}{(1+\kappa\alpha_{2 })^{2}}-x\frac{\alpha_{2}}{(1+\alpha_{2})^{2}}\right)-\varepsilon^{3}\left(y \frac{\kappa\alpha_{2}(1-\kappa\alpha_{2})}{(1+\kappa\alpha_{2})^{3}}-x\frac{ \alpha_{2}(1-\alpha_{2})}{(1+\alpha_{2})^{3}}\right)\] \[+\varepsilon^{4}2(xf(\alpha_{*})-yf(\kappa\alpha_{*}))+ \varepsilon^{4}16(yf(\kappa\alpha_{**})-xf(\alpha_{**})) \tag{5.36}\]
_for some \(\alpha_{*},\alpha_{**}\in(\beta_{2},\alpha_{2})\)._
**Proposition 5.6**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and let \(\delta_{2}<0.5\). Let \(x\geq 10\) and \(b_{1},b_{2}\in(0,1)\) satisfy (2.3). Assume there is a constant \(\mathfrak{a}>0\) so that \(\mathfrak{a}<b_{i}<1-\mathfrak{a}\), \(i=1,2\) and \(\kappa>\mathfrak{a}\). Let \(v_{0}\) be the vertex \((1,0)\) or \((0,1)\) and let \(\eta_{0}\) be \((b_{1},b_{2})\)-doubly sided Bernoulli initial data, except that there is no incoming arrow from \(v_{0}\). Consider the stochastic six vertex model with boundary data \(\eta_{0}\) and second-class particle starting at \(v_{0}\). Let \(Y_{\eta}\) denote the \(y\)-coordinate of where the second class particle crosses the line \(\{(i,j):i=x+\frac{1}{2}\}\). Let_
\[y_{0}=x_{0}\kappa^{-1}\left(\frac{1+\kappa\beta_{2}}{1+\beta_{2}}\right)^{2}, \tag{5.37}\]
_and let \(y_{1}<y_{0}\), with \(y_{1}\in\mathbb{Z}\). Then, there is a constant \(c_{1}>0\) so that if,_
\[10\leq|y_{1}-y_{0}|\leq c_{1}x(1-\kappa) \tag{5.38}\]
_we have,_
\[\mathbb{P}\left[Y_{\eta}<y_{1}\right]\leq C\left(\exp\left(-c\frac{\theta(y_{ 0}-y_{1})^{2}}{x(1-\kappa)}\right)+\exp\left(-c\frac{(y_{0}-y_{1})^{3}}{x^{2}( 1-\kappa)^{2}}\right)\right) \tag{5.39}\]
For later purposes, we prove the following estimate on the probability that the second class particle crosses the horizontal line \(\{(j,k):k=y\}\) at location much larger than the characteristic direction (Proposition 5.3 bounds the case that the exit point is much less/to the left of the characteristic direction).
**Proposition 5.7**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and assume \(\delta<0.5\). Assume there is a constant \(\mathfrak{a}>0\) so that \(\mathfrak{a}<b_{i}<1-\mathfrak{a}\), \(i=1,2\) and \(\kappa>\mathfrak{a}\), and assume \(\theta\geq\mathfrak{a}>0\). Let \(b_{1},b_{2}\) satisfy (2.3). Let \(v_{0}\) be the vertex \((1,0)\) or \((0,1)\) and let \(\eta_{0}\) be \((b_{1},b_{2})\)-doubly sided Bernoulli initial data, except there is no incoming arrow from \(v_{0}\). Consider the stochastic six vertex model with boundary data \(\eta_{0}\) and second class particle starting at \(v_{0}\). Fix \(y_{0}\geq 10\) and let \(X_{\eta}\) be the \(x\) coordinate of the point where the second class particle first touches the horizontal line \(\{(i,y):i\geq 0\}\). Let,_
\[x_{0}=y_{0}\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2}. \tag{5.40}\]
_Let \(x_{1}>x_{0}\). There are constants, \(C,c,c_{1}>0\) so that if_
\[10\leq|x_{0}-x_{1}|\leq c_{1}y_{0}(1-\kappa) \tag{5.41}\]
_then,_
\[\mathbb{P}\left[X_{\eta}>x_{1}\right]\leq C\mathrm{e}^{-c|x_{0}-x_{1}|^{3}/(y _{0}(1-\kappa))^{2}}. \tag{5.42}\]
**Proof.** Assume \(x_{1}\in\mathbb{Z}\). Let \(y_{1}\in\mathbb{Z}\) satisfy
\[\left|y_{1}-(x_{1}-1)\kappa^{-1}\left(\frac{1+\beta_{1}}{1+\kappa^{-1}\beta_{1} }\right)^{2}\right|\leq 1 \tag{5.43}\]
By adjusting constants if necessary we may assume that \(x_{1}\) is sufficiently large so that \(y_{1}\geq 10\). If \(X_{\eta}>x_{1}\), then the second class particle must exit the box \(\Delta_{x_{1}-1,y_{1}}\) along the eastern boundary with \(Y\) coordinate less than \(y_{0}\). We have that \(y_{1}\asymp y_{0}\asymp x_{0}\asymp x_{1}\) and that \(|y_{1}-y_{0}|\asymp|x_{0}-x_{1}|\). Therefore, by Proposition 5.6 we have,
\[\mathbb{P}\left[X_{\eta}>x_{1}\right]\leq C\mathrm{e}^{-c(y_{1}-y_{0})^{3}/(x _{1}(1-\kappa))^{2}}\leq C\mathrm{e}^{-c|x_{1}-x_{0}|^{3}/(y_{0}^{2}(1-\kappa) ^{2})}, \tag{5.44}\]
which yields the claim.
### Large deviations regime estimate for second class particles
We state the following estimate for second class particles. In the case that \(\delta_{1}\asymp 1-\kappa\) (which holds under Assumption 2.2(iii)) this simple estimate holds outside the moderate deviations regime \(k\apprle y(1-\kappa)\).
**Proposition 5.8**.: _Let \(Q\) denote a second class particle starting from \((1,0)\). Let \(x>10\) and let \(y_{Q}\) be the height at which \(Q\) crosses the line \(\{(i,j):i=x+1/2\}\). Let \(1>\delta_{1}>\delta_{2}>0\). Then, for any initial data we have for \(k\geq 100x\delta_{1}\)_
\[\mathbb{P}\left[y_{Q}\leq x-k\right]\leq C\mathrm{e}^{-ck} \tag{5.45}\]
_for some \(C,c>0\)._
**Proof.** By sampling the six vertex model with second class particle one column at a time, starting from the left, we see the following. For each column, let \(X_{i}\) be the event that the second class particle passes horizontally through the vertex that it first hits upon entering the \(i\)th column. Let \(\mathcal{F}_{i}\) be the sigma-algebra generated by the status of vertices in the first \(i\) columns. Clearly,
\[\mathbb{E}[X_{i}|\mathcal{F}_{i-1}]\leq\delta_{1}. \tag{5.46}\]
Therefore, for any collection of \(k\) distinct indices,
\[\mathbb{E}[X_{i_{1}}X_{i_{2}}\ldots X_{i_{k}}]\leq\delta_{1}^{k}. \tag{5.47}\]
On the event that \(y_{Q}\leq x-k\), at least \(k\) of the \(X_{i}\) must equal \(1\). Therefore,
\[\mathbb{P}\left[y_{Q}\leq x-k\right]\leq\binom{x}{k}\delta_{1}^{k}\leq\exp \left(k(1+\log(x\delta_{1})-\log(k))\right) \tag{5.48}\]
where we used \(\binom{n}{k}\leq(n\mathrm{e})^{k}k^{-k}\). The claim follows.
Tail estimates for height functions
In this section we apply the tail bounds for second class particles that we obtained in Section 5 to obtain tail estimates for the height function itself.
Our strategy is to first work under the assumption of _vanishing characteristic direction_, i.e. when
\[x=y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2}+\mathcal{ O}(1). \tag{6.1}\]
The extension to non-vanishing characteristic directions will be done using stationarity of the model of Lemma 3.1 and appears later in the section.
### Upper bound for right tail
We first obtain an upper bound for the right tail in the case of vanishing characteristic direction.
**Proposition 6.1**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and let \(\delta_{2}<\frac{1}{2}\). Let \(b_{1},b_{2}\) satisfy (2.3). Assume there is an \(\mathfrak{a}>0\) so that \(\mathfrak{a}<b_{i}<1-\mathfrak{a}\) for \(i=1,2\) and \(\kappa>\mathfrak{a}\). Assume that_
\[\left|x-y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2} \right|\leq 2, \tag{6.2}\]
_and that \(y(1-\kappa)\geq u\geq(y(1-\kappa))^{1/3}.\) Then, there are \(C>0,c>0\) so that_
\[\mathbb{P}\left[H^{(b_{1},b_{2})}(x,y)-\mathbb{E}\left[H^{(b_{1},b_{2})}(x,y) \right]>u\right]\leq C\mathrm{e}^{-c\theta u}+C\mathrm{e}^{-cu^{3/2}/(y(1- \kappa))^{1/2}}, \tag{6.3}\]
_where \(\theta\) is as in (5.1)._
**Proof.** Let \(0<\varepsilon<1\) and let \(\alpha_{1}:=\mathrm{e}^{-\varepsilon}\beta_{1}\). We will couple our \((b_{1},b_{2})\) system to some sparser S6V models that have \(a_{1}\) Bernoulli initial data along different portions of the \(y\)-axis.
Fix an \(L\in\mathbb{Z}\) with \(L\geq 10\) to be determined. Let \(\xi^{(0)}\) be a S6V model with the following boundary data. Along the \(x\) and \(y\) axes, we place incoming arrows independently with probability \(b_{2}\) and \(a_{1}\), respectively, except for the vertex \((0,L)\). Here we demand that there is no arrow incoming from \((0,L)\) to \((1,L)\) with probability one.
Let \(0<\chi<1\) satisfy,
\[(1-\chi)(1-a_{1})=1-b_{1}. \tag{6.4}\]
Note that \(\chi\asymp\varepsilon\). Construct now a S6V model \(\xi^{(L)}\) from \(\xi^{(0)}\) as follows. For any vertex \((0,i)\) with \(1\leq i<L\), add an arrow as follows. If this vertex already has an arrow, do nothing. If there is no arrow, then independently with probability \(\chi\), add an incoming grey arrow from \((0,i)\) to \((1,i)\). Then, allow the ensemble of grey arrows to evolve as a collection of second class particles as discussed in Section 4.2 and shown in Figure 4. We then define \(\xi^{(L)}\) as the union of the added grey arrows and the arrows of \(\xi^{(0)}\). Then \(\xi^{(L)}\) has the following distribution. It is a S6V model with boundary data on the \(x\) axis being Bernoulli \(b_{2}\), and along the \(y\) axis being Bernoulli \(b_{1}\) for vertices \((0,i)\) with \(i<L\), no incoming arrow from \((0,L)\), and Bernoulli \(a_{1}\) for vertices \((0,i)\) with \(i>L\).
Finally, we construct a third S6V model, \(\xi^{(y)}\) from \(\xi^{(L)}\) as follows. For all vertices \((0,i)\) with \(i>L\), we add an incoming arrow with probability \(\chi\) if there is no existing arrow there.
We also add an arrow incoming from \((0,L)\) to \((1,L)\) (recall that \(\xi^{(L)}\) has no incoming arrow here). Then, we allow the new arrows to evolve as second class particles, as discussed in Section 4.2 and shown in Figure 4. This is essentially the same construction as how we obtained \(\xi^{(L)}\) from \(\xi^{(0)}\). We take \(\xi^{(y)}\) to be the union of the new grey paths and \(\xi^{(L)}\). Then \(\xi^{(y)}\) has the distribution of a S6V model with \((b_{1},b_{2})\) Bernoulli initial data, except there is always an arrow incoming to the vertex \((1,L)\) from \((0,L)\).
Let us call the height functions of the models \(\xi^{(0)},\xi^{(L)},\xi^{(y)}\) as \(H_{0},H_{L}\) and \(H_{y}\), respectively. Since there is a coupling in which \(|H_{y}(x,y)-H^{(b_{1},b_{2})}(x,y)|\leq 1\) almost surely, it suffices to prove the tail bound of the proposition for \(H_{y}(x,y)\). In what follows we will omit the argument of the height functions, writing \(H_{y}=H_{y}(x,y)\), etc.
We decompose,
\[H_{y}=H_{0}+(H_{y}-H_{L})+(H_{L}-H_{0}), \tag{6.5}\]
and so (recall (3.12)),
\[\mathbb{P}\left[H_{y}-(yb_{1}-b_{2}x)>3u\right] \leq\mathbb{P}\left[H_{0}>(yb_{1}-b_{2}x)+u\right]\] \[+\mathbb{P}\left[(H_{y}-H_{L})>u\right]+\mathbb{P}\left[(H_{L}-H_{ 0})>u\right] \tag{6.6}\]
Since there is a coupling in which \(|H_{0}(x,y)-H^{(a_{1},b_{2})}(x,y)|\leq 1\) we may estimate,
\[\mathbb{P}\left[H_{0}>u+yb_{1}-xb_{2}\right]\leq C\mathrm{e}^{- \varepsilon(u+yb_{1}-xb_{2})}\mathbb{E}\left[\exp\left(\varepsilon H^{(a_{1}, b_{2})}(x,y)\right)\right]. \tag{6.7}\]
Now, by Lemma 3.2 and the Taylor expansion (5.10) we have,
\[\log\mathbb{E}\left[\exp\left(\varepsilon H^{(a_{1},b_{2})}(x,y) \right)\right]\] \[= y\left(\log(1+\kappa\beta_{2})-\log(1+\mathrm{e}^{-\varepsilon }\kappa\beta_{2})\right)+x\left(\log(1+\mathrm{e}^{-\varepsilon}\beta_{2})- \log(1+\beta_{2})\right)\] \[= \varepsilon(yb_{1}-xb_{2})+\frac{\varepsilon^{2}}{2}\left(x\frac{ \beta_{2}}{(1+\beta_{2})^{2}}-y\frac{\beta_{1}}{(1+\beta_{1})^{2}}\right)\] \[+ \frac{\varepsilon^{3}}{6}\left(y\frac{\kappa\beta_{2}(1-\kappa \beta_{2})}{(1+\kappa\beta_{2})^{3}}-x\frac{\beta_{2}(1-\beta_{2})}{(1+\beta_ {2})^{3}}\right)+\varepsilon^{4}(xf(\beta_{*})-yf(\kappa\beta_{*})) \tag{6.8}\]
where \(f\) is as in (5.9). Now, using (6.2) we see that the quadratic term on the second last line of (6.8) is \(\mathcal{O}(\varepsilon^{2})\), and the cubic term on the last line is,
\[\frac{\varepsilon^{3}}{6}\left(y\frac{\kappa\beta_{2}(1-\kappa\beta_{2})}{(1+ \kappa\beta_{2})^{3}}-x\frac{\beta_{2}(1-\beta_{2})}{(1+\beta_{2})^{3}}\right) =\frac{\varepsilon^{3}}{6}\frac{2y\kappa\beta_{2}^{2}(1-\kappa)}{(1+\kappa \beta_{2})^{3}(1+\beta_{2})}+\mathcal{O}(\varepsilon^{3}), \tag{6.9}\]
where we used (6.2). Finally, the quartic term on the last line of (6.8) is \(\mathcal{O}(\varepsilon^{4}y(1-\kappa))\). Therefore, assuming \(\varepsilon\leq 1\) we have,
\[\mathbb{P}\left[H_{0}>u+yb_{1}-xb_{2}\right]\leq\exp\left(-\varepsilon u+C( \varepsilon^{2}+\varepsilon^{3}y(1-\kappa))\right)\leq C\mathrm{e}^{- \varepsilon u+C\varepsilon^{3}y(1-\kappa)}. \tag{6.10}\]
Taking \(\varepsilon=c_{1}u^{1/2}(y(1-\kappa))^{-1/2}\) for sufficiently small \(c_{1}>0\) allows us to conclude,
\[\mathbb{P}\left[H_{0}>u+yb_{1}-xb_{2}\right]\leq C\exp\left(-cu^{3/2}(y(1- \kappa))^{-1/2}\right). \tag{6.11}\]
We now consider \(H_{L}-H_{0}\). This difference is equal to the number of grey arrows that we added in construction \(\xi^{(L)}\) from \(\xi^{(0)}\) that exit the box \(\Delta_{xy}\) along the east boundary. Therefore, this difference is bounded above by the total number of arrows added. Let \(B_{i}\) be the random variable that is \(1\) if an incoming arrow was added from \((0,i)\) to \((1,i)\). Then by construction, the \(B_{i}\) are iid Bernoulli \((1-a_{1})\chi\) random variables, and
\[H_{L}-H_{0}\leq\sum_{i=1}^{L-1}B_{i}. \tag{6.12}\]
Choose \(L=c_{2}u^{1/2}(y(1-\kappa))^{1/2}\) where \(c_{2}>0\) is chosen sufficiently small so that
\[L\chi<\frac{u}{10}. \tag{6.13}\]
It follows then that
\[\mathbb{P}\left[H_{L}-H_{0}\geq u\right]\leq\mathbb{P}\left[\sum_{i=1}^{L-1}B_ {i}\geq 2L\chi\right]\leq C\mathrm{e}^{-cL\chi^{2}}\leq C\mathrm{e}^{-cu^{3/2} (y(1-\kappa))^{-1/2}} \tag{6.14}\]
with the second inequality following from Hoeffding's inequality.
We consider now the difference \(H_{y}-H_{L}=\Phi_{G}\) where \(\Phi_{G}\) is the flux of grey arrows exiting the box \(\Delta_{xy}\) out the east boundary. Recall that the grey arrows, measuring the discrepancies between \(\xi^{(y)}\) and \(\xi^{(L)}\) enter \(\Delta_{xy}\) at the following locations. The southern most path enters at \((1,L)\), and all other paths enter at some random locations above \((1,L)\). We now argue that the event that \(H_{y}-H_{L}>u\) can be related to the event that a _single_ second class particle entering at the site \((1,L)\) in an otherwise \((b_{1},b_{2})\) doubly Bernoulli random environment exits out the eastern boundary of \(\Delta_{xy}\). The latter probability will be bounded by Proposition 5.6. The construction relating these events is very similar to that given in the proof of Proposition 5.1.
Let us label the non-intersecting grey paths by integers \(i\in\mathbb{Z}\) s.t. \(i\leq 0\). That is, we will label the path entering from \((1,L)\) by \(0\) and then the next path by \(-1\), the next by \(-2\), etc. Let \(X_{i}(n)\) be the vertex that the \(i\)th path touches along the line \(\{(k,j):k+j=n\}\). We now generate a random walk \(a(n)\) on the labels \(i\) such that the path formed by the edges \(\{(X_{a(n)}(n-1),X_{a(n)}(n)\}_{n}\) form a grey path that has the distribution of a second class particle in a doubly Bernoulli \((b_{1},b_{2})\) environment.
Let \(n_{0}=L\) and set \(a(n_{0})=a(n_{0}+1)=0\). Now, given the label \(a(n)\) we describe how to generate \(a(n+1)\). First, if there is only one incoming grey arrow to the vertex indicated by \(a(n)\), then set \(a(n+1)=a(n)\). If there are two incoming grey arrows to this vertex, then there are two cases:
1. If the arrow indicated by \(a(n)\) enters from the left, then set \(a(n+1)=a(n)+1\) with probability \(\delta_{1}\) and \(a(n+1)=a(n)\) otherwise.
2. If the arrow indicated by \(a(n)\) enters from the bottom, then set \(a(n+1)=a(n)-1\) with probability \(\delta_{2}\) and \(a(n+1)=a(n)\) otherwise.
As in the proof of Proposition 4.1, this produces the following distribution. If we take the model \(\xi^{(y)}\) and color grey the path traced out by \(a(n)\), then we have a S6V model with
doubly-Bernoulli boundary data \((b_{1},b_{2})\) except there is a second class particle entering from \((0,L)\).
Now, the random walk \(a(n)\) described above is in the setting of Lemma A.1. Let \(n_{1}=x+y\). We therefore have \(\mathbb{P}[a(n_{1})\leq-k]\leq\mathrm{e}^{-\theta k}\). Take \(k=u/2\). Now, if \(H_{y}-H_{L-1}>u\), then this must mean that at least \(u\) of the grey arrows start from the \(y\) axis exit out the east boundary of the rectangle \(\Delta_{xy}\). That is, the coordinate \(X_{u}(n_{1})\) must lie to the southeast of the point \((x,y)\) on the line \(\{(j,k):j+k=x+y\}\). If also \(a(n_{1})>-u/2\), then the path traced out by \(a(n)\) must exit the box \(\Delta_{xy}\) on the eastern boundary.
Therefore,
\[\mathbb{P}\left[H_{y}-H_{L}>u\right]\leq\mathrm{e}^{-c\theta u}+\mathbb{P} \left[Q_{L}\text{ exits out eastern boundary}\right] \tag{6.15}\]
where \(Q_{L}\) is the second class particle entering at \((0,L)\).
Consider now the collection of random variables \(\{\varphi^{(v)}(j,L):j\geq 1\}\), where, as in Section 3, \(\varphi^{(v)}(n,m)\) is the indicator function of there being an incoming vertical arrow to the vertex \((n,m)\) in \(\xi^{(y)}\). The random variables \(\{\varphi(j,L):j\geq 1\}\), depend only on the boundary data of \(\xi^{(y)}\) along the entire \(x\) axis and on the portion of the \(y\) axis \(\{(1,j):1\leq j\leq L-1\}\). By construction, these are Bernoulli \(b_{2}\) and \(b_{1}\), respectively. Therefore, by Lemma 3.1, the collection of random variables \(\{\varphi^{(v)}(j,L):j\geq 1\}\) are i.i.d. Bernoulli with probability \(b_{2}\).
Therefore, the event that \(Q_{L}\) exits out the eastern boundary has the same probability as the event that a second class particle in a doubly Bernoulli \((b_{1},b_{2})\) environment entering at \((0,1)\) exits the box \(\Delta_{xy}\) at height less than \(y-L\). Therefore, by Proposition 5.6 we have,
\[\mathbb{P}\left[Q_{L}\text{ exits out eastern boundary}\right]\leq C\mathrm{e}^ {-\theta cL^{2}/(y(1-\kappa))}+C\mathrm{e}^{-cL^{3}/(y^{2}(1-\kappa)^{2})}. \tag{6.16}\]
The claim now follows from our choice of \(L\).
### Lower bound for right tail
In this section we complement the upper bound for the right tail with a lower bound on the moment generating function (which will later be used to obtain a lower bound for the tail itself).
**Proposition 6.2**.: _Let \(1>\delta_{1}>\delta_{2}>0\). Let \(b_{1},b_{2}\in(0,1)\). Assume that,_
\[\left|x-\kappa y\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2} \right|\leq 2. \tag{6.17}\]
_and that \(y(1-\kappa)\geq 1\). There is a \(c>0\) so that for \(0<\varepsilon<c\) that,_
\[\mathbb{E}\left[\exp\varepsilon\left(H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b _{1},b_{2})}(x,y)]\right)\right]\geq\exp\left(cy(1-\kappa)\varepsilon^{3} \right). \tag{6.18}\]
**Proof.** Let \(a_{1}<b_{1}\) satisfy \(\alpha_{1}=\mathrm{e}^{-\varepsilon}\beta_{1}\). Under the basic coupling between stochastic six vertex models with \((a_{1},b_{2})\) and \((b_{1},b_{2})\) Bernoulli initial data we have
\[H^{(a_{1},b_{2})}(x,y)\leq H^{(b_{1},b_{2})}(x,y). \tag{6.19}\]
We apply Lemma 3.2 to the height function on the LHS. Arguing as in the proof of Proposition 6.1 we have,
\[\log\mathbb{E}\left[\exp\left(\varepsilon H^{(a_{1},b_{2})}(x,y) \right)\right]\] \[= \varepsilon(yb_{1}-xb_{2})+\frac{\varepsilon^{3}}{6}\frac{2y\kappa \beta_{2}^{2}(1-\kappa)}{(1+\kappa\beta_{2})^{3}(1+\beta_{2})}+\mathcal{O}( \varepsilon^{2})+\mathcal{O}(y(1-\kappa)\varepsilon^{4}), \tag{6.20}\]
from which we conclude the desired estimate.
### Left tail estimate for height functions
We can also derive an upper bound for the left tail of the height function. The proof of the following is very similar to that of Proposition 6.1 and so not all details are given.
**Proposition 6.3**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and let \(0<\delta_{2}<\frac{1}{2}\). Let \(b_{1},b_{2}\in(0,1)\) satisfy (2.3). Let \(x,y>0\) satisfy,_
\[\left|x-y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2} \right|\leq 2. \tag{6.21}\]
_For \(u\) satisfying \(y(1-\kappa)\geq u\geq(y(1-\kappa))^{1/3}\) we have that,_
\[\mathbb{P}\left[H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_{1},b_{2})}(x,y)]<-u \right]\leq C\mathrm{e}^{-c\theta u}+C\mathrm{e}^{-cu^{3/2}/(y(1-\kappa))^{1/ 2}}. \tag{6.22}\]
**Proof.** Let \(0<\varepsilon<1\) and \(\alpha_{1}=\mathrm{e}^{\varepsilon}\beta_{1}\). We will proceed similarly to the proof of Proposition 6.1. However, this time we are coupling the \((b_{1},b_{2})\) model to denser models that have \(a_{1}\) Bernoulli initial data along portions of the \(y\)-axis.
Fix \(L\in\mathbb{Z}\) satisfying \(L\geq 10\). Let \(\xi^{(y)}\) be a stochastic six vertex model with doubly-sided \((b_{1},b_{2})\) Bernoulli initial data except there is never an arrow incoming from \((0,L)\) to \((1,L)\). Then let \(\xi^{(L)}\) be obtained from \(\xi^{(y)}\) as follows. Let \(0<\chi<1\) satisfy,
\[\chi(1-b_{1})=a_{1}-b_{1}. \tag{6.23}\]
At every site \((0,i)\) with \(i>L\), if \(\xi^{(y)}\) contains no incoming arrow, we add one independently with probability \(\chi\). We also add an incoming arrow at the empty site \((0,L)\). We then allow the incoming arrows to evolve as second-class particles. We let \(\xi^{(L)}\) be the union of the second class particles and \(\xi^{(y)}\). The distribution of \(\xi^{(L)}\) is then that of a stochastic six vertex model with boundary data that is Bernoulli \(b_{2}\) on the \(x\)-axis, Bernoulli \(a_{1}\) for vertices \((0,i)\) with \(i>L\), and Bernoulli \(b_{1}\) for vertices \((0,i)\) with \(i<L\) and always has an arrow incoming at \((0,L)\).
We now obtain \(\xi^{(0)}\) from \(\xi^{(L)}\) as follows. At each vertex \((0,i)\) with \(i<L\), if there is no incoming arrow we add one independently with probability \(\chi\). The resulting arrows are then allowed to evolve as second class particles. We form \(\xi^{(0)}\) by taking the union of the new paths together with \(\xi^{(L)}\). It follows that \(\xi^{(0)}\) has the distribution of a stochastic six vertex model with \((a_{1},b_{2})\) doubly-sided Bernoulli initial data, but there is always an arrow incoming at \((0,L)\).
Denote the height functions of \(\xi^{(y)},\xi^{(L)},\xi^{(0)}\) by \(H_{y},H_{L}\) and \(H_{0}\), respectively. Since there is a coupling for which we have \(|H^{(b_{1},b_{2})}(x,y)-H_{y}(x,y)|\leq 1\) it suffices to prove the proposition for \(H_{y}\).
We have,
\[\mathbb{P}\left[H_{y}-(yb_{1}-b_{2}x)<-3u\right] \leq\mathbb{P}\left[H_{0}-(yb_{1}-b_{2}x)<-u\right]\] \[+\mathbb{P}\left[H_{L}-H_{y}>u\right]+\mathbb{P}\left[H_{0}-H_{L}>u \right]. \tag{6.24}\]
Since there is a coupling in which \(|H_{0}(x,y)-H^{(a_{1},b_{2})}(x,y)|\leq 1\), we have \(\mathbb{E}\left[\mathrm{e}^{-\varepsilon H_{0}}\right]\leq C\mathbb{E}\left[ \mathrm{e}^{-\varepsilon H^{(a_{1},b_{2})}(x,y)}\right]\) and, we calculate,
\[\log\mathbb{E}\left[\exp\left(-\varepsilon H_{0}\right)\right] =\log\mathbb{E}\left[\exp\left(-\varepsilon H^{(a_{1},b_{2})}(x,y )\right)\right]\] \[=y\left(\log(1+\kappa\beta_{2})-\log(1+\mathrm{e}^{\varepsilon} \kappa\beta_{2})\right)+x\left(\log(1+\mathrm{e}^{\varepsilon}\beta_{2})-\log (1+\beta_{2})\right)\] \[=-\varepsilon(yb_{1}-xb_{2})+\frac{\varepsilon^{2}}{2}\left(x \frac{\beta_{2}}{(1+\beta_{2})^{2}}-y\frac{\kappa\beta_{2}}{(1+\kappa\beta_{ 2})^{2}}\right)\] \[+\frac{\varepsilon^{3}}{6}\left(x\frac{\beta_{2}(1-\beta_{2})}{( 1+\beta_{2})^{3}}-y\frac{\kappa\beta_{2}(1-\kappa\beta_{2})}{(1+\kappa\beta_{ 2})^{3}}\right)+\varepsilon^{4}(xf(\beta_{*})-yf(\kappa\beta_{*})) \tag{6.25}\]
The quartic error term is \(\mathcal{O}(\varepsilon^{4}y(1-\kappa))\). The cubic term is negative. The quadratic term is \(\mathcal{O}(\varepsilon^{2})\). So,
\[\log\mathbb{E}\left[\exp\left(-\varepsilon H^{(a_{1},b_{2})}(x,y)\right) \right]\leq-\varepsilon\mathbb{E}[H_{y}]+C\varepsilon^{2}-cy(1-\kappa) \varepsilon^{3}. \tag{6.26}\]
Therefore, taking
\[\varepsilon=c_{1}u^{1/2}/(y(1-\kappa))^{1/2} \tag{6.27}\]
for some sufficiently small \(c_{1}>0\), we obtain,
\[\mathbb{P}\left[-H_{0}+(yb_{1}-b_{2}x)>u\right]\leq C\mathrm{e}^{-cu^{3/2}(y(1 -\kappa))^{-1/2}}. \tag{6.28}\]
Later we will need to take \(c_{1}>0\) still possibly smaller. Consequently, the \(c>0\) on the RHS above will get smaller, but this does not affect the proof.
We make the choice
\[L=c_{2}u^{1/2}(y(1-\kappa))^{1/2}/c_{1} \tag{6.29}\]
for \(c_{2}>0\) sufficiently small so that
\[L\chi<\frac{u}{10}. \tag{6.30}\]
By taking \(c_{1}>0\) possibly smaller we can enforce that \(L\geq 10\). Since \(\chi\asymp\varepsilon\) this choice can be made independently of the size of \(c_{1}>0\). Now, for the difference \(H_{0}-H_{L}\) we have,
\[H_{0}-H_{L}\leq\sum_{i=1}^{L-1}B_{i} \tag{6.31}\]
where \(B_{i}\) is the random variable that is \(1\) iff an arrow was added incoming from \((0,i)\) in obtaining \(\xi^{(0)}\) from \(\xi^{(L)}\) and \(0\) otherwise. Then \(B_{i}\) are independent Bernoulli with probability \((1-b_{1})\chi\). By Hoeffding's inequality,
\[\mathbb{P}\left[H_{0}-H_{L}>u\right]\leq\mathbb{P}\left[\sum_{i=1}^{L-1}B_{i}>2 L\chi\right]\leq C\mathrm{e}^{-cL\chi^{2}}\leq C\mathrm{e}^{-cu^{3/2}(y(1- \kappa))^{-1/2}}. \tag{6.32}\]
We consider now the difference \(H_{L}-H_{y}\). Recall that here, contrary to the right tail case, \(H_{L}\) corresponds to the denser system and \(H_{y}\) to the sparser system. Coloring the discrepancies between the systems \(\xi^{(L)}\) and \(\xi^{(y)}\) grey, we see that this is an ensemble of non-intersecting paths evolving as second-class particles in the presence of the background \(\xi^{(y)}\). There is always an arrow incoming at \((0,L)\). As in the proof of Proposition 6.1, label the non-intersecting paths by intgers \(i\leq 0\) with the path incoming from \((0,L)\) labelled by \(0\), the next highest by \(-1\), etc. We let \(X_{i}(n)\) be the location of the \(i\)th path on the line \(\{(j,k):j+k=n\}\). We now generate a random walk \(a(n)\) on the labels \(i\) such that the path formed by the edges \(\{(X_{a(n)}(n-1),X_{a(n)}(n)\}_{n}\) form a grey path that has the distribution of a second class particle in an environment that is Bernoulli \(b_{2}\) on the \(x\) axis, Bernoulli \(b_{1}\) for \((0,i)\), \(i<L\) and Bernoulli \(a_{1}\) for \((0,i)\) with \(i>L\). That is, it is a second-class particle for the denser system \(\xi^{(L)}\).
Let \(n_{0}=L\) and set \(a(n_{0})=a(n_{0}+1)=0\). Now, given the label \(a(n)\) we describe how to generate \(a(n+1)\). First, if there is only one incoming grey arrow to the vertex indicated by \(a(n)\), then set \(a(n+1)=a(n)\). If there are two incoming grey arrows to this vertex, then there are two cases:
1. If the arrow indicated by \(a(n)\) enters from the left, then set \(a(n+1)=a(n)+1\) with probability \(\delta_{1}\) and \(a(n+1)=a(n)\) otherwise.
2. If the arrow indicated by \(a(n)\) enters from the bottom, then set \(a(n+1)=a(n)-1\) with probability \(\delta_{2}\) and \(a(n+1)=a(n)\) otherwise.
As in the proof of Proposition 4.1, this produces the desired distribution.
Now, the random walk \(a(n)\) described above is in the setting of Lemma A.1. Let \(n_{1}=x+y\). Therefore, \(\mathbb{P}[a(n_{1})\leq-k]\leq\mathrm{e}^{-\theta k}\). Take \(k=u/2\). If \(H_{L}-H_{y}>u\), then \(X_{-u}(n_{1})\) lies to southeast of the point \((x,y)\) on the line \(\{(j,k):j+k=x+y\}\). If \(a(n_{1})\geq-u\), then the path traced out by \(a(n_{1})\) exits the box \(\Delta_{xy}\) along the eastern boundary. Label this second class particle \(Q_{L}\). We there have,
\[\mathbb{P}\left[H_{L}-H_{y}>u\right]\leq\mathrm{e}^{-c\theta u}+\mathbb{P} \left[Q_{L}\text{ exits out eastern boundary}\right]. \tag{6.33}\]
Due to the translation invariance of Section 3 it follows that the probability on the RHS coincides with the probability of a second class particle entering from the vertex \((0,1)\) in \((a_{1},b_{2})\)-doubly sided Bernoulli initial data exiting the box \(\Delta_{xy}\) out the eastern boundary at a height less than \(y-L+1\).
By Proposition 4.1, this probability is bounded above by a second class particle exiting the eastern boundary at height less than \(y-L+1\) in \((a_{1},a_{2})\) Bernoulli initial data with \(\alpha_{2}=\mathrm{e}^{\varepsilon}\beta_{2}\).
Let \(y_{0}\in\mathbb{Z}\) satisfy,
\[\left|y_{0}-\kappa^{-1}x\left(\frac{1+\kappa\mathrm{e}^{\varepsilon}\beta_{2} }{1+\mathrm{e}^{\varepsilon}\beta_{2}}\right)^{2}\right|\leq 1. \tag{6.34}\]
By Taylor expansion, \(|y_{0}-y|\leq C(1-\kappa)y\varepsilon\) for some \(C>0\). Choose now \(c_{1}>0\) sufficiently small so that
\[L>10|y_{0}-y|. \tag{6.35}\]
Then we have that \(|y_{0}-L|\asymp|y-L|\). Therefore, by Proposition 5.6 it follows that
\[\mathbb{P}\left[Q_{L}\text{ exits out eastern boundary}\right]\leq C\mathrm{e}^{ -cL^{2}/(y(1-\kappa)+C\mathrm{e}^{-cL^{3}/(y(1-\kappa))^{2}}}. \tag{6.36}\]
This completes the proof.
### Off characteristic direction
**Proposition 6.4**.: _Let \(1>\delta_{1}>\delta_{2}>0\) and let \(\delta_{2}<\frac{1}{2}\). Let \(b_{1},b_{2}\in(0,1)\) satisfy (2.3). Let \(y\geq 1\) and let \(x_{0}\in\mathbb{Z}\) satisfy,_
\[\left|x_{0}-y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2} \right|\leq 1. \tag{6.37}\]
_Then for any \(x\geq 1\) and choice of \(\pm\) we have,_
\[\mathbb{P}\left[\pm\left(H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_ {1},b_{2})}(x,y)]\right)>u\right]\] \[\leq \mathbb{P}\left[\pm\left(H^{(b_{1},b_{2})}(x_{0},y)-\mathbb{E}[H^ {(b_{1},b_{2})}(x_{0},y)]\right)>\frac{u}{2}\right]+C\mathrm{e}^{-cu^{2}/|x-x_ {0}|} \tag{6.38}\]
**Proof.** If \(x\geq x_{0}\), then using the representation \(H=W-N\) we have,
\[H^{(b_{1},b_{2})}(x,y)=H^{(b_{1},b_{2})}(x_{0},y)-\sum_{i>x_{0}}^{x}\varphi^{( v)}(i,y+1). \tag{6.39}\]
By translation invariance, the sum on the RHS is a sum of iid Bernoulli \(b_{2}\) random variables. Therefore by Hoeffding's inequality,
\[\mathbb{P}\left[\left|\sum_{i>x_{0}}^{x}\varphi^{(v)}(i,y+1)-(x-x_{0})b_{2} \right|>u\right]\leq 2\mathrm{e}^{-cu^{2}/(x-x_{0})}. \tag{6.40}\]
If \(x<x_{0}\), then using instead the representation \(H=E-S\) we have,
\[H^{(b_{1},b_{2})}(x_{0},y)=\left(H^{(b_{1},b_{2})}(x_{0},y)+\sum_{i=1}^{x_{0}- x}\varphi^{(v)}(i,1)\right)+\sum_{i=1}^{x_{0}-x}\varphi^{(v)}(i,1) \tag{6.41}\]
The first factor on the RHS has the same distribution as \(H^{(b_{1},b_{2})}(x,y)\). The second factor is a sum of iid Bernoulli \(b_{1}\) random variables and we conclude similarly to the other case.
### Large deviations regime
In this section we quickly give how to deduce tail estimates for the six vertex model for \(u\geq y(1-\kappa)\).
**Lemma 6.5**.: _Let \(n\delta_{1}\geq 1\). For any choice of initial data we have,_
\[\mathbb{P}\left[|H(n,n)|>k\right]\leq 2\mathrm{e}^{-k(1+\log(k/n\delta_{1}))}. \tag{6.42}\]
_for \(k\geq 10n\delta_{1}\)._
**Proof.** We first estimate the event \(\{H(n,n)>k\}\). This event is contained in the event that the \(k\)th lowest non-intersecting path originating on the \(y\) axis crosses the line \(\{(i,j):j=n\}\) to the right of the point \((n,n)\). Let \(X_{i}\) be the event that the path on first entering the \(i\)th column of vertices, passes through the vertex directly without turning upwards. Then, on the event \(\{H(n,n)>k\}\) we have that at least \(k\) of the \(X_{i}\)'s must be \(1\), because the path
must enter the first column no lower than at position \((1,k)\). Then, similarly to the proof of Proposition 5.8 we have,
\[\mathbb{P}\left[H(n,n)>k\right]\leq\binom{n}{k}\delta_{1}^{k}\leq\exp\left(k(1+ \log(n\delta_{1})-\log(k))\right). \tag{6.43}\]
For the event \(\{H(n,n)<-k\}\) we argue similarly, instead tracking the position of the particle as it moves between rows of vertices instead of columns.
**Proposition 6.6**.: _Let \(n\delta_{1}\geq 1\). Let \(m\) be such that \(|m-n|\leq C_{1}n\delta_{1}\) for some \(C_{1}>0\). Then there is a \(C_{2}>0\) so that for \(k\geq C_{2}n\delta_{1}\) we have,_
\[\mathbb{P}\left[|H(n,m)|>k\right]\leq 2\mathrm{e}^{-\frac{k}{2}(1+\log(k/n \delta_{1}))}. \tag{6.44}\]
**Proof.** This follows from the fact that \(|H(n,m)-H(n,n)|\leq|n-m|\) for any \(m,n\) and the previous lemma.
**Corollary 6.7**.: _Let \(1>\delta_{1}>\delta_{2}>0\) satisfy Assumption 2.2. Let \(b_{1},b_{2}\) satisfy (2.3). Let \(x,y\) satisfy_
\[\left|x-y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2} \right|\leq 2. \tag{6.45}\]
_Suppose that \(y(1-\kappa)\geq 10\). There are \(C,c>0\) so that for \(u\) satisfying \(u\geq Cy(1-\kappa)\) we have,_
\[\mathbb{P}\left[|H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_{1},b_{2})}(x,y)]|>u \right]\leq C\mathrm{e}^{-cu}, \tag{6.46}\]
_as well as for \(0<\varepsilon<c\) that,_
\[\mathbb{E}\left[\exp\left\{\varepsilon\left(\left[H^{(b_{1},b_{2})}(x,y)- \mathbb{E}[H^{(b_{1},b_{2})}(x,y)]\right]\right)\right\}\right]\leq C\mathrm{ e}^{C\varepsilon^{3}y(1-\kappa)}. \tag{6.47}\]
**Proof.** Under our assumptions we have that \(\delta_{1}\asymp(1-\kappa)\), as well as that
\[|x-y|\leq Cy(1-\kappa),\qquad|\mathbb{E}[H^{(b_{1},b_{2}}(x,y)]|\leq Cy(1- \kappa). \tag{6.48}\]
The first estimate (6.46) then follows immediately from Proposition 6.6. The second estimate follows from the layer cake representation applied to the function \(\mathrm{e}^{\varepsilon s}\), and the estimates (6.46) and Proposition 6.1.
### Proof of Theorem 2.3
The upper bounds follow immediately from Propositions 6.1, 6.3, 6.4 and (6.46). For the lower bound, fix \(y\) and let \(x_{0}\) satisfy
\[\left|x_{0}-y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2} \right|\leq 1. \tag{6.49}\]
By Proposition 6.2 and (6.47) and Proposition A.1 of [35] we have that
\[\mathbb{P}\left[H^{(b_{1},b_{2})}(x_{0},y)-\mathbb{E}[H^{(b_{1},b_{2})}(x_{0}, y)]>u\right]\geq c\mathrm{e}^{-Cu^{3/2}/(y(1-\kappa))^{1/2}} \tag{6.50}\]
for \(0<u<cy(1-\kappa)\). Similar to the proof of Proposition 6.4 we then have,
\[\mathbb{P}\left[H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_{1},b_{2})}(x,y)]>u\right] \geq c\mathrm{e}^{-Cu^{3/2}/(y(1-\kappa))^{1/2}}-C\mathrm{e}^{-cu^{2}/|x-x_{0}|}. \tag{6.51}\]
The LHS is greater than \(c\mathrm{e}^{-Cu^{3/2}/(y(1-\kappa))^{1/2}}/2\) as long as \(u\) satisfies,
\[u\geq C_{1}\left(|x-x_{0}|^{2}/(y(1-\kappa))+(y(1-\kappa))^{1/3}\right) \tag{6.52}\]
for some large \(C_{1}>0\). Under the assumption that \(|x-x_{0}|\leq A(y(1-\kappa))^{2/3}\) this simplifies to \(u\geq C_{1}A(y(1-\kappa))^{1/3}\), and so the estimate is obtained for such \(u\). Adjusting the constants in the resulting lower bound yields
\[\mathbb{P}\left[H^{(b_{1},b_{2})}(x,y)-\mathbb{E}[H^{(b_{1},b_{2})}(x,y)]>u \right]\geq c^{\prime}\mathrm{e}^{-C^{\prime}u^{3/2}/(y(1-\kappa))^{1/2}} \tag{6.53}\]
for all \(u\) satisfying \((y(1-\kappa))^{1/3}\leq u\leq c^{\prime}y(1-\kappa)\).
## 7 Results for the ASEP
### Convergence of stochastic six vertex model to the ASEP
In order to deduce our results for the ASEP, we require the following convergence results. They are slight modifications of analogous results of [1]. Consider the stochastic six vertex model with some boundary data \(\{\varphi(i):i\in\mathbb{Z}\backslash\{0\}\}\). That is, a particle enters at site \((i,0)\) iff \(\varphi(i)=1\) and enters from site \((0,i)\) iff \(\varphi(-i)=1\). We tag the particles of the model as follows. The leftmost particle entering along the \(x\) axis will be labelled by \(1\), the next particle from the \(x\) axis by \(2\), etc. The lowest particle entering along the \(y\) axis will be labelled by \(0\), the next by \(-1\), etc. For any \(t>0\) we then let \(p_{i}(t)\) be the location on the horizontal line \(\{(j,t):j\geq 0\}\) at which the particle \(p_{i}(t)\) passes from height \(t\) to \(t+1\), i.e., wherever the path has a vertical arrow that exits this vertical line.
Given the boundary data \(\{\varphi(i):i\in\mathbb{Z}\backslash\{0\}\}\), we construct ASEP with this initial data by placing particles at site \(i\geq 1\) iff \(\varphi(i)=1\) and at \(i\leq 0\) iff \(\varphi(i-1)=1\). We can label the particles by integers \(\mathbb{Z}\) such that particle \(1\) is the first one starting from \(\{n:n\geq 1\}\), and then the particle to the right of it is labelled by \(2\), and the particle to the left is labelled by \(0\), etc.
**Proposition 7.1**.: _Fix, \(L,R>0\). Let \(\xi\) be a stochastic six vertex model with parameters \(\delta_{1}=\varepsilon L\) and \(\delta_{2}=\varepsilon R\), with initial data being \((b_{1},b_{2})\) doubly Bernoulli. We let \(b_{2}\in(0,1)\) fixed and then choose \(b_{1}\) so that_
\[\frac{b_{1}}{1-b_{1}}=\frac{1-\delta_{1}}{1-\delta_{2}}\frac{b_{2}}{1-b_{2}}. \tag{7.1}\]
_Note that \(b_{1}\) depends on \(\varepsilon>0\) through \(\delta_{1},\delta_{2}\). Let \(p_{i}(t)\) denote the particles in the stochastic six vertex model and let \(X_{i}(t)\) denote the particles in the ASEP with jump rates \(L,R\) and initial data iid Bernoulli with probability \(b_{2}\). Define \(q_{i}(t)=p_{i}(t)-t\). Then, for any finite \(S\subseteq\mathbb{Z}^{n}\), \(i_{1},i_{2},\ldots i_{n}\in\mathbb{Z}\), \(0<t_{1},t_{2},\ldots,t_{n}\in\mathbb{R}\) we have,_
\[\lim_{\varepsilon\to 0}\mathbb{P}\left[q_{i_{1}}([\varepsilon^{-1}t_{1}]), \ldots,q_{i_{n}}([\varepsilon^{-1}t_{n}])\in S\right]=\mathbb{P}\left[X_{i_{1 }}(t_{1}),\ldots,X_{i_{n}}(t_{n})\in S\right]. \tag{7.2}\]
**Corollary 7.2**.: _Under the above assumptions, we have for all \(r\in\mathbb{Z}\) that,_
\[\lim_{\varepsilon\to 0}\mathbb{P}\left[H^{(b_{1},b_{2})}(x+\lfloor\varepsilon^{-1 }t\rfloor,\lfloor\varepsilon^{-1}t\rfloor)>r\right]=\mathbb{P}\left[J_{t}(x) \geq r\right] \tag{7.3}\]
_where \(J_{t}(x)\) is the current of particles in the ASEP with Bernoulli \(b_{2}\) initial data._
Proposition 7.1 and Corollary 7.2 are modifications of Theorem 3 and Corollary 4 of [1]. The only difference is that in our case, the distribution of the boundary data of the stochastic six vertex model depends on \(\varepsilon>0\), whereas in [1] the distribution is fixed. However, it is not too hard to see that the proof given in [1] extends without much difficulty to our case. First, the deduction of Corollary 7.2 from Proposition 7.1 is the same in our case as in [1] and so one only needs to prove Proposition 7.1. In the next subsection we detail the minor modifications of [1] required.
#### 7.1.1 Proof of Proposition 7.1
As noted in [1], the proof of Proposition 7.1 is relatively straightforward in the case that there are only finitely many particles in the system. The point of the proof then is to show that with probability at least \(1-\delta\), there is a large interval \([-M,N]\) so that if one restricts the model (i.e., the ASEP and the off-set stochastic six vertex model particles \(q_{i}(t)\)) to this interval, then the \(X_{i_{k}}(t_{k})\), \(q_{i_{k}}(\lfloor\varepsilon t_{k}\rfloor)\) of the Proposition statement coincide with their truncated versions. One must be able to choose the \([\![M,N]\!]\) uniformly in \(\varepsilon>0\).
This truncation or restriction is fostered through the notion of a _time graph_. We will not give the complete definition here of the time graph, and refer the interested reader to [1] for the complete definition. We simply state that the time graph can be thought of as giving the "jump instructions" of the ASEP or offset stochastic six vertex model. In the case of the ASEP, the time graph is simply subset of \(\mathbb{R}_{>0}\times\mathbb{Z}\times\mathbb{Z}\), where edges are of the form \((t,i,i+1)\) and \((t,i,i-1)\). The presence of an edge \((t,i,i\pm 1)\) means that, at time \(t\), a particle at site \(i\) attempts to jump to \(i\pm 1\) if it is allowed. By generating this graph using Poisson processes with rates \(R,L\) one gives a distributionally equivalent way of generating the ASEP. The time graph for the process \(q_{i}(t)\) is more complicated but analogous.
The bulk of the proof of Theorem 3 of [1] consists of showing that if one truncates the time graphs to an interval \([-M,N]\) - that is, if one removes all jumps that enter or leave the interval - then with high probability, there is some compact interval, containing, say, \([-cM,cN]\) for some \(c>0\), so that all of the particles that start inside this interval coincide at later times \(t>0\) under both the truncated and full dynamics. Considering that this kind of argument is essentially independent of the initial data it is therefore not surprising that Proposition 7.1 holds.
The proof of Theorem 3 of [1] consists of three Propositions, numbered 6, 7, 8. Proposition 6 states that the truncated ASEP converges to the full ASEP. This is of course unchanged in our set-up.
Proposition 7 implies that, uniformly in \(\varepsilon>0\), the truncated offset stochastic six vertex model converges to the full model as \(M,N\to\infty\). Reading through the proof of this Proposition given in Section 6.2 of [1], the only location in which there is any dependence in the argument on the initial data is in the very last sentence of this section. What is required is that, uniformly in \(\varepsilon>0\), that,
\[\lim_{M,N\to\infty}\mathbb{P}\left[q_{i_{k}}(0)\in[-M/2,N/2],k=1,\ldots n\right] =1. \tag{7.4}\]
But this obviously holds in our case because as \(\varepsilon\to 0\), the parameters in our Bernoulli initial data are bounded away from \(0\) and \(1\).
Proposition 8 of [1] is then the statement that the truncated offset stochastic six vertex model converges to the truncated ASEP. In this proof, an auxiliary process \(\tilde{q}_{i}(t)\) is introduced, for which the convergence to the ASEP is more straightforward: for the \(\tilde{q}_{i}\), jumps with range more than \(1\) are deleted, as are jumps of multiple particles in the same time step. The convergence of the \(\tilde{q}_{i}\) to the ASEP is then the same as in our case as in [1]. Only finitely many particles can ever move in either process, and so the changing boundary data does not play a role, as the convergence only needs to deal with finitely many different initial conditions.
Finally, the proof that the difference between \(\tilde{q}_{i}\) and the original offset process \(q_{i}\) is dealt with in Section 7.2 of [1]. Here, the only requirement on the initial data in this part of the proof is that as \(\varepsilon\to 0\), the probability that \(\mathbb{P}\left[q_{i_{r}}(0)\leq-\lceil\varepsilon t_{k}\rceil\right]\) tends to \(0\). But this is a consequence again of the fact that our Bernoulli parameters are bounded away from \(0\) and \(1\).
### Proof of Theorem 2.4
For small \(\varepsilon>0\) let \(\delta_{1}=\varepsilon L\) and \(\delta_{2}=\varepsilon R\) for \(L>R>0\). For all sufficiently small \(\varepsilon>0\) we see that Assumption 2.2 holds. Fix \(b\in(0,1)\). We will apply Theorem 2.3 with \(b_{2}=b\) and then \(b_{1}\) defined by \(\beta_{1}=\kappa\beta_{2}\). For \(T\geq 10\), substituting in \(x=\lfloor\varepsilon^{-1}T\rfloor+X\) and \(y=\lfloor\varepsilon^{-1}T\rfloor\) we see that,
\[y\kappa\left(\frac{1+\kappa^{-1}\beta_{1}}{1+\beta_{1}}\right)^{2}=\varepsilon ^{-1}T+T(R-L)(1-2b)+\mathcal{O}(1)+\mathcal{O}(\varepsilon T), \tag{7.5}\]
as well as that
\[\mathbb{E}\left[H^{(b_{1},b_{2})}(x,y)\right]=yb_{1}-xb=b(1-b)T(R-L)-bX+ \mathcal{O}(1)+\mathcal{O}(\varepsilon T). \tag{7.6}\]
Therefore, from (2.6) we have for \(\varepsilon\leq T^{-1}\), \(T\) sufficiently large depending on \(L-R\) and \((T(L-R))^{1/3}\leq u\leq T(L-R)\),
\[\mathbb{P}\left[\left|H^{(b_{1},b_{2})}(x,y)-b(1-b)T(R-L)+bX\right|>u\right] \leq C\mathrm{e}^{-cu^{3/2}/(T(L-R))^{1/2}}+C\mathrm{e}^{-cu^{2}/(1+|X-x_{1}|)} \tag{7.7}\]
where \(x_{1}:=T(R-L)(1-2b)\). From Corollary 7.2, we conclude the upper bound of Theorem 2.4, since \(H^{(b_{1},b_{2}}(x,y)\) converges to the current of the ASEP with Bernoulli \(b\) initial condition. The lower bound follows in a similar manner.
### Second class particles for the ASEP
We first give a convergence result of the second class particle in the stochastic six vertex model to the second class particle in the ASEP.
**Proposition 7.3**.: _Fix \(L,R>0\). Let \(\xi\) be a stochastic six vertex model with parameters \(\delta_{1}=\varepsilon L\) and \(\varepsilon R\) with doubly sided \((b_{1},b_{2})\) Bernoulli initial data except there is no arrow incoming from \(v_{0}:=(0,1)\). We let \(b_{2}\in(0,1)\) and \(b_{1}\) depending on \(\varepsilon>0\) so that (2.3) holds. Let \(P(t)\) be the location of a second class particle started at \(v_{0}\), and \(Q(t)=P(t)-t\). Then, for any finite set \(S\subseteq\mathbb{Z}\) we have,_
\[\lim_{\varepsilon\to 0}\mathbb{P}\left[Q(\lfloor\varepsilon^{-1}t\rfloor)\in S \right]=\mathbb{P}\left[\tilde{Q}(t)\in S\right] \tag{7.8}\]
_where \(\tilde{Q}\) is a second class particle in the ASEP started from \(0\) with outerwise Bernoulli \(b_{2}\) initial data. As a consequence,_
\[\lim_{\varepsilon\to 0}\mathbb{P}\left[Q([\varepsilon^{-1}t])>u\right]=\mathbb{P} \left[\tilde{Q}(t)>u\right] \tag{7.9}\]
_for all \(u\in\mathbb{Z}\)._
The proof of the above is not hard given the concrete nature of the proof of convergence of the stochastic six vertex model to the ASEP in [1]. We discuss some details in Appendix B.
Given the above convergence result, Theorem 2.5 is deduced from Propositions 5.3 and 5.7 in the same way that Theorem 2.4 was deduced from Theorem 2.3.
## 8 Step initial data
Recall that we say that a S6V model has step initial condition if no arrows enter along the \(y\)-axis and every vertex along the \(x\)-axis has an incoming arrow. Recall the notation (2.16) for \(\sigma(x,y)\) and \(\mathcal{H}(x,y)\). The estimate (2.19) follows immediately from the following proposition, and (2.20) follows from (2.19) by convergence of the S6V to the ASEP, which proves Theorem 2.7.
**Proposition 8.1**.: _Let \(H(x,y)\) be the height function of the S6V with step initial condition, and \(1>\delta_{1}>\delta_{2}>0\). Suppose there is an \(\frac{1}{2}>\mathfrak{a}>0\) so that,_
\[\kappa+\mathfrak{a}(1-\kappa)<\frac{y}{x}\leq\frac{1-(1-\kappa)\mathfrak{a}}{ \kappa},\qquad\kappa>\mathfrak{a}. \tag{8.1}\]
_For any \(0<u<cy(1-\kappa)\) we have,_
\[\mathbb{P}\left[H(x,y)>\mathcal{H}(x,y)+u\right]\leq\exp\left(-\frac{4}{3} \frac{u^{3/2}}{\sigma(x,y)^{3/2}}+C\frac{u^{2}}{y(1-\kappa)}\right) \tag{8.2}\]
**Proof.** In the basic coupling we have \(H(x,y)\leq H^{(a_{1},a_{2})}(x,y)\) for any \(0<a_{i}<1\). Therefore,
\[\mathbb{P}\left[H(x,y)>u\right]\leq\mathrm{e}^{-\varepsilon u}\mathbb{E} \left[\mathrm{e}^{\varepsilon H^{(a_{1},a_{2})}(x,y)}\right] \tag{8.3}\]
We choose \(a_{i}\) such that \(\alpha_{1}\mathrm{e}^{\varepsilon}=\kappa\alpha_{2}\). Applying Lemma 3.2 and (5.10) we find,
\[\log\mathbb{E}\mathrm{e}^{\varepsilon H^{(a_{1},a_{2})}(x,y)} =\varepsilon\left(y\frac{\kappa\alpha_{2}}{1+\kappa\alpha_{2}}-x \frac{\alpha_{2}}{1+\alpha_{2}}\right)\] \[+\frac{\varepsilon^{2}}{2}\left(-y\frac{\kappa\alpha_{2}}{(1+ \kappa\alpha_{2})^{2}}+x\frac{\alpha_{2}}{(1+\alpha_{2})^{2}}\right)\] \[+\frac{\varepsilon^{3}}{6}\left(y\frac{\kappa\alpha_{2}(1-\kappa \alpha_{2})}{(1+\kappa\alpha_{2})^{3}}-x\frac{\alpha_{2}(1-\alpha_{2})}{(1+ \alpha_{2})^{3}}\right)+\mathcal{O}(\varepsilon^{4}y(1-\kappa)). \tag{8.4}\]
using that \(|y-x|\leq C(1-\kappa)y\), which is a consequence of (8.1). Let, \(\alpha_{*}\) and \(\alpha_{m}\) be the (unique) solutions to,
\[\frac{y\kappa}{x}=\frac{(1+\mathrm{e}^{-\varepsilon}\alpha_{m}\kappa)(1+ \alpha_{m}\kappa)}{(1+\mathrm{e}^{-\varepsilon}\alpha_{m})(1+\alpha_{m})}, \qquad\frac{y\kappa}{x}=\left(\frac{1+\alpha_{*}\kappa}{1+\alpha_{*}}\right) ^{2}. \tag{8.5}\]
Due to (8.1), these solutions are bounded above and away from \(0\) and we have \(\alpha_{m}-\alpha_{*}=\frac{\varepsilon}{2}\alpha_{*}+\mathcal{O}(\varepsilon^{2})\). We then choose \(\alpha_{2}=\alpha_{m}\) in (8.4) and then expand the resulting expression around \(\alpha_{*}\). After this somewhat tedious calculation, one obtains
\[\log\left(\mathbb{P}\left[H(x,y)-\mathcal{H}(x,y)>u\right]\right)\leq- \varepsilon u+\sigma(x,y)^{3}\frac{\varepsilon^{3}}{12}+C\varepsilon^{4}y(1- \kappa). \tag{8.6}\]
The claim follows after optimizing the first two terms over \(\varepsilon\).
## Appendix A Biased random walk estimate
In this section we establish a discrete time version of Lemma 4.1 of [11], which is an estimate for a certain biased random walk.
The set-up is as follows. We will consider a random walk \(Z(n)\) on the integers \(\mathbb{Z}\). There is a fixed deterministic function \(c(x,n):\mathbb{Z}^{2}\to\{0,1\}\) with the property that for all \(n\) and \(x\), it is never the case that \(c(x,n)=c(x+1,n)=1\). The function \(c(x,n)\) determines whether or not a jump is possible at time \(n\) from site \(x\) to \(x+1\) or from \(x+1\) to \(x\).
With \(c(x,n)\) fixed, \(Z(n)\) evolves as follows. Let \(1>\delta_{1}>\delta_{2}>0\). If at time \(n\) we have \(Z(n)=x\) and \(c(x,n)=c(x-1,n)=0\) then we set \(Z(n+1)=x\). If \(c(x,n)=1\) (and so necessarily \(c(x-1,n)=0\) by assumption), then we set \(Z(n+1)=x+1\) with probability \(\delta_{1}\) and \(Z(n+1)=x\) otherwise. If \(c(x-1,n)=1\) then set \(Z(n+1)=x-1\) with probability \(\delta_{2}\) and \(Z(n+1)=x\) otherwise.
**Lemma A.1**.: _Let \(Z(n)\) and \(1>\delta_{1}>\delta_{2}>0\) be as above and assume \(\delta_{2}<0.5\). Assume \(Z(0)=0\). Let \(\theta:=\frac{\delta_{1}\wedge 0.5-\delta_{2}}{\delta_{1}\wedge 0.5+\delta_{2}}>0.\) Then, for all integers \(k\geq 0\),_
\[\mathbb{P}\left[Z(n)\leq-k\right]\leq\mathrm{e}^{-\theta k}.\] (A.1)
**Proof.** We couple \(Z(n)\) to another random walk \(Y(n)\) such that \(Y(n)\leq Z(n)\) for all \(n\). Let \(Y(0)\) be distributed according to any distribution on \(\mathbb{Z}\cap(-\infty,0]\). Let \(\delta:=\delta_{1}\wedge 0.5\). We will construct \(Y(n)\) so that its transition probabilities are the same as \(Z(n)\) described above, except we replace \(\delta_{1}\) by \(\delta\), and jumps from the site \(0\) to the site \(1\) are suppressed (that is, it jumps to the right with probability \(\delta\) and to the left with probability \(\delta_{2}\), and it only jumps across the edge \((x,x+1)\) if \(c(x,n)=1\)). We couple these walks as follows.
First, if \(Y(n)\) and \(Z(n)\) are not at the same or adjacent sites, then allow them to jump following the dynamics described above for \(Z(n)\), except if \(Y(n)\) tries to jump from \(0\) to \(1\), we just \(Y(n+1)=Y(n)=0\). If they are at the same site \(x\) and \(c(x,n)=c(x-1,n)=0\) then set them both equal to \(x\) at time \(n+1\). For the other cases:
* If they are at the same site \(x\) and \(c(x-1,n)=1\), then have them both jump to \(x-1\) with probability \(\delta_{2}\) and stay put with probability \(1-\delta_{2}\).
* If they are at the same site \(x<0\) and \(c(x,n)=1\), then: allow them both to jump to site \(x+1\) with probability \(\delta\); let \(Y(n+1)=x\) and \(Z(n+1)=x+1\) with probability \(\delta_{1}-\delta\); and let them both stay at the site \(x\) with probability \(1-\delta_{1}\).
* If they are at site \(0\) and \(c(0,n)=1\), then allow \(Z(n)\) to evolve as usual but set \(Y(n+1)=0\) (recall necessarily that \(c(-1,n)=0\)).
* If they are such that \(Y(n)=Z(n)-1=x-1\) and \(c(x-1,n)=0\) allow them to evolve as usual, as \(Y(n)\) may be allowed to jump left if \(c(x-2,n)=1\) and \(Z(n)\) may be allowed to jump right if \(c(x,n)=1\).
* If they are such that \(Y(n)=Z(n)-1=x-1\) and \(c(x-1,n)=1\), and \(x\leq 0\) then: let \(Y(n+1)=Z(n+1)=x-1\) with probability \(\delta_{2}\); let \(Y(n+1)=x-1\) and \(Z(n+1)=x\) with probability \(1-\delta-\delta_{2}>0\); let \(Y(n+1)=Z(n)=x\) with probability \(\delta\).
* If they are such that \(Y(n)=0\) and \(Z(n)=1\) and \(c(0,n)=1\), allow \(Z(n)\) to evolve as usual and set \(Y(n+1)=0\) (as necessarily \(c(-1,n)=0\)).
Note that by the assumption \(\delta_{2}<\frac{1}{2}\), all the probabilities specified above are nonnegative, and by induction we see that \(Y(n)\leq Z(n)\) for all \(n\) and furthermore, \(Y(n)\leq 0\) for all \(n\). Clearly the marginal probabilities of jumping coincide with claimed transition probabilities for \(Y(n)\) and \(Z(n)\).
Let \(P_{xy,n}=\mathbb{P}[Y(n+1)=y|Y(n)=x]\). Let now \(v_{x}:=(\delta/\delta_{2})^{x}\) for \(x\leq 0\). For \(x,y\leq 0\) clearly for all \(n\) we have,
\[v_{x}P_{xy,n}=v_{y}P_{yx,n},\] (A.2)
because this is equivalent to
\[v_{x}c(x,n)\delta=v_{x+1}c(x,n)\delta_{2}\] (A.3)
holding for all \(x\leq-1\) and all \(n\). Therefore \(v_{x}\) defines an invariant measure for the random walk described by \(Y(n)\). Set \(Y(0)\) to have distribution proportional to this invariant measure. Similarly to [11] we then find,
\[\mathbb{P}\left[Z(n)\leq-k\right] \leq\mathbb{P}\left[Y(n)\leq-k\right]=(\delta_{2}/\delta)^{k}\] \[=\exp\left(k\log\left(\frac{1-\theta}{1+\theta}\right)\right) \leq\mathrm{e}^{-2\theta k}\] (A.4)
which yields the claim.
## Appendix B Convergence of second class particle to ASEP
In this short appendix we discuss adapting the proof of the main results of [1] to proving convergence of the second class particle of the stochastic six vertex model to that of the ASEP. Given the direct method of proof of [1] and the definitions of the second class particle, only minor comments are required. Indeed, recall that the second class particle in both models is generated by fixing two initial distributions \(\eta_{0},\xi_{0}\) of particles such that one dominates the other, and there is a single discrepancy between the distributions. Then, we generate the same set of jump instructions for both models (the basic coupling) and allow them to evolve; the location of the discrepancy is the location of the second class particle. Given that [1] proves that the jump instructions of the stochastic six vertex model converge to those of the ASEP via the notion of time graphs, the proof is straightforward.
As discussed in Section 7.1.1, the proof of Theorem 2 of [1] is based on three Propositions, labelled 6, 7, 8. Convergence of the second class particles can be proven exactly along the same lines; by truncating the jump instructions/time graphs of both the ASEP and the stochastic
six vertex model (Propositions 6 and 7) and then showing convergence of the truncated stochastic six vertex model to the ASEP (Proposition 8).
First, in the proofs of Propositions 6 and 7 of [1], it is shown that if one considers truncated time graphs of the ASEP and stochastic six vertex model that are obtained by deleting any jump instructions involving particles outside the interval \([-M,N]\), then for any \(\varepsilon>0\), for \(M,N\) sufficiently large, the truncated processes agrees with the untruncated processes on some interval contained in \([-M/2,N/2]\). Applying this argument to our set-up shows that this statement holds simultaneously for the evolutions associated to each of the initial distributions \(\eta_{0}\) and \(\xi_{0}\). Hence, the second class particles generated by the truncated systems converge to the untruncated ones, in both the ASEP and the stochastic six vertex model, with the convergence for the second being uniform in \(\varepsilon>0\).
Finally, in Proposition 8 of [1], a further modified time graph of the truncated graph for the stochastic six vertex model is introduced. Due to the fact that this modified time graph directly converges to that of the ASEP, we can conclude that the second class particle obtained from the modified time graph of the stochastic six vertex model converges to a second class particle in the ASEP.
The remainder of the proof of Proposition 8 of [1] shows as \(\varepsilon\to 0\), the probability that the particle evolutions through the modified and truncated time graphs coincide with probability tending to \(1\). Hence; the location of the second class particle in the modified and truncated time graphs also coincide with probability tending to \(1\) as \(\varepsilon\to 0\).
## Appendix C Deterministic estimates
The following elementary estimates for some quantities are required throughout the paper.
**Lemma C.1**.: _Let \(\mathfrak{a}<\kappa<1\), for some \(\mathfrak{a}>0\). Let \(\beta>0\) and let \(y\geq 1\). Let,_
\[x_{0}=y\kappa\left(\frac{1+\kappa^{-1}\beta}{1+\beta}\right)^{2}\] (C.1)
_Let \(y\kappa<x_{1}<x_{0}\) and let \(\hat{\beta}>0\) solve,_
\[x_{1}=y\kappa\left(\frac{1+\kappa^{-1}\hat{\beta}}{1+\hat{\beta}}\right)^{2}.\] (C.2)
_There is a constant \(c_{1}>0\) so that if_
\[|x_{1}-x_{0}|\leq c_{1}y(1-\kappa)\] (C.3)
_then,_
\[\hat{\beta}\geq\frac{\beta}{2}\] (C.4)
_and if we define \(\varepsilon>0\) by \(\hat{\beta}=\mathrm{e}^{-\varepsilon}\beta\) then,_
\[\varepsilon\asymp\frac{x_{0}-x_{1}}{y(1-\kappa)},\] (C.5)
_where the implicit constants depend only on \(\beta,\mathfrak{a}\)._
**Proof.** By direct calculation,
\[\frac{\hat{\beta}}{\beta} =\frac{(\sqrt{x_{1}}-\sqrt{\kappa y})(\sqrt{y\kappa^{-1}}-\sqrt{x_{0 }})}{(\sqrt{y\kappa^{-1}}-\sqrt{x_{1}})(\sqrt{x_{0}}-\sqrt{y\kappa})}\] \[=\frac{(x_{1}-\kappa y)(y\kappa^{-1}-x_{0})}{(y\kappa^{-1}-x_{1}) (x_{0}-y\kappa)}\times\frac{(\sqrt{y\kappa^{-1}}+\sqrt{x_{1}})(\sqrt{x_{0}}+ \sqrt{y\kappa})}{(\sqrt{x_{1}}+\sqrt{y\kappa})(\sqrt{y\kappa^{-1}}+\sqrt{x_{0 }})}.\] (C.6)
The second factor of the second line of (C) is clearly \(1+\mathcal{O}(|x_{0}-x_{1}|y^{-1})\). We have,
\[x_{0}-\kappa y\asymp\kappa^{-1}y-x_{0}\asymp y(1-\kappa).\] (C.7)
Therefore, if we take \(c_{1}>0\) sufficiently small in (C) we see that \(\hat{\beta}\geq\beta/2\) as the first factor of the second line of (C) is seen to be \(1+\mathcal{O}(|x_{0}-x_{1}|(y(1-\kappa))^{-1})\). Using now the first line of (C) we see that,
\[\frac{\hat{\beta}}{\beta} =1-(\sqrt{x_{0}}-\sqrt{x_{1}})\left(\frac{1}{\sqrt{x_{0}}-\sqrt{ \kappa y}}+\frac{1}{\sqrt{y\kappa^{-1}}-\sqrt{x_{1}}}\right)\] \[+\frac{\left(\sqrt{x_{0}}-\sqrt{x_{1}}\right)^{2}}{(\sqrt{y \kappa^{-1}}-\sqrt{x_{1}})(\sqrt{x_{0}}-\sqrt{\kappa y})}.\] (C.8)
From (C.7) and the fact that \(x_{0}\asymp x_{1}\asymp y\) we deduce, by taking \(c_{1}>0\) smaller if necessary,
\[\sqrt{y\kappa^{-1}}-\sqrt{x_{1}}\asymp\sqrt{x_{0}}-\sqrt{\kappa y}\asymp(1- \kappa)y^{1/2}\] (C.9)
and \(\sqrt{x_{0}}-\sqrt{x_{1}}\asymp(x_{0}-x_{1})y^{-1/2}\). Therefore, taking \(c_{1}>0\) smaller if necessary we see that,
\[1-\frac{\hat{\beta}}{\beta}\asymp\frac{x_{0}-x_{1}}{y(1-\kappa)}.\] (C.10)
The claim now follows.
**Acknowledgements.** The work of B.L. is partially supported by NSERC and a Connaught New Researcher Award. The work of P.S. is partially supported by NSF grants DMS-1811093 and DMS-2154090. B.L. thanks Amol Aggarwal for useful discussions. The authors thank Ivan Corwin for suggesting this problem.
|
2309.08905 | Spin dependence in the $p$-wave resonance of
${^{139}\vec{\rm{La}}+\vec{n}}$ | We measured the spin dependence in a neutron-induced $p$-wave resonance by
using a polarized epithermal neutron beam and a polarized nuclear target. Our
study focuses on the 0.75~eV $p$-wave resonance state of $^{139}$La+$n$, where
largely enhanced parity violation has been observed. We determined the partial
neutron width of the $p$-wave resonance by measuring the spin dependence of the
neutron absorption cross section between polarized $^{139}\rm{La}$ and
polarized neutrons. Our findings serve as a foundation for the quantitative
study of the enhancement effect of the discrete symmetry violations caused by
mixing between partial amplitudes in the compound nuclei. | T. Okudaira, R. Nakabe, S. Endo, H. Fujioka, V. Gudkov, I. Ide, T. Ino, M. Ishikado, W. Kambara, S. Kawamura, R. Kobayashi, M. Kitaguchi, T. Okamura, T. Oku, J. G. Otero Munoz, J. D. Parker, K. Sakai, T. Shima, H. M. Shimizu, T. Shinohara, W. M. Snow, S. Takada, Y. Tsuchikawa, R. Takahashi, S. Takahashi, H. Yoshikawa, T. Yoshioka | 2023-09-16T07:13:15Z | http://arxiv.org/abs/2309.08905v1 | # Spin dependence in the \(p\)-wave resonance of \({}^{139}\overline{\text{La}}+\overline{n}\)
###### Abstract
We measured the spin dependence in a neutron-induced \(p\)-wave resonance by using a polarized epithermal neutron beam and a polarized nuclear target. Our study focuses on the 0.75 eV \(p\)-wave resonance state of \({}^{139}\)La+\(n\), where largely enhanced parity violation has been observed. We determined the partial neutron width of the \(p\)-wave resonance by measuring the spin dependence of the neutron absorption cross section between polarized \({}^{139}\)La and polarized neutrons. Our findings serve as a foundation for the quantitative study of the enhancement effect of the discrete symmetry violations caused by mixing between partial amplitudes in the compound nuclei.
neutron induced compound nuclei, polarized epithermal neutrons, polarized nuclear target pacs: 12.30.-k, 12.30.-k, 12.30.-k, 12.30.-k, 12.30.-k
## I Introduction
The spin dependence of the strong interaction between a neutron and a nucleus can lead to a spin-dependent cross section proportional to \(\mathbf{\sigma}\cdot\mathbf{I}\), where \(\mathbf{\sigma}\) and \(\mathbf{I}\) are unit vectors parallel to the spins of the neutron and the nucleus, respectively. This spin-dependent cross section can be observed through a spin-dependent transmission through a polarized nuclear target. At a neutron-nucleus resonance, this observable can directly determine the spin of compound resonance states. Consequently, it has been employed in measuring \(s\)-wave resonances for a select few nuclides, utilizing both a polarized neutron beam and a polarized target [1; 2; 3]. In the case of \(p\)-wave resonances, the spin-dependent cross section imparts valuable information not only regarding the spin of the resonance but also the partial neutron widths.
These widths enable the exploration of symmetry violation enhancement effects in the compound nucleus. Enhancements in parity violation, exceeding magnitudes of nucleon-nucleon interactions by a factor of \(10^{6}\), have been observed in \(p\)-wave resonances of nuclei with a mass number of medium-heavy or heavy nuclei [4]. These enhancements can be understood as a result of parity mixing between \(s\)- and \(p\)- compound nuclear resonances. This is referred to as the \(s\)-\(p\) mixing model [4; 5; 6]. Theory suggests that this mechanism can lead to an enhancement of fundamental time reversal violation, which could be utilized to search for beyond the Standard Model physics by measuring a T-odd cross section at the \(p\)-wave resonance using a polarized target and a polarized neutron beam [7]. We can quantify the enhancement factors associated with both P- and T-violations by determining the partial neutron width [8; 9; 10; 11; 12; 13; 14; 15].
This paper presents the first measurement of the spin-dependent cross section at the \(p\)-wave compound resonance, employing a polarized epithermal neutron beam and a polarized nuclear target. As our target nucleus, we selected \({}^{139}\)La, which displays an exceedingly large enhanced parity violation at the 0.75 eV \(p\)-wave resonance [16].
## II Experiment
### Experimental setup
The experiment was performed with a pulsed epithermal neutron beam at the RADEN beamline of the Material and Life Science Experimental Facility (MLF) at the Japan Proton Accelerator Research Complex (JPARC) [17]. The experimental setup is depicted in Fig. 1. The La target is placed 23.0 m from the moderator surface. A 2.0 cm cubic lanthanum metal cooled with a dilution refrigerator was used as the target. A 6.8 Tesla transverse magnetic field was applied using a superconducting magnet to polarize the target nuclei. The neutron beam, collimated to a 3 cm by 3 cm size, was stripped of thermal neutrons using a cadmium filter upstream of the beamline to reduce the heat load on the
La target induced by the neutron beam. The beam was polarized with a neutron polarizer using polarized \({}^{3}\)He gas (\({}^{3}\)He spin filter), located 4.3 m upstream of the polarized target. The \({}^{3}\)He spin filter was polarized using the spin exchange optical pumping method (SEOP) with a 110 W laser system constructed outside of the beamline and then installed on the beamline with a coil and a double magnetic shield to maintain the \({}^{3}\)He polarization [18]. The \({}^{3}\)He cell was 45 mm in diameter by 70 mm in length and the pressure was 0.31 MPa. The neutron beam, longitudinally polarized by the \({}^{3}\)He spin filter, was guided using a guide magnet. The spin direction was adiabatically rotated to the transverse direction utilizing the stray magnetic field of the superconducting magnet. The neutron spin was flipped every 30 minutes by flipping the spin of \({}^{3}\)He gas using adiabatic fast passage (AFP) NMR. The loss of the \({}^{3}\)He polarization was \(4\times 10^{-5}\) per flip, which was negligibly small. Transmitted neutrons were recorded in list mode using a 256-pixel lithium glass scintillator detector located at 24.71 m from the moderator surface [19]. Downstream of the La target, another collimator was installed to reduce the beam divergence. The neutron energy \(E_{n}\) was determined using the neutron time-of-flight (TOF) and the flight path length. The proton beam power was 750 kW during the experiment.
Figure 2 illustrates the configuration around the La target. The La target was held in place between upper and lower copper holders, fastened using copper screws. The upper holder was connected to the cold head of the dilution refrigerator, enabling cooling of the La target through thermal conduction. The temperature \(T\) was monitored with a ruthenium oxide thermometer installed in the cold head. We performed the experiment in two conditions: (a) low temperature condition (\(T=67\) mK ) and (b) high temperature condition (\(T=1\) K). The measurement times were 22 hours and 6 hours, respectively. In the condition (a), the temperature increase by the beam irradiation to the La target was approximately 1 mK, indicating that the temperature difference between the cold head and the La target can be considered negligible. The temperature fluctuation was also around 1 mK, which was caused by beam interruptions in the accelerator due to malfunctions.
### Measurement of the asymmetry
The cross sections for parallel and antiparallel polarized neutron and nucleus can be written with the spin-independent cross section \(\sigma_{0}\) and spin-dependent cross section \(\sigma_{\rm S}\) as
\[\sigma_{\pm}=\sigma_{0}\pm\sigma_{\rm S}, \tag{1}\]
where \(+\) and \(-\) denote the parallel and antiparallel spins, respectively. The asymmetry of neutron counts for parallel and anti-parallel spins transmitted through the polarized lanthanum target, defined as
\[\varepsilon_{\rm S}=\frac{N_{-}-N_{+}}{N_{-}+N_{+}}, \tag{2}\]
where \(N_{-}\) and \(N_{+}\) are the neutron counts for parallel and anti-parallel spins, was measured. The neutron counts \(N_{\pm}\) are expressed using the neutron polarization \(P_{n}\) and nuclear vector polarization \(P_{I}\) as
\[N_{\pm}=\frac{1\pm P_{n}}{2}N\epsilon\exp\left((\sigma_{0}\pm P_{I}\sigma_{\rm S })\rho d\right), \tag{3}\]
where \(N\), \(\epsilon\), \(\rho\), and \(d\) are number of incident neutrons, detection efficiency of a neutron detector, number density of the nuclear target, and the thickness of the nuclear target, respectively. The spin-dependent asymmetry \(\varepsilon_{\rm S}\) can be described using Eq.3 as
\[\varepsilon_{\rm S}=P_{n}\tanh\left(P_{I}\sigma_{\rm S}\rho d\right). \tag{4}\]
In this paper, the measurement and analysis were performed using resonance parameters of La+n reactions listed in Table 1, which were recently measured by Endo _et al._[20] using both neutron transmission and (\(n\), \(\gamma\)) reaction with an intense pulsed neutron beam at J-PARC.
Figure 1: Experimental setup.
Figure 2: Configuration around the La target. The thermometer was installed in the cold head.
Figure 3 shows the TOF spectra of the transmitted neutrons and asymmetry \(\varepsilon_{\rm S}\) in conditions (a) and (b). We observed a significant asymmetry in condition (a), corresponding to a high nuclear polarization, while the asymmetry disappeared in condition (b) due to the lower nuclear polarization. The peak and dip structures were observed at the 2.99 eV and 0.75 eV resonances. The global structure observed less than 0.3 ms is attributed to the spin-dependent cross section of the negative \(s\)-wave resonance.
### Neutron polarization
The neutron polarization was obtained using the \({}^{3}\)He polarization of the \({}^{3}\)He spin filter. The \({}^{3}\)He polarization was determined with the ratio of the transmitted neutrons for polarized and unpolarized \({}^{3}\)He spin filter. The ratio of the transmitted neutrons is described as
\[\frac{N_{\rm pol}}{N_{\rm unpol}}=\cosh(P_{\rm He}(t)\rho_{\rm He}d_{\rm He} \sigma_{\rm He}), \tag{5}\]
where, \(P_{\rm He}\), \(\sigma_{\rm He}\), and \(\rho_{\rm He}d_{\rm He}\) are the \({}^{3}\)He polarization, neutron absorption cross section of \({}^{3}\)He, and areal density of \({}^{3}\)He gas, respectively. Here, \(N_{\rm pol}\) is defined as \(N_{+}+N_{-}\) for cancelling the spin-dependent asymmetry derived from the polarization of the La target. The areal density \(\rho_{\rm He}d_{\rm He}\) was obtained from the measurement of the ratio of transmitted neutrons for unpolarized \({}^{3}\)He spin filter and empty glass cell as 21.4 atm\(\cdot\)cm. The \({}^{3}\)He polarization was obtained for each flip by fitting the TOF dependence of \(N_{\rm pol}/N_{\rm unpol}\) using Eq. 5 with a fit parameter of \(P_{\rm He}\) as shown in Fig 4. Figure 5 shows the time dependence of the \({}^{3}\)He polarization. The relaxation time of the \({}^{3}\)He polarization \(\tau\), which was obtained by fitting with \(P_{\rm He}(t)=P_{\rm He}(0)\exp(-t/\tau)\), was 161 h. The averaged \({}^{3}\)He polarization \(\bar{P}_{\rm He}\) during the measurement was \((68\pm 1)\%\).
The neutron polarization \(P_{n}\) transmitted through the \({}^{3}\)He spin filter is determined as
\[P_{n}(t)=-\tanh(P_{\rm He}(t)\rho_{\rm He}d_{\rm He}\sigma_{\rm He}). \tag{6}\]
Figure 6 shows an averaged neutron polarization \(\bar{P}_{n}\) as a function of the neutron energy calculated from the averaged \({}^{3}\)He polarization. The averaged neutron polarization at 0.75 eV was \((36.1\pm 0.5)\%\).
### Nuclear polarization determined by spin-dependent asymmetry
The \({}^{139}\)La nuclear polarization was determined utilizing the spin-dependent asymmetry at the 2.99 eV \(s\)-wave resonance of \({}^{138}\)La. The spin-dependent asymmetry at the 2.99 eV resonance, after subtracting of the negative \(s\)-wave component, was obtained as
\[\varepsilon_{\rm S}=(5.1\pm 0.7)\times 10^{-4}. \tag{7}\]
The spin-dependent cross section of the 2.99 eV resonance \(\sigma_{\rm S,s}^{\rm thoe}\) can be theoretically described using the resonance parameters listed in Table 1 as
\[\sigma_{\rm S,s}^{\rm thoe}=\frac{5\pi}{11k^{2}}\frac{\Gamma_{s}^{n}\Gamma_{s} }{(E-E_{s})^{2}+(\Gamma_{s}/2)^{2}}, \tag{8}\]
where \(E_{s}\), \(\Gamma_{s}^{n}\), and \(\Gamma_{s}\) are the resonance energy, neutron width, and total width of the 2.99 eV s-wave resonance, respectively. The nuclear polarization of \({}^{138}\)La can be determined using Eqs. 4, 7 and 8, taking into account its natural abundance and the neutron polarization at 2.99 eV, yielding a value of 4.9\(\pm\)0.7%. The target temperature \(T_{\rm La}\) was calculated based on a Boltzmann distribution and using the magnetic moment and nuclear spin listed in Table 2, resulting in \(T_{\rm La}=75.7^{+10.2}_{-8.9}\) mK, which is consistent with the temperature measured at the cold head of 67 mK. Under the assumption that the spin temperature of \({}^{139}\)La is the same as that of \({}^{138}\)La, the corresponding \({}^{139}\)La nuclear polarization was determined to be \(3.9\pm 0.5\)%.
### Spin-dependent cross sections at the resonances
The experimental value of the spin-dependent cross section \(\sigma_{\rm S}^{\rm exp}\) was obtained from the asymmetry \(\varepsilon_{\rm S}\) using Eq. 4. The resonance component \(\sigma_{\rm S,r}^{\rm exp}\) was isolated by fitting the global structure attributed to the negative \(s\)-wave component with a third order polynomial function. The resonance regions listed in the Table 1 are excluded from the fitting. Figure 7 shows the TOF dependence of \(P_{I}\sigma_{\rm S}^{\rm exp}\) and \(P_{I}\sigma_{\rm S,r}^{\rm exp}\). Note that Fig. 7 was calculated using the areal density of \({}^{139}\)La. A \(p\)-value, defined as \(p=(1-{\rm C.L.})/2\), where C.L. is the confidence level of the non-zero asymmetry, is also depicted to show the significance of \(P_{I}\sigma_{\rm S}^{\rm exp}\) in Fig. 7. The \(p\)-value indicates the
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline Isotope & \(I^{\prime\prime}\) & Abundance & \(\mu_{0}\) \\ \hline \({}^{139}\)La & 7/2\({}^{+}\) & 99.91\% & 2.78 \\ \({}^{138}\)La & 5\({}^{+}\) & 0.09\% & 3.71 \\ \hline \end{tabular}
\end{table}
Table 2: Parameters of lanthanum isotopes. Nuclear spin and parity \(I^{P}\), natural abundance, and nuclear magnetic moment \(\mu_{0}\) are listed. The unit of the nuclear magnetic moment is nuclear magneton.
Figure 4: Ratio of the counts of transmitted neutrons for the polarized \({}^{3}\)He spin filter. The curved line shows the best fit.
Figure 5: \({}^{3}\)He polarization versus elapsed time from the beginning of the measurement. The curved line shows the fit result by an exponential function. The measurement was not conducted from 16 h to 22 h due to a liquid He transfer for the superconducting magnet.
Figure 6: Neutron polarization obtained from the averaged \({}^{3}\)He polarization.
probability to observe a non-zero value of \(P_{I}\sigma_{{\rm S},r}^{\rm exp}\) in the hypothesis of no asymmetry. A confidence level of over 99.7% corresponds to a \(p\)-value less than \(1.35\times 10^{-3}\). The spin-dependence cross section was first observed at the \(p\)-wave resonance with over 99.7% C.L. as shown in Fig 7.
The spin-dependent cross section in the \(p\)-wave resonance region of \(E_{p}-3\Gamma_{p}<E_{n}<E_{p}+3\Gamma_{p}\) after the subtraction of the negative \(s\)-wave component, defined as \(\sigma_{{\rm S},p}^{\rm exp}\), is obtained using the nuclear polarization in Section II.4 as
\[\sigma_{{\rm S},p}^{\rm exp}=-0.26\pm 0.08\ {\rm barn}, \tag{9}\]
where \(E_{p}\) and \(\Gamma_{p}\) are the resonance energy and total width of the \(p\)-wave resonance, shown in Table 1. Here, the total width is defined as \(\Gamma_{p}=\Gamma_{p}^{\gamma}+\Gamma_{p}^{n}\). The asymmetry of the spin-dependent cross section relative to the spin-independent cross section of the \(p\)-wave component was also obtained as
\[A_{\rm S} = \frac{\sigma_{+}^{p}-\sigma_{-}^{p}}{\sigma_{+}^{p}+\sigma_{-}^{ p}}=\frac{\sigma_{{\rm S},p}^{\rm exp}}{\sigma_{0,p}^{\rm theo}} \tag{10}\] \[= -0.36\pm 0.11.\]
The spin-independent cross section \(\sigma_{0,p}^{\rm theo}\) was theoretically calculated with a Breit-Wigner formula, defined as,
\[\sigma_{0,p}^{\rm theo}=\frac{9\pi}{16k^{2}}\frac{\Gamma_{p}^{n}\Gamma_{p}}{(E- E_{p})^{2}+(\Gamma_{p}/2)^{2}}. \tag{11}\]
When using the nuclear polarization calculated from the temperature measured at the cold head, the differences of \(\sigma_{{\rm S},p}^{\rm exp}\) and \(A_{\rm S}\) from the values in Eq. 9 and Eq. 10 were +0.03 barn and +0.04, respectively. These differences were smaller than the statistical error.
## III Analysis
Under the experimental conditions, the spin-dependent assymetry can be approximated as
\[\varepsilon_{\rm S}\simeq P_{I}P_{n}\rho d\frac{4\pi}{k}{\rm Im}B^{\prime}, \tag{12}\]
as described in the Appendix A, where \(B^{\prime}\) is the coefficient in Eq.(10) in Ref. [22] representing the spin-spin interaction in the forward angle scattering amplitude. The following subsections will discuss the implications of the experimental results to the partial neutron width of the \(p\)-wave resonance and the spins of the \(s\)-wave resonances.
### Determination of partial neutron width using spin-dependent cross section
The partial neutron width can also be extracted from the angular correlations of \(\gamma\)-rays emitted from \(p\)-wave resonances, which arise from interference between \(s\)- and \(p\)-wave amplitudes [10; 11; 12; 13; 14]. The advantage of using the spin-dependent cross section is that the neutron partial width can be directly determined without assuming the interference between partial amplitudes and the final state spin after the \(\gamma\) decay.
The spin-dependent cross section at the \(p\)-wave resonance can be calculated using the explicit theoretical
Figure 7: (a) TOF dependence of \(P_{I}\sigma_{\rm S}^{\rm exp}\). The curved line is the best fit of the global structure derived from the negative \(s\)-wave resonance. (b) Resonance component of spin-dependent cross section. (c) \(p\)-value for \(P_{I}\sigma_{{\rm S},r}^{\rm exp}\). The dotted line shows 99.7% confidence level.
expression of \(B^{\prime}\) as [22]
\[\sigma_{\mathrm{S,}p}^{\mathrm{theo}} = \frac{4\pi}{k}\mathrm{Im}B^{\prime}=\frac{\pi}{16k^{2}}\frac{ \Gamma_{p}^{n}\Gamma_{p}}{(E-E_{p})^{2}+(\Gamma_{p}/2)^{2}} \tag{13}\] \[\times\left(-\frac{39}{4}x_{\mathrm{s}}^{2}+\frac{9}{2}\sqrt{ \frac{7}{5}}x_{\mathrm{s}}y_{\mathrm{s}}+\frac{63}{20}y_{\mathrm{s}}^{2} \right),\]
where \(x_{s}\) and \(y_{s}\) are ratios of the neutron partial width of the channel spin, defined as
\[x_{\mathrm{s}} = \frac{1}{2\sqrt{3}}(-\sqrt{7}x-\sqrt{5}y) \tag{14}\] \[y_{\mathrm{s}} = \frac{1}{2\sqrt{3}}(\sqrt{5}x-\sqrt{7}y). \tag{15}\]
The neutron partial widths of the neutron total angular momentum \(j=1/2\) and \(3/2\) components, denoted as \(\Gamma_{p,j=1/2}^{n}\) and \(\Gamma_{p,j=3/2}^{n}\), are expressed by \(x\) and \(y\) defined as
\[x^{2}=\frac{\Gamma_{p,j=1/2}^{n}}{\Gamma_{p}^{n}},\;\;y^{2}=\frac{\Gamma_{p,j= 3/2}^{n}}{\Gamma_{p}^{n}}, \tag{16}\]
where \(x\) and \(y\) satisfy \(x^{2}+y^{2}=1\). The corresponding mixing angle \(\phi\) can be defined as
\[x=\cos\phi,\;y=\sin\phi, \tag{17}\]
as discussed in Ref [22]. The broadening effect by the pulse shape of the neutron beam at 0.75 eV was negligibly small compared with the total width of the \(p\)-wave resonance and the statistical error, and therefore, the spin-dependent cross section obtained in Eq. 9 can be directly compared with the theoretical calculation. By calculating the Breit-Wigner function over the region \(E_{p}-3\Gamma_{p}<E_{n}<E_{p}+3\Gamma_{p}\) in Eq. 13, we obtained the following equation.
\[-0.26\pm 0.08=0.079\left(-7x^{2}-2\sqrt{35}xy+\frac{2}{5}y^{2}\right) \tag{18}\]
Using Eq. 17 and Eq. 18, we find the solutions for \(\phi\) as
\[\phi= (74\pm 4)^{\circ},\;(164\pm 4)^{\circ}, \tag{19}\] \[(254\pm 4)^{\circ},\;(344\pm 4)^{\circ}.\]
The corresponding \(x\) and \(y\) values are also obtained as
\[(x,y)= (0.28\pm 0.06,\;0.96\pm 0.02), \tag{20}\] \[(-0.96\pm 0.02,\;0.28\pm 0.06),\] \[(-0.28\pm 0.06,\;-0.96\pm 0.02),\] \[(0.96\pm 0.02,\;-0.28\pm 0.06).\]
The visualization of \(\phi\) is shown in Fig. 8. Equation 18 is described as the curved line in the \(xy\) plane. The intersections of the curved lines and unit circle show the solutions of \(\phi\).
The above analysis was also performed using the resonance parameters reported by other groups in Appendix B. The differences in the analysis results that arose from differences in the resonance parameters were within the statistical error. We confirmed that these differences stemming from the resonance parameters do not affect the conclusions of this paper.
### Spin of \(s\)-wave resonances
For the \(s\)-wave resonances, the spin \(J\) can be directly determined from the asymmetry. The positive (negative) sign of the asymmetry indicates that neutrons with parallel (anti-parallel) spin are likely to be absorbed by nuclei. The sign of the asymmetry in Fig. 7 (a) and (b) implies: \(J=4\) for the negative \(s\)-wave resonance of \({}^{139}\)La, whose spin is 7/2; \(J=11/2\) for the 2.99 eV \(s\)-wave resonance of \({}^{138}\)La, whose spin is 5; and \(J=3\) for the 72.3 eV \(s\)-wave resonance of \({}^{139}\)La. The spins of the \(s\)-wave resonances determined in this experiment are consistent with the reference values in Table 1.
## IV Conclusion
We observed the spin-dependent cross section at the 0.75 eV \(p\)-wave of \({}^{139}\)La+\(n\) using a polarized lanthanum target and a polarized pulsed neutron beam. The partial neutron width of the \(p\)-wave resonance was determined. In a separate paper, these results will be compared with other experimental results of (\(n\),\(\gamma\)) reactions [10; 11; 12; 13; 14; 15] in terms of the \(s\)-\(p\) mixing model and will be used for improving a quantitative understanding the symmetry violation enhancement mechanism.
Figure 8: Visualization of the \(\phi\) values on the \(xy\) plane. The curved lines, shaded areas, and dashed lines show Eq. 18 and its \(1\sigma\) region of the statistic error.
###### Acknowledgements.
The authors would like to thank the staff of beamline 22 for the maintenance, the low temperature sample environment team for the operation of the superconducting magnet and the dilution refrigerator, and MLF and J-PARC for operating the accelerators and the neutron production target. T. Okudaira would like to especially thank S. Ohira-Kawamura and M. Matsuura for their assistance designing the La holder. The neutron scattering experiment was approved by the Neutron Scattering Program Advisory Committee of IMSS and KEK (Proposals Nos. 2018S12). The neutron experiment at the Materials and Life Science Experimental Facility of the J-PARC was performed under a user program (Proposal No. 2022A0101). This work was supported by JSPS KAKENHI Grant Nos. 20K14495, 23K13122, JST SPRING Grant No. JPMJSP2125, and the US National Science Foundation PHY-1913789 and PHY-2209481. R. Nakabe acknowledges support from the Interdisciplinary Frontier Next-Generation Researcher Program of the Tokai Higher Education and Research System. W. M. Snow acknowledges support from the Indiana University Center for Spacetime Symmetries. J. G. Otero Munoz acknowledges support from the National GEM consortium. V. Gudkov acknowledges support from the U.S. Department of Energy Office of Science, Office of Nuclear Physics program under Award No. DE-SC0020687.
## Appendix A Neutron spin behavior in the polarized target and its approximation
We employ the optical description of the neutron spin behavior in polarized target material as described in Ref. [23; 24] to describe the measured asymmetry \(\varepsilon_{\rm S}\) as
\[\varepsilon_{\rm S}=P_{n}\frac{\mathrm{Tr}(\mathfrak{S}^{\dagger}\sigma_{x} \mathfrak{S})}{\mathrm{Tr}(\mathfrak{S}^{\dagger}\mathfrak{S})}=P_{n}\frac{2 \mathrm{Re}A^{*}B+2\mathrm{Im}C^{*}D}{\left|A\right|^{2}+\left|B\right|^{2}+ \left|C\right|^{2}+\left|D\right|^{2}}. \tag{10}\]
The coefficients \(A\), \(B\), \(C\), and \(D\) are related to the forward angle scattering amplitude given in Ref. [22] as
\[A =e^{i\alpha}\cos\beta,\quad B=ie^{i\alpha}\frac{\sin\beta}{\beta }\beta_{x}\] \[C =ie^{i\alpha}\frac{\sin\beta}{\beta}\beta_{z},\quad D=ie^{i\alpha }\frac{\sin\beta}{\beta}\beta_{y} \tag{11}\]
where
\[\frac{\alpha}{Z} =A^{\prime}+P_{1}H^{\prime}(\mathbf{k}\cdot\mathbf{I})+P_{2}E^{\prime}(( \mathbf{k}\cdot\mathbf{I})^{2}-\frac{1}{3})\] \[\frac{\beta_{x}}{Z} =P_{1}B^{\prime}+\frac{\mu_{n}m_{n}}{2\pi\hbar^{2}\rho}B_{\rm ext }+P_{2}F^{\prime}(\mathbf{k}\cdot\mathbf{I})+P_{3}\frac{B_{3}^{\prime}}{3}((\mathbf{k} \cdot\mathbf{I})^{2}-1)\] \[\frac{\beta_{y}}{Z} =P_{1}D^{\prime}+P_{2}G^{\prime}(\mathbf{k}\cdot\mathbf{I})\] \[\frac{\beta_{z}}{Z} =C^{\prime}+P_{1}K^{\prime}(\mathbf{k}\cdot\mathbf{I})-P_{2}\frac{F^{ \prime}}{3}+P_{3}\frac{2B_{3}^{\prime}}{3}(\mathbf{k}\cdot\mathbf{I})\] \[\beta^{2} =\beta_{x}^{2}+\beta_{y}^{2}+\beta_{z}^{2},\quad Z=\frac{2\pi\rho d }{k}. \tag{12}\]
Here, \(B_{\rm ext}\), \(\mu_{n}\), and \(m_{n}\) denote the external magnetic field, neutron magnetic moment, and mass, respectively. \(\mathbf{k}\) and \(\mathbf{I}\) are unit vectors parallel to the neutron momentum and the nuclear spin. The \(P_{1}\), \(P_{2}\) and \(P_{3}\) represent the target nuclear polarization of 1st-rank (vector), 2nd-rank, and 3rd-rank spherical tensors, respectively, and amounts \(P_{1}=P_{I}=3.9\pm 0.5\%\), \(P_{2}=0.10^{+0.03\%}_{-0.02\%}\), \(P_{3}=(2.1\pm 1.0)\times 10^{-3}\%\) in the present experimental conditions. Non-zero values of the \((\mathbf{k}\cdot\mathbf{I})\) originate from the beam divergence up to the maximum value of \(2\times 10^{-3}\). The valuables \(A^{\prime}\)-\(G^{\prime}\) are the coefficients of correlation terms in the forward scattering amplitude for polarized \({}^{139}\)La nuclei and polarized neutrons defined as [22]:
\[f= A^{\prime}+P_{1}H^{\prime}(\mathbf{k}\cdot\mathbf{I})+P_{2}E^{\prime} \left((\mathbf{k}\cdot\mathbf{I})^{2}-\frac{1}{3}\right)\] \[+(\mathbf{\sigma}\cdot\mathbf{I})\left(P_{1}B^{\prime}+P_{2}F^{\prime}( \mathbf{k}\cdot\mathbf{I})+P_{3}\frac{B_{3}^{\prime}}{3}\left((\mathbf{k}\cdot\mathbf{I})^{2} -1\right)\right)\] \[+(\mathbf{\sigma}\cdot\mathbf{k})\left(C^{\prime}+P_{1}K^{\prime}(\mathbf{k }\cdot\mathbf{I})-P_{2}\frac{F^{\prime}}{3}+P_{3}\frac{2B_{3}^{\prime}}{3}(\mathbf{k} \cdot\mathbf{I})\right)\] \[+\mathbf{\sigma}\cdot(\mathbf{k}\times\mathbf{I})\left(P_{1}D^{\prime}+P_{2}G ^{\prime}(\mathbf{k}\cdot\mathbf{I}).\right) \tag{13}\]
The magnitude of coefficients \(H^{\prime}\), \(E^{\prime}\), \(F^{\prime}\), \(P^{\prime}_{3}\), \(K^{\prime}\) and \(G^{\prime}\) are of the same order or less than \(A^{\prime}\) and \(B^{\prime}\) on the basis of the explicit expressions in Eq. (28)-(37) of Ref. [22]. The magnitudes of P-odd, T-even term \(C^{\prime}\) and P-odd, T-odd term \(D^{\prime}\) are smaller than \(A^{\prime}\) and \(B^{\prime}\) by more than two orders of magnitudes. Consequently, the value of \(\beta\) can be approximated as \(\beta\simeq\beta_{x}\) and we obtain
\[\varepsilon_{\rm S}\simeq P_{n}\tanh(2\mathrm{Im}\beta_{x}). \tag{14}\]
The numerical value of \(\mathrm{Im}\beta_{x}\) is about \(10^{-3}\), which leads to
\[\varepsilon_{\rm S}\simeq P_{I}P_{n}\rho d\frac{4\pi}{k}\mathrm{Im}B^{\prime}. \tag{15}\]
## Appendix B Analysis using resonance parameters reported in other references
For the 0.75 eV \(p\)-wave resonance, the measurements using neutron transmission or (\(n\), \(\gamma\)) reaction have been
reported by several groups listed in Table 3[25, 26, 27, 28]. The details of each measurement of the resonance parameters are summarized in Ref. [20].
Tables 4 shows \(\sigma_{\mathrm{S,}p}^{\mathrm{exp}}\), \(A_{\mathrm{S}}\), and \(\phi\) values obtained using resonance parameters reported by each group. The central values for \(A_{\mathrm{S}}\) show agreement within an accuracy of 10% or less with the exception of that based on resonance parameters reported by Terilzzi _et al._. The central value of \(A_{\mathrm{S}}\) using this resonance parameters exhibit a difference of around 30%, which is attributed to Terilzzi _et al._'s \(g\Gamma_{n}\) being reported as approximately 30% larger than in other references. Consequently, the \(\phi\) values obtained using resonance parameters reported by Terilzzi _et al._ show difference compared to the analysis using other resonance parameters, as illustrated in Fig. 9. However, these differences remain consistent within the statistical errors obtained in the present experiment.
|
2309.03850 | Positive definite functions on semi-homogeneous trees and spherical
representations | We consider the group $\mathrm{Aut}(T)$ of isometries of a semi-homogeneous
tree $T=T_{q_+,q_-}$ with valencies $q_+ +1$ and $q_- +1$ and its two orbits
$V_+$, $V_-$ respectively. We make use of the action of $\mathrm{Aut} (T)$ to
equip the spaces of finitely supported radial functions on each of $V_\pm$ with
convolution products, hence with a notion of positive definite functions. The
$\ell^1$-functions radial around a root vertex $v_0\in V_+$ form an abelian
convolution algebra. We study its multiplicative functionals, called spherical
functions, given by eigenfunctions of the nearest-neighbor isotropic transition
operator (the Laplace operator on $T$, and determine which of them are positive
definite. Each positive definite function gives rise to a unitary
representation of $\mathrm{Aut}(T)$; in this way, we produce a series of
unitary spherical representations. For $q_+<q_-$, the representation whose
spherical function has eigenvalue 0 is square-integrable. | Massimo A. Picardello | 2023-09-07T17:07:05Z | http://arxiv.org/abs/2309.03850v1 | # Positive definite functions
###### Abstract.
We consider the group \(\mathcal{G}\) of isometries of a semi-homogeneous tree \(T=T_{q_{+},q_{-}}\) with valencies \(q_{+}+1\) and \(q_{-}+1\) and its two orbits \(V_{+}\), \(V_{-}\) respectively. We make use of the action of \(\mathcal{G}\) to equip the spaces of finitely supported radial functions on each of \(V_{\pm}\) with convolution products, hence with a notion of positive definite functions. The \(\ell^{1}\)-functions radial around a root vertex \(v_{0}\in V_{+}\) form an abelian convolution algebra. We study its multiplicative functionals, called spherical functions, given by eigenfunctions of the nearest-neighbor isotropic transition operator (the Laplace operator on \(T\), and determine which of them are positive definite. Each positive definite function gives rise to a unitary representation of \(\mathcal{G}\); in this way, we produce a series of unitary spherical representations. For \(q_{+}<q_{-}\), the representation whose spherical function has eigenvalue \(0\) is square-integrable.
Key words and phrases:Homogeneous and semi-homogeneous trees, Laplace operators, spherical functions, positive-definite functions 2020 Mathematics Subject Classification: Primary 44A12; Secondary 05C05, 43A85 Partially supported by MIUR Excellence Departments Project awarded to the Department of Mathematics, Tor Vergata University of Rome, MatModTOV
It was shown in [10] that the spectrum of \(\mu_{1}\) on \(\ell^{p}(V)\) is the closure of the set \(\{\gamma\in\mathbb{C}\colon\phi(\,\cdot\,,v_{0}\,|\,\gamma)\in\ell^{r}(V)\text{ for every }r>p\}\). Moreover, since \(\mu_{1}\) is invariant under \(\mathcal{G}\), its eigenspaces give rise to spaces of representations of \(\mathcal{G}\), called _spherical representations_. The representation at the eigenvalue \(\gamma\) is unitary if \(\gamma\) belongs to the \(\ell^{2}\)-spectrum of \(\mu_{1}\), and unitarizable if \(\phi(\,\cdot\,,v_{0}\,|\,\gamma)\) is positive definite.
A recent article [6] considers trees that are _semi-homogeneous_, i.e., with two alternating homogeneity degrees \(q_{+}\) and \(q_{-}\). The semi-homogeneous Laplace operator \(\mu_{1}\) on functions on \(V\) is again as the average operator on neighbors, and gives rise to a nearest-neighbor transition operator. Its spherical functions are obtained as generalized Poisson transforms, by means of an explicit realization of the Poisson kernel for each eigenvalue \(\gamma\), derived by an explicit computation of the first visit probability associated to the random walk generated by \(\mu_{1}\).
On the other hand, the semi-homogeneous and the homogeneous settings are considerably different. Indeed, the group \(\mathcal{G}\) has two orbits \(V_{+}\) and \(V_{-}\), hence it is not transitive and does not give rise to a convolution product. For the same reason, the natural definition of positive definite function does not make sense. As a consequence, we do not have the convolution estimates of [12] that have been used in the literature to compute the spectrum of \(\mu_{1}\) (see [10, Chapter 3]), and it is much harder to build an analytic series of spherical representations of \(\mathcal{G}\) by means of spherical functions.
Indeed, the basic problem raised by the existence of two orbits under \(\mathcal{G}\) is the lack of a convolution product, hence of a suitable convolution algebra \(\mathcal{R}\) of radial functions (see also [4]). In the homogeneous setting, the spectrum of \(\mu_{1}\) is obtained from the multiplicative functionals on the Banach algebra generated by \(\mathcal{R}\) in the \(\ell^{1}\)-norm. Here we need a new approach. The operator \(\mu_{1}\), being a nearest-neighbor transition operator, induces jumps only between vertices of different homogeneities. An idea developed in [6] with the goal of computing the \(\ell^{p}\)-spectrum of \(\mu_{1}\) focuses onto its square \(\mu_{1}^{2}\), that preserves the homogeneity, hence acts on the two orbits separately, and is transitive on each of them. On the other hand, \(\mu_{1}^{2}\) is related to the step-2 isotropic transition operator \(\mu_{2}\), indeed, up to a scale factor, it coincided with \(\mu_{2}\) plus a positive multiple of the identity.
On the other hand, the notion of adjacency induced by \(\mu_{2}\) on \(V_{+}\), that corresponds to distance \(2\) in \(V\), transforms \(V_{+}\) into the polygonal graphs \(\Gamma_{+}\), studied in [14], that consists of infinitely many complete polygons with \(q_{-}+1\) vertices attached in a tree-like fashion, with \(q_{+}+1\) polygons joining at each vertex; a similar description holds for the polygonal graph \(\Gamma_{-}\) that corresponds to the action on \(V_{-}\). The action is transitive and gives rise on, say, \(V_{+}\) to a nice convolution algebra of functions radial in this graph around a fixed reference vertex \(v_{0}\in V_{+}\). For every \(p\), the \(\ell^{p}\)-spectrum of \(\mu_{2}\) on \(V_{\pm}\) is known [14]. By making use of this, the \(\ell^{p}\)-spectrum of \(\mu_{1}\) has been computed in [6]
An unusual fact occurs in the semi-homogeneous setting [6]: there is a spherical function that belongs to \(\ell^{2}(V)\), and also to \(\ell^{p}(V)\) for some \(p<2\). Indeed, the spherical function at the eigenvalue \(0\) belongs to \(\ell^{p}(V)\) for every \(p>1+\ln q_{+}/\ln q_{-}\), that is smaller than \(2\) if and only if \(q_{+}<q_{-}\). Every other bounded spherical function \(\phi(\,\cdot\,,v_{0}\,|\,\gamma)\) belongs to \(\ell^{p}(V)\) for some \(p>2\) depending on \(\gamma\).
The lack of transitivity of \(\mathcal{G}\) gives rise to some interesting questions. Since \(\mu_{1}\) is \(\mathcal{G}\)-invariant, i.e., it commutes with \(\mathcal{G}\), its eigenspaces are also invariant under \(\mathcal{G}\)
hence they give rise to representations of this group, namely, the spherical representations. The representation space at the eigenvalue \(\gamma\) is the span of all translates of \(\phi(\,\cdot\,,v_{0}\,|\,\gamma)\). In the homogeneous setting, this representation is unitary or unitarizable if and only if \(\phi(\,\cdot\,,v_{0}\,|\,\gamma)\) is positive definite (thereby providing the necessary Hilbert norm via the GNS construction) [9, 10]. In the semi-homogeneous setting, the notion of positive definite makes sense for functions defined on the group \(\mathcal{G}\), but not for functions on \(V\), that cannot be lifted to \(\mathcal{G}\) since the group action is not transitive. Even though there are only two orbits, it is not known how to equip the space of functions on \(V\) with a suitable definition of positive definite function, or at least a Hilbert norm on the linear span of all translates of \(\phi(\,\cdot\,,v_{0}\,|\,\gamma)\). Therefore it is not clear which spherical representations are unitary.
For the same reason, \(\mathcal{G}\) does not induce a convolution product on functions on \(V\). On the other hand, both \(V_{+}\) and on \(V_{-}\) are homogeneous spaces for \(\mathcal{G}\), indeed \(V_{\pm}=\mathcal{G}/\mathcal{G}_{v_{\pm}}\) for any choice of reference vertices \(v_{\pm}\in V_{\pm}\), one for each homogeneity. Between two functions defined on each of these homogeneous spaces \(\mathcal{G}\) induces a convolution product, but only if one of the two functions is bi-\(\text{Aut}_{v_{\pm}}\)-invariant, as follows.
For simplicity, let us restrict attention to functions on \(V_{+}\) and denote again by \(v_{0}\) the reference vertex in \(V_{+}\). If \(v\in V\) and \(\lambda\in\mathcal{G}\) is such that \(v=\lambda v_{0}\), the convolution product induced by \(\mathcal{G}\) on its orbit \(\mathcal{G}/\mathcal{G}_{v_{0}}\approx V_{+}\) should be defined as \(f\ast g(v)=\int_{\mathcal{G}}f(\tau^{-1}\lambda v_{0})\,g(\tau v_{0})\,d\tau\), where the measure \(d\tau\) is the Haar measure of the (unimodular) group \(\mathcal{G}\). On the other hand, the result is invariant on right cosets only if \(f\) is two-sided invariant, that is, radial around \(v_{0}\) (see Section 3 for more details). In particular, we obtain a radial convolution algebra on \(V_{+}\) (and another on \(V_{-}\)). This actually leads to two different convolution products, one for each orbit (see [2]). It is not clear how to define the convolution of functions not supported on a single orbit in such a way that it coincides with the usual definition when the tree is homogeneous. Moreover, an analogous of Haagerup's convolution estimate [12], that is the main tool for computing the spectra of the Laplacian in the well-established approach of [10, Chapter 3], is complicated in this setting [13].
The spherical representation \(\pi_{\gamma}\) of a group \(G\) acting simply transitively on a homogeneous tree is known to be irreducible for \(\gamma\in\text{sp}_{\ell^{1}}(\mu_{1})\), by [10, Chapter 5] and [8]. The argument, later extended to non-isotropic nearest-neighbor transition operators [11], proceeds by proving that the projector onto a cyclic vector is the weak limit, as \(n\to\infty\), of the average of \(\pi_{\gamma}(\tau)\) over all the elements \(\tau\in\mathcal{G}\) that move \(v_{0}\) to vertices of length \(n\). Here again, the lack of transitivity makes it unhandy to reproduce the same argument. Instead of an irreducible representation, we obtain a representations reducible as the direct sum of two components, one for each orbit.
In this short note, we study unitary representation of \(\mathcal{G}\) by means of its action on \(V_{+}\approx\mathcal{G}/\mathcal{G}_{v_{0}}\) (or the analogous action on \(V_{-}\)) and the convolution induced by this action on the radial space \(\ell^{1}_{\#}(V_{+})\), that turns out to be the abelian convolution algebra generated by the Laplacian. The group \(\mathcal{G}\) can be factorized as \(\mathcal{G}=\mathcal{F}\mathcal{G}_{v_{0}}\), where \(\mathcal{F}\) is any discrete subgroup acting simply transitively on \(V_{+}\). The convolution product is much simpler when considered on \(\mathcal{F}\), whose Cayley graph is the polygonal graph \(\Gamma_{+}\) associated to \(V_{+}\). A representation theory for \(\mathcal{F}\) follows in the same way with more readable statements and proof, but we prefer to work on the larger group \(\mathcal{G}\) that is naturally associated to the whole tree; observe that any isometry of \(T\) restricts to isometries of \(V_{\pm}\), and conversely, every isometry of \(V_{+}\) (or \(V_{-}\)) extends
uniquely to an isometry of \(T\). Besides, the choice of \(\mathcal{F}\) is not unique; for instance, the choice of \(\mathcal{F}\) in [14] is the free product of \(q_{+}+1\) copies of \(\mathbb{Z}_{q_{-}+1}\); another choice is \(\mathbb{Z}_{q_{+}+1}*\mathbb{Z}_{q_{-}+1}\), that is the discrete group naturally associated with the dual graph of \(\Gamma_{+}\). For an easier understanding, the reader is invited to rephrase our proofs for \(\mathcal{F}\), and compare our results with those of [14], that yield the \(\ell^{p}\)-spectrum of the Laplacian on \(\Gamma_{+}\). For the \(\ell^{p}\)-spectrum of the Laplacian on( \(\mathrm{a}\) semi-homogeneous tree, see [6]; here we do not study spectra, only positive definite functions and group representations.
We make use of the action of \(\mathcal{G}\) on \(V_{+}\) and its natural involution to introduce positive definite functions on \(V_{+}\). Moreover, we study the multiplicative functionals on the algebra \(\ell^{1}_{\#}(V_{+})\), that correspond to the normalized bounded radial eigenfunctions of the Laplacian _(spherical functions)_, prove that the spherical functions are positive definite if and only if they are real-valued and bounded, and conclude that the spherical functions that correspond to real eigenvalues in the \(\ell^{1}\)-spectrum of the Laplacian give rise, via the GNS construction, to a family of unitary representations of \(\mathcal{G}\) that are irreducible on \(\ell^{2}(V_{\pm})\), whereas they decompose as direct sum of two irreducible representations on \(\ell^{2}(V)=\ell^{2}(V_{+})\oplus\ell^{2}(V_{-})\). For \(q_{+}<q_{-}\), the spherical function at the eigenvalue \(0\) belongs to \(\ell^{2}(V_{+})\) and is positive definite, and the representation is square-integrable, that is, it is a unitary subrepresentation of the regular representation of \(\mathcal{G}\). It is worth noting that, for \(q_{+}<q_{-}\), a discrete subrepresentation of the regular representation of the free product of \(q_{+}+1\) copies of \(\mathbb{Z}_{q_{-}+1}\) (the discrete group acting simply transitively on \(\Gamma_{+}\) introduced above) was found in [7, 15].
The author acknowledges many enlightning conversations with Enrico Casadio-Tarabusi. An alternative approach to the results of this article will appear in a forthcoming joint paper.
## 2. Semi-homogeneous trees and the Laplace operator
A _tree_\(T\) is a connected, countably infinite, locally finite graph without loops. The nodes of \(T\) are called _vertices_; the set of all vertices is denoted by \(V\). Two distinct vertices \(v,v^{\prime}\) are _adjacent_, or _neighbors_, if they belong to the same edge: we shall write \(v\sim v^{\prime}\). Let us fix a reference vertex \(v_{0}\). The number of neighbors of \(v\) is \(q_{v}+1\), where \(q_{v}\) is the _homogeneity degree_, that is, the number of outward neighbors of \(v\) with respect to \(v_{0}\) if \(v\neq v_{0}\). Let us fix a _parity_, that is, an alternating function \(\epsilon\colon V\to\{\pm 1\}\). The level sets of \(\epsilon\) are denoted by \(V_{+}\) and \(V_{-}\).
A _semi-homogeneous_ tree \(T=T_{q_{+},\,q_{-}}\) has two alternating homogeneity degrees \(q_{+}\) on \(V_{+}\) and \(q_{-}\) on \(V_{-}\). If \(q_{+}=q_{-}\) then the tree is _homogeneous_. We assume \(q_{+},q_{-}>1\) and choose \(v_{0}\in V_{+}\), that is,
\[q_{v_{0}}=q_{+}. \tag{2.1}\]
Since it has no loops, a homogeneous tree is the Cayley graph of the free product \(\mathfrak{F}\) of \(q+1\) copies of the two-element group \(\mathbb{Z}_{2}\). Indeed, this free product embeds in the group \(\mathcal{G}\) of _automorphisms_ of \(T\), the invertible self-maps of \(V\) that preserve adjacency: the generators of the factors are automorphisms that reverse the edges that contain \(v_{0}\) (see, for instance, [10, Chapter 3, Section V]), and we have the factorization \(\mathcal{G}=\mathfrak{F}\,\mathcal{G}_{v_{0}}\). In particular, \(\mathfrak{F}\) acts simply transitively on \(T\) and induces a convolution product on functions on \(V\). Therefore \(\mathcal{G}\) is transitive on \(V\) if \(T\) is homogeneous. On the space of functions on \(\mathcal{G}\) that are two-sided-invariant under
the stability subgroup \(\mathcal{G}_{v_{0}}\subset\mathcal{G}\) of \(v_{0}\), the convolution operation induced by the group \(\mathcal{G}\) is a lifting of the convolution induced by \(\mathfrak{F}\) (for homogeneous trees, see [5, Appendix] and [3, Subsection 3.1.1] ). On the other hand, if \(q_{+}\neq q_{-}\), then there are two orbits of \(\mathcal{G}\) on \(V\), namely \(V_{+}\) and \(V_{-}\).
At each vertex \(v\), the Laplace operator applied to a function \(f\) on \(V\) yields the average of the values of \(f\) at the neighbors of \(v\). The number of neighbors depends on the parity of the vertex. That is,
\[\mu_{1}f(v)=\frac{1}{q_{\epsilon(v)}+1}\ \sum_{w\sim v}f(w). \tag{2.2}\]
_Remark 2.1_.: Following [6], let us consider the space \(\mathfrak{R}\) of summation operators with kernel, acting on \(\ell^{1}(V)\), that is, of the type \(Rf(v)=\sum_{w\in V}r(v,w)f(w)\), whose kernel \(r(v,w)\) is of finite range (that is, it vanishes if \(\operatorname{dist}(v,w)>N\) for some \(N\in\mathbb{N}\) depending on \(r\)). Then \(\mu_{1}\) belongs to \(\mathfrak{R}\). More generally, for \(n\in\mathbb{N}\) we set
\[\mu_{n}f(v)=\frac{1}{|\{w\colon\operatorname{dist}(w,v)=n\}|}\sum_{ \operatorname{dist}(w,v)=n}f(w). \tag{2.3}\]
An elementary computation shows that for every \(n>0\),
\[\mu_{1}\mu_{n}f(v)=\mu_{n}\mu_{1}f(v)=\frac{1}{q_{v}+1}\ \mu_{n-1}+\frac{q_{v}}{q_{v}+1}\ \mu_{n+1}\,. \tag{2.4}\]
It follows from (2.4) that the vector space \(\mathfrak{R}\) is a commutative algebra generated by \(\mu_{1}\), and on every ray \([v_{0},v_{1},\dots)\) each \(\gamma\)-eigenfunction of \(\mu_{1}\) radial around \(v_{0}\) satisfies for every \(n\in\mathbb{N}\) the recurrence relation
\[\begin{split}\gamma\,f(v_{0})&=f(v_{1})\qquad\text{ if }|v_{1}|=1;\\ \gamma\,f(v_{n})&=\frac{1}{q_{v_{n}}+1}\ f(v_{n-1}) +\frac{q_{v_{n}}}{q_{v_{n}}+1}\ f(v_{n+1})\qquad\text{if }|v_{n}|=n>0.\end{split} \tag{2.5}\]
In particular, for each eigenvalue \(\gamma\) there is exactly one \(\gamma\)-eigenfunction \(f\) radial around any given vertex \(v_{0}\) and satisfying the initial condition \(f(v_{0})=1\). If \(\gamma=0\), this eigenfunction must vanish on each \(v\) with \(|v|=1\) by the first identity in (2.5), henceafter on all of \(V_{-}\) by the second identity; it is now easy to compute this eigenfunction, that we denote by \(\phi(v,v_{0}\,|\,0)\), in the usual case \(v_{0}\in V_{+}\):
\[\phi(v,v_{0}\,|\,0)=\begin{cases}(-1)^{\frac{|v|}{2}}q_{-}^{-\frac{|v|}{2}}& \text{if }|v|\text{ is even},\\ 0&\text{if }|v|\text{ is odd}.\end{cases} \tag{2.6}\]
Let
\[p_{\text{crit}}=\frac{\ln(q_{+}q_{-})}{\ln q_{-}}. \tag{2.7}\]
If \(C(n)=\{v\colon|v|=n\}\), it is immediately seen that, for every \(n\), \(|C(2n)|=(q_{+}q_{-})^{n}\). This yields the following \(\ell^{p}\)-behavior:
\[\phi(\,\cdot\,,v_{0}\,|\,0)\in\ell^{p}(V)\text{ if and only if }p>p_{\text{crit}}. \tag{2.8}\]
In particular, \(\phi(\,\cdot\,,v_{0}\,|\,0)\in\bigcap_{p>p_{\text{crit}}}\ell^{p}(V)\). Note that \(p_{\text{crit}}=2\) in the homogeneous setup \(q_{+}=q_{-}\), and \(p_{\text{crit}}>2\) if and only if \(q_{+}>q_{-}\). Therefore
\[\phi(\,\cdot\,,v_{0}\,|\,0)\in\ell^{2}(V)\Longleftrightarrow q_{+}<q_{-}. \tag{2.9}\]
As a consequence, if (and only if) \(q_{+}<q_{-}\), then \(0\) belongs not only to \(\operatorname{sp}(\mu_{1};\,\ell^{2}(V))\), but also to the discrete spectrum of \(\mu_{1}\) as an operator on \(\ell^{2}\) and more generally on \(\ell^{p}\) for every \(p\geqslant 2\).
**Definition 2.2**.: For \(\gamma\in\mathbb{C}\), the \(\gamma\)-eigenfunction of \(\mu_{1}\) radial around \(v_{0}\) with value \(1\) at \(v_{0}\) is called _spherical function_ with eigenvalue \(\gamma\) and is denoted by \(\phi(\,\cdot\,,v_{0}\,|\,\gamma)\).
## 3. The convolution algebra of radial functions and its multiplicative functionals
Given a locally compact group \(G\) and a compact subgroup \(K\), the pair \((G,K)\) is a _Gelfand pair_ if the convolution algebra \(L^{1}(K\backslash G/K)\) is commutative. Here we are interested in the set-up where \(G=\mathcal{G}\) is the group of automorphisms of an infinite homogeneous or semi-homogeneous tree.
The isotropy subgroup \(\mathcal{G}_{v}\) of \(\mathcal{G}\) at any \(v\in V\) is compact, and \(\mathcal{G}/\mathcal{G}_{v}\) is discrete. If \(T\) is homogeneous, \(\mathcal{G}\) acts transitively upon \(V\), and \(\mathcal{G}/\mathcal{G}_{v}\) is in bijection with \(V\). Moreover, \(\mathcal{G}_{v}\) acts transitively upon the circle \(C_{n}\) of vertices at any distance \(n\) from \(v\), or equivalently, the action of \(G\) is doubly transitive. On the other hand, if \(T\) is semi-homogeneous but not homogeneous, \(G\) has the two orbits \(V_{\pm}\subsetneq V\), and for any \(v_{+}\in V_{+}\), \(v_{-}\in V_{-}\), \(n\in\mathbb{N}\), \(K_{v_{\pm}}\) acts transitively upon the circle of vertices in \(V_{\pm}\) at distance \(n\) from \(v_{\pm}\), respectively. Therefore \(\mathcal{G}_{v_{\pm}}\) in in bijection with \(V_{\pm}\). We shall restrict attention to the semi-homogeneous not homogeneous setting, and the orbit \(V_{+}\), with reference vertex \(v_{0}\).
By this bijection, summable functions on the discrete space \(V_{+}\) lift to summable functions on \(G\) (with respect to its Haar measure). Hence, the convolution product on \(G\) gives rise to a convolution product on \(\ell^{1}(V_{+})\), and in particular on finitely supported functions therein; we shall often restrict attention to this space. Note that the liftings from \(G/\mathcal{G}_{v_{0}}\) to \(G\) identify right \(\mathcal{G}_{v_{0}}\)-invariant functions on \(G\) with functions on \(V_{+}\). Write \(K=\mathcal{G}_{v_{0}}\): then the two-sided \(K\) invariant functions on \(G\) are _radial_ functions on \(V_{+}\), in the sense that they depend only on the distance from \(v_{0}\). We denote the corresponding \(\ell^{1}\) space by \(\ell^{1}_{\#}(V_{+})\). We shall show that \((\mathcal{G},\mathcal{G}_{v_{0}})\) is a Gelfand pair, that is, \(\ell^{1}_{\#}(V_{+})\) is an abelian convolution algebra. All this also works word by word for \(V_{-}\). In the special case of homogeneous trees, the convolution induced by \(\mathcal{G}\) was studied in [3, 5]; some preliminary facts about convolution in the semi-homogeneous settings are in [2, 4],
Consider a function \(f:G\mapsto\mathbb{C}\), and the Haar measure \(\mu\) on \(\mathcal{G}\) normalized on \(K\). The usual definition of convolution for functions on \(\mathcal{G}_{v}\) is \(u_{1}*u_{2}(\tau)=\int_{\mathcal{G}_{v_{0}}}u_{1}(\lambda^{-1}\tau)\,u_{2}( \lambda)\,d\mu(\lambda)\), since \(\mathcal{G}_{v_{0}}\) is unimodular; this product is associative. Let \(\lambda\mapsto\widetilde{\lambda}\) be the canonical projection of \(\mathcal{G}_{v_{0}}\) to \(\mathcal{G}/\mathcal{G}_{v_{0}}\), \(\widetilde{\mu}\) the quotient measure of \(\mu\) (that is, the counting measure) on \(\mathcal{G}/\mathcal{G}_{v_{0}}\). Let us assume that \(u_{2}\) is right-\(\mathcal{G}_{v_{0}}\)-invariant and \(u_{1}\) bi-\(\mathcal{G}_{v_{0}}\)-invariant. Then \(u_{1}*u_{2}\) is bi-\(\mathcal{G}_{v_{0}}\)-invariant. Indeed, the right-invariance is clear, and the left-invariance follows from unimodularity: for every \(\kappa\in\mathcal{G}_{v_{0}}\), one has \(u_{1}*u_{2}(\kappa\tau)=\int_{G}u_{1}((\kappa^{-1}\lambda)^{-1}\tau)\,u_{2}( \lambda)\,d\mu(\lambda)=\int_{\mathcal{G}}u_{1}(\lambda^{-1}\tau)\,u_{2}( \kappa\lambda)\,d\mu(\lambda)=u_{1}*u_{2}(\tau)\). Moreover, the convolution can be regarded as a product between \(u_{1}\) on
\(\mathcal{G}_{v_{0}}\backslash\mathcal{G}/\mathcal{G}_{v_{0}}\) and \(u_{2}\) on \(\mathcal{G}/\mathcal{G}_{v_{0}}\) and functions as follows:
\[u_{1}*u_{2}(\widetilde{\tau}) =\int_{\mathcal{G}_{v_{0}}}\int_{\mathcal{G}_{v_{0}}}u_{1}(\kappa^ {-1}\lambda^{-1}\tau)\,u_{2}(\lambda\kappa)\,d\mu(\kappa)\,d\widetilde{\mu}( \widetilde{\lambda})\] \[=\int_{\mathcal{G}_{v_{0}}}u_{1}\left(\lambda^{-1}\tau\right)\,u_ {2}(\lambda)\,d\widetilde{\mu}(\widetilde{\lambda})=\int_{\mathcal{G}_{v_{0}} }u_{1}\left(\widetilde{\lambda^{-1}\tau}\right)\,u_{2}(\widetilde{\lambda}) \,d\widetilde{\mu}(\widetilde{\lambda}).\]
Because of the bijections introduced above, this defines a convolution product between functions on \(V_{+}\) and radial functions on \(V_{+}\). Now assume that also \(u_{1}\) be bi-\(\mathcal{G}_{v_{0}}\)-invariant.
If \(f\) and \(g\) are regarded as functions on \(V_{+}\), with \(f\) radial around \(v_{0}\), their convolution on \(V_{+}\) becomes
\[f*g(v)=\sum_{v^{\prime}\in V}f(\operatorname{dist}(v,w))\,g(w). \tag{3.1}\]
Every \(\tau\in\mathcal{G}_{v_{0}}\) extends to an operator on functions on \(V_{+}\) by the rule \(\tau g(v)=g(\tau^{-1}v)\). For \(V\in V_{+}\) choose \(\tau[v]\in\mathcal{G}_{v_{0}}\) such that \(\tau[v]\left(v_{0}\right)=v\); the choice of \(\tau[v]\) is determined only up to elements in its right coset modulo \(K\); we set \(\tau[v]\delta_{w}(u)=\delta_{w}(\tau[v]u)\). Then the definition (3.1) of convolution of \(f\) and \(g\) with \(f\) radial is equivalent to
\[f*g(v)=\langle\tau[v]\,f,\,g\rangle, \tag{3.2}\]
where \(\langle\,\cdot\,,\,\cdot\,\rangle\) denotes the inner product in \(\ell^{2}(V_{+})\). Of course, this implies
\[f*g(v_{0})=\langle f,\,g\rangle.\]
and, for \(f,g,h\in\ell^{1}_{\#}(V_{+})\)
\[\langle f,\,g*h\rangle=f*(g*h)(v_{0})=(f*g)*h(v_{0})=\langle f*g,h\rangle. \tag{3.3}\]
Now, if \(f\) and \(g\) are both bi-\(K\)-invariant, i.e., radial around \(v_{0}\), and \(v\in V_{+}\),
\[f*g(v)=f*g(\operatorname{dist}(v,v_{0}))=\sum_{w\in V_{+}}f(\operatorname{dist }(v,w))\,g(\operatorname{dist}(w,v_{0})). \tag{3.4}\]
So, the convolution of radial functions is radial, hence \(\ell^{1}_{\#}(V_{+})\) is a convolution algebra. This algebra is the closure in the \(\ell^{1}\) norm of the algebra \(\mathfrak{R}_{\#}(V_{+})\) of radial finitely supported functions. From now on, we shall use the term _radial_ function on \(V_{+}\) around \(v_{0}\) instead of bi-\(\mathcal{G}_{v_{0}}\)-invariant function on \(\mathcal{G}_{v_{0}}\).
For all functions \(f,g\) on \(V_{+}\) with \(f\) radial, and \(\tau\in\mathcal{G}_{v_{0}}\), one has \(\tau(f*g)=f*\tau g\), where \(\tau g\) is defined by \(\tau g(v)=g(\tau^{-1}v)\). Indeed,
\[\tau(f*g)(v) =\sum_{v^{\prime}\in V_{+}}g(v^{\prime})\,f(\operatorname{dist}( \tau^{-1}v,\,v^{\prime}))=\sum_{v^{\prime\prime}\in V_{+}}g(\tau^{-1}v^{\prime \prime})\,f(\operatorname{dist}(\tau^{-1}v,\,\tau^{-1}v^{\prime\prime}))\] \[=\sum_{v^{\prime\prime}}g(\tau^{-1}v^{\prime\prime})\,f( \operatorname{dist}(v,\,v^{\prime\prime}))=f*\tau g(v). \tag{3.5}\]
Denote by \(\mathcal{E}\) the radialization operator around \(v_{0}\) on finitely supported functions, that is, \(\mathcal{E}g(v)=\frac{1}{\#C(|v|)}\sum_{w\in C(|v|)}g(w)\). Then, by (3.5), for all \(f,g\in\ell^{1}(V)\),
\[\mathcal{E}(f*g)=f*\mathcal{E}g\quad\text{ if $f$ is radial}. \tag{3.6}\]
Let \(\mu_{n}\) be the radial function that is non-zero only if \(|v|=n\) and with \(\ell^{1}\)-norm 1, that is \(\mu_{n}=\mathcal{E}\delta_{v}\) for \(|v|=n\). Note that \(\mu_{n}(v)=p_{n}(v_{0},v)\), where \(p_{n}\) are the \(n\)-step isotropic transition probabilities associated to the operator \(M_{n}\) of (2.3). Therefore, by (3.1),
\[\mu_{2}*\mu_{2n}=\frac{1}{(q_{+}+1)q_{-}}\,\big{(}\mu_{2n-2}+(q_{-}-1)\mu_{2n}+ q_{+}q_{-}\mu_{2n+2}\big{)}.\]
It follows that the radial convolution algebra \(\ell^{1}_{\#}(V_{+})\) is generated by \(\mu_{2}\), hence it is abelian.
**Corollary 3.1**.: _For every \(n\in\mathbb{N}\), there exists a (unique) polynomial \(P_{n}\) of degree \(n\) such that \(\mu_{2n}=P_{n}(\mu_{2})\)._
**Lemma 3.2**.: _On a a semi-homogeneous tree, the following properties of a function \(\phi\not\equiv 0\) on \(V_{+}\) are equivalent:_
1. \(\phi\) _on_ \(V_{+}\) _is a spherical function;_
2. _for all_ \(v,w\in V_{+}\)_,_ \(\mathcal{E}(\tau[v]\,\phi)(w)=\phi(v)\phi(w)\)_;_
3. _the functional_ \(L_{\phi}(f)=\langle f,\,\phi\rangle\) _is multiplicative on the convolution algebra_ \(\ell^{1}_{\#}(V_{+})\)_._
Proof.: Since the radial algebra is generated by \(\mu_{1}\), for every \(n\) there exists a polynomial \(Q_{n}\) (of degree \(n\)) such that \(\mu_{n}=Q_{n}(\mu_{1})\). Let \(\phi=\phi(\,\cdot\,,v_{0}\,|\,\gamma)|_{V_{+}}\), \(u,v\in V_{+}\) and \(n=|w|\). Then \(\mu_{n}*\phi=Q_{n}(\mu_{1})\phi=Q_{n}(\gamma)\phi\). By (3.6), \(\mathcal{E}(\tau[v]\,\phi)(w)=\langle\tau[v]\,\phi,\mu_{n}\rangle=\phi*\mu_{n} (v^{-1}=Q_{n}(\gamma)\phi(v^{-1})=Q_{n}(\gamma)\phi(v)\). On the other hand, since \(\phi\) is radial, \(\phi(w)=\langle\mu_{n},\,\phi\rangle=\mu_{n}*\phi(v_{0})=Q_{n}(\gamma)\phi(v_ {0})=Q_{n}(\gamma)\). Therefore \((i)\) implies \((ii)\).
Now let \(\phi\) be a function on \(V_{+}\) that satisfies \((ii)\) and choose \(v\) such that \(\phi(v)\neq 0\). If \((ii)\) holds, for every \(w\in V_{+}\) we have \(\phi(w)=\mathcal{E}(\tau[v]\,\phi)(w)/\phi(v)\). Therefore \(\phi\) is radial; moreover, for all radial \(f,g\) on \(V_{+}\), by (3.2) and (3.6),
\[L_{\phi}(f*g) =\langle f*g,\phi\rangle=\big{\langle}\langle\tau[\,\cdot\,]\,f,g \rangle,\,\phi(\,\cdot\,)\big{\rangle}=\big{\langle}f,\,\langle g,\tau[\, \cdot\,]\phi\rangle\big{\rangle}=\sum_{v}f(v)\langle\tau[v]^{-1}\phi,g\rangle\] \[=\sum_{v}f(v)\mathcal{E}(\langle\tau[v]^{-1}\phi,g\rangle)\sum_{ v,w\in V_{+}}f(v)\,g(w)\,\phi(v)\,\phi(w)=L_{\phi}(f)\,L_{\phi}(g), \tag{3.7}\]
since \(\phi\) is radial and \(|\tau[v]^{-1}(v_{0})|=|\tau[v](v_{0})|\). Thus \((ii)\) implies \((iii)\).
If \((iii)\) holds, \(f\in\ell^{1}_{\#}(V_{+})\) and \(\phi_{2}\) denotes the value of the radial function \(\phi\) on vertices of length 2, then
\[L_{\phi}(\mu_{2}*f)=\langle\phi,\,\mu_{2}\rangle\,\langle\phi,\,f\rangle=\phi_ {2}\,\langle\phi,\,f\rangle.\]
On the other hand, by (3.3),
\[L_{\phi}(\mu_{2}*f)=\langle\phi,\,\mu_{2}*f\rangle=\langle\phi*\mu_{2},\,f\rangle\]
Since this holds for each radial function \(f\), it follows that \(\phi\) is an eigenfunction of \(\mu_{2}\) (with eigenvalue \(\phi_{2}\)). Moreover, \(\phi(v_{0})\neq 0\), because a radial eigenfunction of \(\mu_{2}\) that vanish at \(v_{0}\) must vanish everywhere. Now,
\[\phi(v_{0})=\phi*\delta_{v_{0}}(v_{0})=\phi*(\delta_{v_{0}}*\delta_{v_{0}})(v_ {0})=(\phi(v_{0})^{2}, \tag{3.8}\]
hence \(\phi(v_{0})=1\), and \((i)\) follows.
**Corollary 3.3**.: _If \(\phi\) is a spherical function, then \(\phi(v_{0})=1\), \(\mu_{2}\ast\phi=\gamma\phi\) with \(\gamma=\phi(v)\) for \(|v|=2\), and \(\mu_{2n}\ast\phi=P_{n}(\gamma)\,\phi\), where \(P_{n}\) is the polynomial of Corollary 3.1.._
Proof.: By (3.8), \(\phi(v_{0})=L_{\phi}(\delta_{v_{0}})=1\). Moreover, \(\mu_{2}\phi=\mathcal{E}(\tau[v]\,\phi)\) for any \(v\) with \(|v|=2\). Therefore, by Lemma 3.2\((ii)\), \(\mu_{2}\phi=\phi(v)\,\phi\) for every such \(v\). The remainder of the statement follows from Corollary 3.1.
## 4. Positive definite spherical functions on \(V_{+}\)
**Definition 4.1**.: \(\ell^{1}(V_{+})\) and \(\ell^{1}_{\#}(V_{+})\) are involutive algebras equipped with the involution \(f^{*}(v)=f^{*}(\tau[v](v_{0}))=\overline{f(\tau[v]^{-1}(v_{0})}\).
A spherical function is positive definite for \(\mathcal{G}\) acting on \(V_{+}\) if it induces a positive functional on the involutive algebra \(\ell^{1}(V_{+})\).
It is well known that a right-\(\mathcal{G}_{v_{0}}\)-invariant function \(\phi\) on \(\mathcal{G}\), or equivalently a function on \(V_{+}\), is positive definite (with respect to \(\mathcal{G}\)) if and only if it is a matrix coefficient of a unitary representation \(\pi_{\phi}\) of \(\mathcal{G}\) (acting on a vector space equipped with an inner product) that with respect to this inner product is of the type
\[\phi(v)=\langle h,\pi_{\phi}(\tau[v])h\rangle. \tag{4.1}\]
Here we shall regard \(h\) as a function on \(V_{+}\), the action of \(\tau=\tau[v]\in\mathcal{G}\) being right-\(\mathcal{G}_{v_{0}}\)-invariant.
Let \(\phi\) be a positive definite function on \(V_{+}\) and \(\mathcal{V}_{\phi}\) the linear span of all translates of \(\phi\) under the action of \(\mathrm{Aut}(T)\). Then \(\phi\) induces a positive semi-definite inner product on \(\mathcal{V}_{\phi}\) by the rule
\[\langle\tau[v]\phi,\,\tau[w]\phi\rangle_{\phi}=\phi(\tau[v]^{-1}w). \tag{4.2}\]
From now on, we shall denote by \(\langle\,\cdot\,,\,\cdot\,\rangle_{\phi}\) this inner product and with \(\langle\,\cdot\,,\,\cdot\,\rangle\) the \(\ell^{2}\)-inner product.
Denote by \(\mathcal{N}_{\phi}\) the subspace of \(\mathcal{V}_{\phi}\) of all function \(f\) such that \(\langle f,\,f\rangle_{\phi}=0\), and \(\mathcal{H}_{\phi}=\mathcal{V}_{\phi}/\mathcal{N}_{\phi}\). Then \(\|f\|_{\phi}=\sqrt{\langle f,\,f\rangle_{\phi}}\) is a Hilbert space norm on \(\mathcal{H}_{\phi}\) (the so-called _GNS-norm_).
Let now \(\phi=\phi(\,\cdot\,,v_{0}\,|\,\gamma)\) and write \(\mathcal{V}_{\gamma}\) instead of \(\mathcal{V}_{\phi}\), that is the linear span of the functions \(\{v\mapsto\phi(\tau v,v_{0}\,|\,\gamma)\colon\tau\in\mathcal{G}\}\). Then \(\mathcal{V}_{\gamma}\) is invariant under \(\mathcal{G}\), and the action of \(\mathcal{G}\) gives rise to an (algebraic) representation \(\pi_{\gamma}\) of \(\mathcal{G}\) on \(\mathcal{V}_{\gamma}\). If \(\phi\) is positive definite, then \(\pi_{\gamma}\) extends to a topological representation of \(\mathcal{G}\) on the Hilbert space closure \(\overline{\mathcal{H}}_{\gamma}\) of\(\mathcal{H}_{\gamma}=\mathcal{V}_{\gamma}/\mathcal{Z}_{\gamma}\), called the spherical representation at the eigenvalue \(\gamma\).
**Corollary 4.2**.: _A spherical function defines a positive functional on the involutive algebra \(\ell^{1}(V_{+})\) if and only if it is bounded and real-valued._
Proof.: Let \(\phi\) be a bounded spherical function and \(f\in\ell^{1}_{\#}(V_{+})\). Then \(\phi\) is radial, and \(\phi(\tau[v]^{-1}(v_{0}))=\phi(v)\) because
\[|(\tau[v]^{-1}(v_{0})|=|\tau[v](v_{0})|=|v|. \tag{4.3}\]
By Lemma 3.2, or more directly by (3.7), if \(\phi\) is real,
\[L_{\phi}(f^{*}*f) =L_{\phi}(f^{*})\,L_{\phi}(f)=\Big{(}\sum_{v\in V_{+}}\overline{f}( \tau[v]^{-1}(v_{0})\,\phi(v)\Big{)}\Big{(}\sum_{w\in V_{+}}f(w)\phi(w)\Big{)}\] \[=\Big{(}\sum_{v\in V_{+}}\overline{f}(v)\,\phi(\tau[v](v_{0})) \Big{)}\Big{(}\sum_{w\in V_{+}}f(w)\phi(w)\Big{)}\] \[=\Big{(}\sum_{w\in V_{+}}\overline{f}(v)\,\phi(v)\Big{)}\Big{(} \sum_{w\in V_{+}}f(w)\phi(w)\Big{)}\] \[=\langle\overline{f},\,\phi\rangle\,\langle f,\,\phi\rangle=| \langle f,\,\phi\rangle|^{2}\geqslant 0,\]
hence \(L_{\phi}\) is a positive functional on the involutive algebra \(\ell^{1}_{\#}(V_{+})\). Now let \(h\in\ell^{1}(V_{+})\). By (3.6), \(L_{\phi}(h)=\langle h,\,\phi\rangle=\phi*h(v_{0})=\phi*\mathcal{E}h(v_{0})= \langle\mathcal{E}h,\,\phi\rangle=L_{\phi}(\mathcal{E}h)\). Therefore, by [10, Chapter 3, Lemma 1.2 and 1.3], \(L_{\phi}\) is also a positive functional on the involutive algebra \(\ell^{1}(V_{+})\). Thus \(\phi\) is positive definite.
Conversely, let \(\phi\) be a positive definite function on \(V_{+}\). Then \(\phi\) is bounded (by its value at \(v_{0}\)), and \(\phi(\tau[v]^{-1}v_{0})=\overline{\phi}(\tau[v]v_{0})=\overline{\phi}(v_{0})\) for every \(v\in V_{+}\). Since \(\phi\) is radial, (4.3) implies that it is real-valued.
**Corollary 4.3**.: _If a spherical function \(\phi\) on \(V_{+}\) belongs to \(\ell^{2}(V_{+})\), then \(\phi\) is positive definite, and every function \(f\) on \(V_{+}\), its \(\ell^{2}\)-norm and its norm \(\|\,\cdot\,\|_{\phi}\) defined by the inner product (4.2) are related by \(\|f\|_{\ell^{2}(V_{+})}=\|\phi\|_{\ell^{2}(V_{+})}^{2}\,\|f\|_{\phi}\)._
Proof.: Let \(\pi\) be the right regular representation of \(\mathcal{G}\) on functions on \(\mathcal{G}/\mathcal{G}_{v_{0}}\equiv V_{+}\), that is, \(\pi(\tau[v])h(w)=h(\tau[v]^{-1}w)\) for every function \(h\) on \(V_{+}\) and every \(v,w\in V_{+}\); clearly, this is a unitary representation. Then, by Lemma 3.2\((ii)\),
\[\langle\pi(\tau[v])\phi,\,\phi\rangle=\langle\mathcal{E}(\pi(\tau[v])\phi),\, \phi\rangle=\phi(v)\,\|\phi\|_{\ell^{2}(V_{+})}^{2}. \tag{4.4}\]
Therefore the function \(\phi\) is a (positive) multiple of a matrix coefficient of a unitary representation, hence it is positive definite by Definition 4.1. On the other hand, by (4.2)
\[\langle\pi(\tau[v])\phi,\,\phi\rangle_{\phi}=\phi(v). \tag{4.5}\]
The statement follows by comparing (4.4) and (4.5).
_Remark 4.4_.: If \(\phi=\phi(\cdot,v_{0}\,|\,\gamma)\in\ell^{2}(V_{+})\) and \(\gamma\in\operatorname{sp}_{\ell^{1}(V_{+})}\mu_{2}\), then we have \(\|f\|_{\phi}=\|\phi\|_{\ell^{2}(V_{+})}^{2}\|f\|_{\ell^{2}(V_{+})}\).
Indeed, by Corollary 4.2, a spherical function on \(V_{+}\) is positive definite if and only if it is real-valued and bounded. On the other hand, it is known [6] that a spherical function on \(V_{+}\) is real valued and bounded if (and only) if its eigenvalue belongs to \(\operatorname{sp}_{\ell^{1}(V_{+})}\mu_{2}\).
## 5. Spherical representations of \(\mathcal{G}\)
On the basis of the previous Sections, the results of [10] now open the way to the spherical representation theory of \(\mathcal{G}\):
**Theorem 5.1**.: _Denote by \(S_{1}\) the spectrum of \(\mu_{1}\) on \(\ell^{1}(V)\), and by \(S_{2}\) its spectrum on \(\ell^{2}(V)\)._
1. \(\phi(\tau v,v_{0}\,|\,\gamma)\) _is real-valued and bounded if and only if_ \(\gamma\in S_{1}\cap\mathbb{R}\)_. For these eigenvalues, the representation_ \(\pi_{\gamma}\) _is a unitary representation on the Hilbert space_ \(\overline{\mathcal{H}}_{\gamma}\)_._
2. _The Plancherel measure associated to the regular representation_ \(\rho\) _of_ \(\mathcal{G}\) _on_ \(\ell^{2}(V)\) _is absolutely continuous with respect to Lebesgue measure on_ \(S_{2}\subsetneq\mathbb{R}\)_, except for an atom at the eigenvalue_ \(0\) _if (and only if)_ \(q_{+}<q_{-}\)_. For these eigenvalues_ \(\gamma\)_, the representation_ \(\pi_{\gamma}\) _is weakly contained in the regular representation_ \(\rho\)_._
3. _If_ \(\gamma=0\) _and_ \(q_{+}<q_{-}\)_, then_ \(\phi(\tau v,v_{0}\,|\,0)\in\ell^{2}(V)\) _and, on the norm-closure of_ \(\mathcal{V}_{0}\)_,_ \(\pi_{0}\) _is a square-integrable representation, that is, a subrepresentation of the regular representation of_ \(\mathcal{G}\) _on_ \(\ell^{2}(V_{+})\)_._
4. _All the representation_ \(\pi_{\gamma}\) _on_ \(\ell^{2}(V_{+})\) _for_ \(\gamma\in\operatorname{Int}S_{1}\) _are irreducible._
5. _The representations_ \(\rho\) _and_ \(\pi_{\gamma}\) _can be defined also on spaces of functions on_ \(V_{-}\) _and on_ \(V\)_. Since the action of_ \(\mathcal{G}\) _preserves_ \(V_{\pm}\)_,_ \(\rho\) _and each_ \(\pi_{\gamma}\)_, when regarded on_ \(V)\)_, decomposes as direct sum of its restrictions to_ \(V_{+}\) _and_ \(V_{-}\)_. In particular, for_ \(\gamma\in\operatorname{Int}S_{1},\pi_{\gamma}\) _is the direct sum of two irreducible subrepresentations._
Proof.: For part \((i)\) it is enough to show that \(\phi(\tau v,v_{0}\,|\,\gamma)\) is real-valued and bounded if and only if \(\gamma\in S_{1}\), and that the norm \(\|\,\cdot\,\|_{\phi}\) is the \(\ell^{2}\)-norm if and only if \(\gamma\in S_{2}\): this follows from [6, Theorem 5.8].
For part \((ii)\), it has been observed in [6] (see also [4]) that the set \(V_{+}\), when regarded as the Cayley graph of the step-2 isotropic operator \(\mu_{2}\), becomes a polygonal graph in the sense of [14], therefore the regular representation of \(\mathcal{G}\) on \(\ell^{2}(V_{+})\) can be realized by the action on this polygonal graph. In this setting, the Plancherel measure \(m_{\rho}\) was computed in [7]. An accurate comparison of our current setting with that of [7, 14] shows that \(m_{\rho}\) is absolutely continuous with respect to Lebesgue measure, with a continuous Radon-Nykodim derivative, except for an atom at \(\gamma=0\) if \(q_{+}<q_{-}\), and \(\rho\) decomposes on \(\ell^{2}(V_{+})\) as \(\rho=\int_{S_{2}}^{\oplus}\pi_{\gamma}\,dm(\gamma)\). For each \(\gamma\in S_{2}\) and for \(\delta>0\) the matrix coefficients of \(\pi_{\gamma}\) is a uniform limit on finite sets, as \(\delta\to 0\), of coefficients of the representation \(\frac{1}{\delta}\int_{U_{\delta}}^{\oplus}\pi_{\gamma}\,dm(\gamma)\) where \(U_{\delta}\) is a neighborhood of \(\gamma\) of radius \(\delta\). Therefore \(\pi_{\gamma}\) is weakly contained in \(\rho\).
Part \((iii)\) follows from (2.9).
Part \((iv)\) is proved as in the setting of homogeneous trees [9, 10]; see also [8, 11]. Part \((v)\) is clear.
|
2309.13504 | Attention Is All You Need For Blind Room Volume Estimation | In recent years, dynamic parameterization of acoustic environments has raised
increasing attention in the field of audio processing. One of the key
parameters that characterize the local room acoustics in isolation from
orientation and directivity of sources and receivers is the geometric room
volume. Convolutional neural networks (CNNs) have been widely selected as the
main models for conducting blind room acoustic parameter estimation, which aims
to learn a direct mapping from audio spectrograms to corresponding labels. With
the recent trend of self-attention mechanisms, this paper introduces a purely
attention-based model to blindly estimate room volumes based on single-channel
noisy speech signals. We demonstrate the feasibility of eliminating the
reliance on CNN for this task and the proposed Transformer architecture takes
Gammatone magnitude spectral coefficients and phase spectrograms as inputs. To
enhance the model performance given the task-specific dataset, cross-modality
transfer learning is also applied. Experimental results demonstrate that the
proposed model outperforms traditional CNN models across a wide range of
real-world acoustics spaces, especially with the help of the dedicated
pretraining and data augmentation schemes. | Chunxi Wang, Maoshen Jia, Meiran Li, Changchun Bao, Wenyu Jin | 2023-09-23T23:58:43Z | http://arxiv.org/abs/2309.13504v3 | # Attention is all you need for blind room volume estimation
###### Abstract
In recent years, dynamic parameterization of acoustic environments has raised increasing attention in the field of audio processing. One of the key parameters that characterize the local room acoustics in isolation from orientation and directivity of sources and receivers is the geometric room volume. Convolutional neural networks (CNNs) have been widely selected as the main models for conducting blind room acoustic parameter estimation, which aims to learn a direct mapping from audio spectrograms to corresponding labels. With the recent trend of self-attention mechanisms, this paper introduces a purely attention-based model to blindly estimate room volumes based on single-channel noisy speech signals. We demonstrate the feasibility of eliminating the reliance on CNN for this task and the proposed Transformer architecture takes Gammatone magnitude spectral coefficients and phase spectrograms as inputs. To enhance the model performance given the task-specific dataset, cross-modality transfer learning is also applied. Experimental results demonstrate that the proposed model outperforms traditional CNN models across a wide range of real-world acoustics spaces, especially with the help of the dedicated pretraining and data augmentation schemes.
Chunxi Wang\({}^{1}\), Maoshen Jia\({}^{1}\), Meiran Li\({}^{1}\), Changchun Bao\({}^{1}\), Wenyu Jin\({}^{2}\). \({}^{1}\) Speech and Audio Signal Processing Laboratory, Faculty of Information Technology,
Beijing University of Technology, Beijing, China
\({}^{2}\) AcousticDSP Consulting LLC, St Paul, MN, United States
## 1 Introduction
Dynamic parameterization of acoustic environments that users evolve has become an emerging topic in recent years. Parameters that characterize local rooms or other acoustic spaces can be used to model or design audio filters for various applications, e.g. speech dereverberation for automatic speech recognition (ASR) and voice communication [1, 2], spatial sound systems with room equalization [3, 4] and etc. In particular, for the proper realization of audio augmented reality (AAR), virtual acoustic objects are required to be seamlessly integrated into the real environment, which makes a good match between acoustical properties of virtual elements and the local space a necessity [5].
Conventionally, measured room impulse responses (RIRs) can be used to directly derive room parameters such as reverberation time (RT\({}_{60}\)) and direct-to-reverberant ratio (DRR). Another position-independent parameter, which has been proposed as a key part of the so-called "reverberation fingerprint" of a room, is the geometric room volume \(V\). Under ideal diffuse sound field assumptions, the relation between these parameters is given by the widely known Sabine's equation [6]:
\[RT_{60}(b)\approx 0.16\frac{V}{\alpha(b)\cdot S}, \tag{1}\]
where \(S\) denotes the total area of the room's surfaces and \(\alpha(b)\) is the area-weighted mean absorption coefficient in octave band \(b\). In practice, in-situ measurements of RIRs and volumes of users' local acoustic spaces are typically difficult to be carried out. Alternatively, an attractive option is to blindly estimate room acoustic parameters from audio recordings using microphones. The 2015 ACE challenge [7] sets the bar for the blind estimation of RT\({}_{60}\) and DRR from noisy speech sequences. Meanwhile, room volume estimation has long been formulated as a classification problem [8, 9]. With recent advancements in DNNs, formulating the blind room volume estimation as a regression problem by taking advantage of convolutional neural network (CNN) models in conjunction with time-frequency representations has become increasingly relevant. Genovese et al. [10] deployed a CNN trained using both simulated as well as real RIRs, and results show that it can estimate a broad range of volumes within a factor of 2 on real-measured data from the ACE challenge [7]. Similar CNN-based systems were proposed to blindly estimate room acoustic parameters from single-channel [11, 12, 13] or multi-channel [14] speech signals and demonstrated promising results in terms of both estimation accuracy and robustness to temporal variations in dynamic acoustic environments. In addition to the log-energy calculation of spectro-temporal features that prior works generally relied on, Ick et al. [15] introduced a series of phase-related features and demonstrated clear improvements in the context of reverberation fingerprint estimation on unseen real-world rooms.
CNNs are widely considered in the fore-mentioned approaches due to their suitability for learning two-dimensional time-frequency signal patterns for end-to-end modelling. In order to better capture long-range global context, CNN-attention hybrid models that concatenate CNN with a self-attention mechanism have achieved cutting-edge results for numerous tasks such as acoustic event classification [16, 17] and other audio pattern recognition topics [18, 19]. Gong et al. [20] took one step further and devised purely attention-based models for audio classification. The devised Audio Spectrogram Transformer (AST) was assessed on various audio classification benchmarks with new state-of-the-art results, which shows that CNNs are not indispensable in this context.
Inspired by the work in [20], in this work we propose a convolution-free, purely attention-based model to estimate geometric room volume blindly from single-channel noisy speech signals. To the authors' best knowledge, this is the first attention-based system in the area of blind acoustic room parameter estimation. The proposed system takes Gammatone magnitude spectral coefficients as well as the low-frequency phase spectrogram as inputs and captures long-range global context, even in the lowest layers. In addition, the system performance is further boosted by applying transfer learning of knowledge from an ImageNet-pretrained transformer model. A corpus of RIRs that consists of publicly available RIRs, synthesized RIRs and in-house measurements of real-world rooms is formulated with the aim of training and testing the proposed method. Experimental results show that the proposed model significantly outperforms CNN-based blind volume estimation systems on unseen real-world rooms using single-channel recordings.
## 2 System Methodology
In this section, we provide a detailed description of the proposed attention-based system for blind room volume estimation. We start with an explanation of how we formulate input features that leverage both magnitude spectral and phase-related information. We then outline the design of the convolution-free model architecture. Finally, we highlight the use of transfer learning to enhance model performance with limited training datasets. Note that the geometric room volume is the main focus of this study. However, the proposed system can be readily extended for blindly estimating other room acoustic parameters (e.g. RT\({}_{60}\), and total surface area).
### Featurization
Before being fed into the neural network, noisy speech signals go through a featurization process to obtain a two-dimensional time-frequency representation. Various extracted features are integrated into the feature block for model training, aiming to effectively capture information about the acoustic space.
Similar to prior literature [10, 11], the Gammatone ERB filterbank is selected for generating time-frequency representation as it leads to low model complexity while preserving signal information that is relevant to this problem. Specifically, the Gammatone filterbank consists of 20 bands covering the range from 50Hz to 2000Hz. We compute the STFT of the audio using a Hann window of 64 samples and a hop size of 32 samples, followed by convolution with the filterbank. This convolution generates a spectral feature (\(20\times 1997\)). Additionally, the phase information obtained from the STFT is retained. The phase angles computed for each time-frequency bin can be used to generate a phase feature, and it is then truncated to only include the bands corresponding to frequencies below 500 Hz (i.e. \(5\times 1997\)) as lower frequency behavior generally carries more information corresponding to the room volume [6]. Furthermore, the first-order derivative of phase coefficients along the frequency axis is also concatenated (i.e. \(5\times 1997\)). The configuration of this feature set aligns with the "_+Phase_" model outlined in [15], which is shown to outperform other methods that rely solely on magnitude-based spectral features.
Overall, The proposed feature block has dimensions of \(30\times 1997\), where 30 represents the feature dimension \(F\), and 1997 represents the time dimension \(T\).
### Model Architecture
While the attention-based model demonstrates impressive performance in audio classification tasks, its application in other domains especially regression-related problems remains unexplored. In this section, we propose a purely attention-based model following the Audio Spectrogram Transformer work in [20] for conducting blind room volume estimation tasks.
#### 2.2.1 Audio Spectrogram Transformer
In the proposed system, as shown in Fig. 1, in order to better leverage the local information of the audio, feature blocks are divided into \(P\) patches, each patch having a size of \(16\times 16\). During this division process, to maintain continuity in both the feature dimension and the time dimension, each patch overlaps with its surrounding patches by 6 feature dimensions and 6 time dimensions. Consequently, the number of patches \(P\) is determined as 398, where \(P=\big{\lceil}\frac{F-16}{10}\big{\rceil}\big{\lceil}\frac{T-16}{10}\big{\rceil}\). For further processing of these patches, a linear projection layer is introduced. This layer flattens each \(16\times 16\) patch into a one-dimensional patch embedding of size 768, referred to as the patch embedding layer.
Due to the fact that traditional transformer architectures do not directly process the sequential order of input sequences and these patches are not arranged in chronological order, trainable positional embeddings (which also have a dimension of 768) are incorporated into each patch embedding. This incorporation allows the model to grasp the spatial structure of the audio spectrogram and understand the positional relationships among different patches.
Similar to [20], this paper also leverages a [CLS] token at the beginning of the sequence and feeds the feature sequence into the Transformer. In the proposed system, encoding and the feature extraction of the input sequence are achieved by utilizing only the encoder part of the original Transformer architecture [21]. We adjust the input and output dimensions of the encoder. To be more precise, the input is a sequence formed by a feature block of size 30x1997 while the output is a single label used for volume prediction. The output of the whole Transformer is then used as the feature representation of the two-dimensional audio feature block, which is subsequently mapped to labels used for volume estimation through a linear layer with sigmoid activation.
#### 2.2.2 ImageNet Pretraining
Compared to methods based on CNN architecture, one disadvantage of Transformer methods lies in their increased demand for training data [22]. One of the main challenges in blind room volume estimation problems is due to insufficient data as publicly available RIR datasets with properly labelled room volume groudtruth are highly limited. To alleviate this issue, we took the following two measures: 1) a synthetic RIR dataset based on the image-source model (which will be covered in Sec. 3.1), 2) transfer learning.
More specifically, cross-modality transfer learning was applied to the proposed Transformer-based model. In this context, we leveraged a pretrained off-the-shelf Vision Transformer (ViT) model from ImageNet [23] for application within the proposed method. Prior to the transfer, necessary adjustments were made to ensure ViT's compatibility with the proposed architecture. Firstly, the ViT model was pretrained on three-channel images, which was distinct from the single-channel feature blocks used in the proposed model. Therefore, we calculated the average of parameters across the three channels of the ViT model and then applied this averaged information. In addition, the so-called 'Cut and bi-linear interpolate' method [20] was used to adjust the input size and manage positional encoding. Lastly, to adapt ViT for the task of blind room volume estimation, the final classification layer of ViT was reinitialized.
Figure 1: Proposed system architecture with the featurization process.
## 3 Data generation and augmentation
To address the challenging task of room volume estimation, neural networks require extensive data to train and validate. In this section, we devise a multi-stage audio generation process, utilizing six publicly available real-world RIR datasets and a synthetic dataset based on room simulation. In addition, a series of data augmentation techniques are introduced to enhance the generalizability of the model.
### RIR Dataset
As shown in Fig. 2, six publicly available real-world RIR datasets recorded in 55 real rooms are considered to cover a wide range of realistic acoustic room parameters. Data collection predominantly took place in geometrically regular rooms, encompassing spaces such as elevator shafts, classrooms, auditoriums, seminar rooms, and more. These datasets include the ACE Challenge dataset [7], the Aachen Impulse Response (AIR) dataset [24], the Brno University of Technology Reverd Database (BUT ReverbDB) [25], the OpenAIR dataset [26], the C4DM Dataset (C4DM) [27] and the decorate Dataset (dechorate) [28]. To account for the natural gap of real-world acoustic spaces between the 12 \(m^{3}\) to 7000 \(m^{3}\) range of volumes, the in-house BJUT Reverb Dataset was collected. Specifically, we took RIR measurements at 11 distinct rooms within the campus of Beijing University of Technology. For each room, 5 RIRs corresponding to different source-receiver positions were recorded. All RIRs are resampled to the sampling rate of 16kHz.
Moreover, we supplemented the real-world data with an additional 30 simulated RIRs based on virtual rooms of various geometries. The purpose was to augment the dataset within less common room volume ranges, thus bringing the total volume distribution closer to a normal distribution. To generate the synthetic dataset, we utilized the pyroomacoustics [29] package that employs the image-source model to simulate RIRs for rooms with specific volumes.
### Audio preprocessing
By utilizing the RIR dataset with labelled volumes, convolution was applied to 4-second clean speech signals recorded in anechoic chambers from the ACE dataset [7], resulting in reverberant speech sequences that were characterized by corresponding RIRs. To enhance the model's adaptability to noise across various types and signal-to-noise ratios (SNR), white noise and babble noise were added. Each type of noise was applied at the following SNR levels: [+30, +20, +10, +0] dB. This formulated _Dataset I_, which was then divided into train, test, and validation sets in a 6-2-2 ratio (as shown in Table 1). Note that for the test set, only RIRs recorded in real-world environments were selected to assess the model's estimation performance on unseen non-simulated rooms.
Moreover, for the sake of enhancing the networks' generalizability in unknown rooms and noisy environments, we adapted the widely-used SpecAugment [30] augmentation scheme and added it to our data generation pipeline. Specifically, reverberant speech signals without noise addition were considered in the training set. Subsequently, these audio were transformed into log mel spectrograms, upon which time masking, frequency masking and time warping were applied, as illustrated in Fig. 3. Then, masked mel spectrograms were converted back into time-domain signals. Finally, the resulting 4800 speech sequences with masking effects were added to the original training set for neural network training, which is denoted as _Dataset II_.
## 4 Experimental results
In this section, we evaluate the effectiveness of the proposed attention-based method and compare it to the state-of-the-art approach in the realm of single-channel blind room volume estimation. First, the experimental design and setup of training sessions are introduced. Second, we present results that demonstrate the estimation results of the considered systems in two different tracks.
### Experimental Design
To assess the performance of our proposed approach, we compared it with the _+Phase_ model in [15] that leverages a CNN-based architecture, as well as phase-related feature sets. This CNN model consists of six convolutional layers, each followed by an average pooling layer. Following the convolutional blocks are a dropout layer and a fully connected layer mapping to the output dimension, forming a complete feedforward convolutional neural network. This method is considered as the state-to-the-art in terms of single-channel blind room volume estimation.
We evaluated our data on the base-10 logarithm of the volume, which implies that the estimation error would be related to its order of magnitude. A log-10 estimate is more appropriate than a linear one due to the significantly large range of room volumes in the test set as shown in Fig. 2. The following four metrics were considered in this evaluation: mean squared error (MSE), mean absolute error (MAE), the Pearson correlation coefficient (\(\rho\)) between predicted and target values, and MeanMult (_MM_). _MM_ is the mean absolute logarithm of the ratio between the estimated room volume \(\hat{V}_{n}\) and the ground truth \(V_{n}\):
\begin{table}
\begin{tabular}{c|c c c c} Data & \(\#\) of & \(\#\) of & Real & Simulated \\ Split & _Dataset I_ & _Dataset II_ & Rooms & Rooms \\ \hline Train & 19200 & 24000 & 34 & 18 \\ Validation & 6400 & 6400 & 21 & 12 \\ Test & 6400 & 6400 & 21 & 0 \\ \end{tabular}
\end{table}
Table 1: Summary of Data Splits for _Datasets I_ & _II_
Figure 3: Augmentation schemes applied to reverberant speech signals.
Figure 2: The histogram illustrates the distribution of RIRs across various datasets based on their labelled volumes
\[\textit{MM}=e^{\frac{1}{N}\sum_{n=1}^{N}|\ln\left(\frac{\tilde{V}_{n}}{v_{n}}\right)|} \tag{2}\]
where \(N\) is the number of samples. This metric provides an overview of the average error in ratios between estimated and target parameters.
During the model training phase, MSE was used as the loss function and the Adam optimizer from PyTorch was deployed for optimization. 80 epochs were run in each training session for all models as good convergence behaviors can already be observed for both training and validation. L2 regularization was applied to mitigate potential over-fitting. Additionally, an adaptive learning rate strategy was adopted to ensure efficient convergence during training. For comparison test purposes, we switched between _Dataset I_ and _Dataset II_, as well as whether or not to use the pretrained model from ImageNet, while maintaining same hyperparameters between the attention-based and CNN-based model to ensure uniformity in model configurations.
### Results
#### 4.2.1 Base Systems
We started with the experiment of comparing the base version of our proposed method with the CNN-based baseline system [15]. The goal of this experiment is to observe if we can achieve similar estimation performance by simply replacing CNN with a purely attention-based model. We trained both the CNN model and the proposed method (without ImageNet pretraining) on _Dataset I_ separately, feeding them with the same feature set as outlined in Section 2.1. Results of these two models are presented in Table 2.
It can be clearly seen that the proposed model outperforms the CNN model in terms of prediction accuracy, relationship with ground truth values and predictive capability. This indicates that neural networks purely based on attention are sufficient (or even more superior than CNN models) to accurately learn the relationship between indoor acoustic characteristics and room volumes, even with the low-layer network configuration and a relatively small number of training epochs.
#### 4.2.2 Enhanced Systems With Pretraining
To further investigate the impact of ImageNet pretraining on the performance of the proposed method, we introduced the "Proposed method w/ Pretrain" model. We conducted separate training sessions for the CNN model, the Proposed method, and the "Proposed method w/ Pretrain" model on _Dataset I_. Additionally, we also incorporated the SpecAugment data augmentation method into the three different models. Specifically, all three models were retrained on _Dataset II_ to investigate its impact on model performance. The results of the above experiments are listed in Table 3.
With _Dataset I_, the deployment of the ImageNet pretraining elevated the proposed method's performance to a new level, yielding a significantly improved room volume estimation accuracy. With the application of the SpecAugment method, _Dataset II_ facilitated to further enhance system performance for all three models. Particularly, the augmentation effect was more prominent in the proposed method w/ Pretrain, confirming the effectiveness of SpecAugment in terms of alleviating overfitting and enhancing models' generalizability. As a more illustrative example, the best-performing system, i.e. "Proposed method w/ Pretrain" model with _Dataset II_, resulted in a median and mean absolute error of 223 \(m^{3}\) and 1501 \(m^{3}\) in linear scale respectively, given that the range of test set room volumes was [12, 21000] \(m^{3}\). In contrast, the median and mean absolute error of the CNN-based system with _Dataset II_ was 524 \(m^{3}\) and 2400 \(m^{3}\), respectively.
Fig. 4 demonstrates the confusion matrices for these two systems, with the x-axis and y-axis representing log-10 indices for volume sizes. It can be clearly seen that the "Proposed method w/Pretrain" model is consistently well-distributed around the ground truth across the tested range while the CNN-based method diverges. This indicates that our proposed attention-based model captures the representation of the room volume regression problem through the effective training process and more importantly generalizes the learned patterns to unseen real-world rooms.
## 5 Conclusion and Future Work
In this study, we aim to explore the feasibility of applying a Transformer-based model in the blind room volume estimation task and to benchmark its performance using different training strategies. Experimental results based on unseen real-world rooms with realistic noise settings confirm that the proposed method exhibits more superior performance compared to traditional CNN-based methods, indicating that a neural network purely based on attention is sufficient to obtain high performance in audio-related regression problems. Future work will investigate the flexibility and robustness of the proposed system in terms of variable-length audio inputs.
\begin{table}
\begin{tabular}{c|c c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{_Dataset I_} & \multicolumn{5}{c}{_Dataset II_} \\ \cline{2-9} & MSE & MAE & \(\rho\) & _MM_ & MSE & MAE & \(\rho\) & _MM_ \\ \hline CNN [15] & 0.4827 & 0.5545 & 0.5942 & 3.5874 & 0.4657 & 0.5393 & 0.6157 & 3.4683 \\ \hline Proposed method & 0.3917 & 0.4669 & 0.7425 & 2.9302 & 0.3270 & 0.4124 & 0.7500 & 2.5846 \\ \hline
**Proposed method** & 0.2465 & 0.3622 & 0.8364 & 2.3027 & **0.1892** & **0.2965** & **0.8800** & **1.9792** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of different models with and without the application of SpecAugment.
Figure 4: Confusion matrices for the CNN model and the “Proposed method w/Pretrain” model trained on _Dataset II_. The dashed red line indicates a perfect prediction.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & MSE & MAE & \(\rho\) & _MM_ \\ \hline CNN [15] & 0.4827 & 0.5545 & 0.5942 & 3.5874 \\
**Proposed method** & **0.3917** & **0.4669** & **0.7425** & **2.9302** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between the CNN-based system [15] and the base version of the proposed method. |
2309.03717 | Parallel processing of radio signals and detector arrays in CORSIKA 8 | This contribution describes some recent advances in the parallelization of
the generation and processing of radio signals emitted by particle showers in
CORSIKA 8. CORSIKA 8 is a Monte Carlo simulation framework for modeling
ultra-high energy particle cascades in astroparticle physics. The aspects
associated with the generation and processing of radio signals in antennas
arrays are reviewed, focusing on the key design opportunities and constraints
for deployment of multiple threads on such calculations. The audience is also
introduced to Gyges, a lightweight, header-only and flexible multithread
self-adaptive scheduler written compliant with C++17 and C++20, which is used
to distribute and manage the worker computer threads during the parallel
calculations. Finally, performance and scalability measurements are provided
and the integration into CORSIKA 8 is commented. | A. Augusto Alves Jr, Nikolaos Karastathis, Tim Huege | 2023-09-07T13:49:48Z | http://arxiv.org/abs/2309.03717v1 | # Parallel processing of radio signals and detector arrays in CORSIKA 8
###### Abstract:
This contribution describes some recent advances in the parallelization of the generation and processing of radio signals emitted by particle showers in CORSIKA 8. CORSIKA 8 is a Monte Carlo simulation framework for modeling ultra-high energy particle cascades in astroparticle physics. The aspects associated with the generation and processing of radio signals in antennas arrays are reviewed, focusing on the key design opportunities and constraints for deployment of multiple threads on such calculations. The audience is also introduced to Gyges, a lightweight, header-only and flexible multithread self-adaptive scheduler written compliant with C++17 and C++20, which is used to distribute and manage the worker computer threads during the parallel calculations. Finally, performance and scalability measurements are provided and the integration into CORSIKA 8 is commented.
Introduction
Over the past couple of decades of research on extensive particle showers, radio detection has become a technique competitive with standard particle and fluorescence driven measurements. Due to the complexity of extensive particle showers, in air and other media, detailed particle-level simulations of the radio emissions are often needed to analyze experimental data and reconstruct the properties of the primary particles.
In this context, the two standard software tools used for radio emission simulations are CoREAS [1] as implemented in CORSIKA 7 and ZHAireS [2]. These tools implement two different formalisms for calculating the radio emission from the particle tracks in the extensive particle shower, namely the "Endpoint" [3, 4] and the "ZHS" [5] formalisms, respectively. Both algorithms have been recently implemented on CORSIKA 8 [6], which is a modern C++17 compiliant Monte Carlo simulation framework for modeling ultra-high energy particle cascades in astroparticle physics.
Additionally, proposed next-generation experiments with growing array size and channel-count pose significant challenges regarding the computational cost for calculating radio emissions, especially for ultra high-energy showers and signals propagating in media with varying properties. In order to mitigate such impacts, the radio emission module for the CORSIKA 8 (C8) framework [7] has been reimplemented in multithread friendly fashion. This contribution discusses these developments and is organized as the following. section 2 gives an overview of the radio module of CORSIKA 8. section 3, Gyges, a C++17/20 library for distribution and management of tasks on multithread systems, is presented. section 4 the parallelization strategy used of the radio module calculations is covered, including the corresponding updates on the interfaces and codes implementing the algorithms. Finally, section 5 presents the performance gains in function of the number of threads for both formalisms, measured for array detectors with different sizes. section 6, the conclusions and perspectives are drawn.
## 2 Overview of the radio module in CORSIKA 8
The top-level architecture of the radio process module is shown in Figure 1. All components in the module can be independently configured and combined with either the CORSIKA 8 built-in interface or custom C++ code, making possible construct multiple radio process instances for different scenarios. The components of the modules have been extensively presented in [7, 8]. In this contribution, the flow of the radio calculation is discussed and how performance is enhanced using multithreading.
Once the radio process has received a particle track, the track is being checked according to the _track filter_ in order to be determined if this track is relevant for the radio calculation or not. The track is then pushed forward to the _formalism_ and the track needs to be looped over all antennas existing in the antenna collection. A significant portion of the calculation happens after this step, which needs to be repeated for every antenna available and for every single particle track provided. This is precisely the part of the code we wish to accelerate with this work. Inside the loop, the particle track is fed to the propagator, which calculates the valid emission paths from the particle to the antenna. Hence, all the necessary information to calculate the electric field vector (or vector potential) is present now, and finally this information is processed and stored in the _antenna_ instance.
The load of this calculation is directly affected by the underlying complexity of the _propagator_ used. Naturally, the larger the number of antennas in the detector, the higher the runtime of the radio simulation will be. By assigning different bunches of antennas to available threads, we expect to observe a significant performance boost.
## 3 Gyges
Gyges is a lightweight C++17, or higher, header-only library to manage thread pooling, which has been developed in the context of the ongoing effort to paralellize the CORSIKA 8 framework. By deploying Gyges, the computational costs associated to creating and destroying a thread-pool, a gyges :: gang in the library's jargon, can be paid just once in the program lifetime, with threads of the pool picking-up tasks as they become available. If there are no tasks, the threads just go sleeping. Additionally, tasks can be submitted from multiple threads, with the submitter getting a std :: future object to monitor the task in-place. On the task implementation side, developers get access to a std :: stop_token that can be used to interrupt the task execution, if a request to do so arrives from gyges :: gang via the gyges :: gang :: stop ().
As default behavior, once a gyges :: gang is created, it will promptly pick up and process any submitted task. This behavior can be changed, putting the gyges :: gang in a "hold-on" state. In that case, the processing of the tasks will be postponed until it is put back on "unhold" status, while the threads will be put to sleep until the gyges :: gang :: unhold() command is sent. Among other features, Gyges provides two implementations of the gyges :: for_each algorithm, with one of than able to use an already existing gyges :: gang object.
Figure 1: A schematic diagram of the radio process currently implemented in CORSIKA 8 and how it integrates with the CORSIKA 8 framework
```
classgang { //constructtakingthe gang(unsignedintconstthread_count= std::thread::hardware_concurrency(), boolrelease=true); gang(gangconst&other)=delete; gang(gang&other)=delete; //submitataskimplementingvoidoperator(void) template<typenameFunctionType> inlinesstd::future<void>submit_task(FunctionTypef); //notifytherunningtasks(requeststop), inlinevoidstop(void); //putthegangon'hold'status inlinevoidhold(void); //revertthegangto'processing'status inlinevoidunhold(void); //checksthegangstatus inlineboolon_hold(void); //getthegangsize inlinesstd::size_tsize(void); }; //for_eachacceptingapre-createdgang template<typenameIterator,typenamePredicate> voidfor_each(Iteratorbegin,Iteratorend, Predicateconst&functor,gang&pool); //for_each template<typenameIterator,typenamePredicate> voidfor_each(Iteratorbegin,Iteratorend, Predicateconst&functor);
```
Listing 1: Interface of gypes::gang and gypes::for_each implementations.
Gyges is licensed under GPL version 3 and is currently in a stable release state. The code is available at [https://gitlab.iap.kit.edu/AAALvesJr/Gyges](https://gitlab.iap.kit.edu/AAALvesJr/Gyges).
## 4 Radio module parallelization strategy
The radio module calculates the signal corresponding to each particle, and the tracks that describe its trajectory, for each antenna of the array detector, often running as one of the final
operations in the particle simulation process sequence. In order to parallelize the radio module, the calculation of the signal over the array detector is processed using a gyes :: gang containing a specifiable number of threads, in a such way that, for each particle and its tracks, the response of the antennas and the storing of information is calculated in parallel.
Since the signal processing corresponding to a single antenna is not intensive enough to occupy efficiently a thread, each submitted task computes the response corresponding to a bunch of antennas. As it will be detailed in section 5, the number of antennas in this bunch in comparison to the Gyges gang size is a critical parameter for the overall efficiency of the radio module.
This logic is implemented with the introduction of a couple of classes, one per formalism, to encapsulate the pulse calculated for each antenna in a callable object abstracting away the implementation details of CoREAS and ZHAireS. This object is called runner, and it is the one to be distributed, together with the antenna collection that describes the array detector, to the worker threads managed by the gyes :: gang instance, which is being held by the corsika :: RadioProcess and has the same life-time of it. These developments are complemented by changes in the user interfaces easing to instate and to deploy the radio module. These changes are summarized in Listing 2.
CORSIKA 8-wise, the expected overall speed-up depends hugely on the detector size, i.e. number of antennas in the detector. For large array detectors, or computing intensive propagators, the importance of radio module operations grows, tending to dominate the particle simulation sequence. In such situations, the speed-up is larger.
## 5 Performance measurements and validation
The raw performance gains from parallelization of the radio module calculations over the antennas of the array detector have been assessed measuring the time spent, and the corresponding speed-up, to process the electromagnetic pulse from a single particle as a function of the array detector size and number of threads. Array detectors with different sizes have been tested against gyes :: gang with up to 48 worker threads. The results are summarized in Figure 2, Figure 3 and Figure 4
Figure 2 shows that for array detectors with 200 antennas, the speed-up peaks between 10 and 15 worker threads, beyond which the performance decreases due to computing tasks not being able
Figure 2: Improved interface of radio module.
Figure 4: Performance to process a single particle as a function of number of threads for an array detector containing 10,000 antennas.
Figure 3: Performance to process a single particle as a function of number of threads for an array detector containing 1000 antennas.
Figure 2: Performance to process a single particle as a function of number of threads for an array detector containing 200 antennas.
to occupy the CPU enough to hide the latency and costs associated to management of multiple threads. As it is shown in Figure 3 and Figure 4, by increasing the number of antennas, the speed-up scales mostly as predicted by Amdahl's law. Similar results would be achieved, albeit leading to performance peaking at different number of threads, when deploying propagators performing heavier calculations.
The overall impact of the parallelization of the radio module on CORSIKA 8 has been measured running a full electromagnetic shower simulation. In that scenario, due to the Gyges design, the overhead for creating, managing and submitting tasks to the thread pool is negligible in comparison to the other initialization routines called up-front in the full shower simulation. The radio module is currently the only component of the CORSIKA 8 sequence capable of performing its tasks in parallel, meaning that the maximum speed-up is limited by the amount of code running sequentially, in accordance with Amdahl's law. The total time to run the full shower is limited below by not deploying the radio module at all, and above by running this module in a single thread, that is sequentially. Figure 5 summarizes the results and confirms the profiles performed for measuring the single particle performance. In the same figure, we show for reference the runtime of the same electron induced shower with the radio emission calculation turned off.
Finally, the numerical consistence of the predictions for each algorithm has been checked for different numbers of threads. Figure 6 and Figure 7 show that there is no measurable impact of the parallelism in numerical results provided by each algorithm. The signal pulses simulated with both formalisms are identical regardless the number of threads, which confirms that the physics calculations are done consistently and accurately.
Figure 5: The parallelized radio module running on an electron induced air shower processing a detector array of 160 antennas. 2 formalism, namely CoREAS and ZHS are activated and use 160 antennas each. The performance peaks at 10 worker threads, beyond which the performance degradates.
## 6 Conclusions
The status of the effort to parallelize the calculations of the radio module implemented in CORSIKA 8 has been summarized. The implementation of the multithread dispatching mechanisms and management, which is based in Gyges, is compliant with C++17 or higher standard and allows specifying the number of worker threads without impacting any numerical result. The optimal number of threads, in which the performance peaks, depends on of the size of the antenna array. Under favorable, the performance gains are significant, with speeding-up reaching a factor 10 or superior. The code is currently under final internal review and should be integrated into CORSIKA 8 main branch in near future.
|
2306.01785 | Beyond Rankings: Exploring the Impact of SERP Features on Organic
Click-through Rates | Search Engine Result Pages (SERPs) serve as the digital gateways to the vast
expanse of the internet. Past decades have witnessed a surge in research
primarily centered on the influence of website ranking on these pages, to
determine the click-through rate (CTR). However, during this period, the
landscape of SERPs has undergone a dramatic evolution: SERP features,
encompassing elements such as knowledge panels, media galleries, FAQs, and
more, have emerged as an increasingly prominent facet of these result pages.
Our study examines the crucial role of these features, revealing them to be not
merely aesthetic components, but strongly influence CTR and the associated
behavior of internet users. We demonstrate how these features can significantly
modulate web traffic, either amplifying or attenuating it. We dissect these
intricate interaction effects leveraging a unique dataset of 67,000 keywords
and their respective Google SERPs, spanning over 40 distinct US-based
e-commerce domains, generating over 6 million clicks from 24 million views.
This cross-website dataset, unprecedented in its scope, enables us to assess
the impact of 24 different SERP features on organic CTR. Through an ablation
study modeling CTR, we illustrate the incremental predictive power these
features hold. | Erik Fubel, Niclas Michael Groll, Patrick Gundlach, Qiwei Han, Maximilian Kaiser | 2023-05-31T12:01:02Z | http://arxiv.org/abs/2306.01785v1 | # Beyond Rankings: Exploring the Impact of SERP Features on Organic Click-through Rates
###### Abstract
Search Engine Result Pages (SERPs) serve as the digital gateways to the vast expanse of the internet. Past decades have witnessed a surge in research primarily centered on the influence of website ranking on these pages, to determine the click-through rate (CTR). However, during this period, the landscape of SERPs has undergone a dramatic evolution: SERP features, encompassing elements such as knowledge panels, media galleries, FAQs, and more, have emerged as an increasingly prominent facet of these result pages. Our study examines the crucial role of these features, revealing them to be not merely aesthetic components, but strongly influence CTR and the associated behavior of internet users. We demonstrate how these features can significantly modulate web traffic, either amplifying or attenuating it. We dissect these intricate interaction effects leveraging a unique dataset of 67,000 keywords and their respective Google SERPs, spanning over 40 distinct US-based e-commerce domains, generating over 6 million clicks from 24 million views. This cross-website dataset, unprecedented in its scope, enables us to assess the impact of 24 different SERP features on organic CTR. Through an ablation study modeling CTR, we illustrate the incremental predictive power these features hold.
SERP features, Click-through rates prediction, Organic search
## I Introduction
Search Engine Optimization (SEO) is a strategic process used in online marketing with the aim of enhancing a website's visibility in search engine results pages (SERPs). The objective is to attract the highest possible volume of organic traffic to a website, which is a cornerstone strategy for long-standing online marketing [1]. In the realm of e-commerce, it's estimated that 33% of total web traffic originates from organic search results [2]. Organic traffic encompasses those instances where a user inputs a keyword in a search engine and lands on a website by clicking on one of the non-ad results. Given the high relevance of organic traffic, it is crucial for website providers to optimize their strategies to maximize the likelihood of generating clicks through search engines. Generally, the clicks to a website on a search result page for a specific keyword can be formulated as follows:
\[Clicks=Impressions*CTR \tag{1}\]
where CTR refers to the click-through rate of a result [3]. To enhance the CTR, SEO efforts usually center on a website's rank on a result page, as top-ranked websites command more user attention and consequently receive more clicks [4]. To secure a high rank on a result page, a website needs to demonstrate relevance for the particular keyword. Achieving this typically involves aligning a website's content and metadata with the keyword, a focus for many SEO practitioners [5, 6, 7, 8, 9, 10].
Although introduced in the early 2000s, _SERP features_ -- a visually prominent component of
search results--have been heavily utilized by Google since the mid-2010s [11]. These SERP features are elements that encapsulate information from organic search results with the intent of making Google's result pages more engaging [12]. They can take various forms, such as providing an instant answer to a user's question, displaying a knowledge panel on the right of the result page, or including an image alongside a result [13]. Figure 1 provides an illustration of SERP features.
Introducing these visual elements can significantly influence users' clicking decisions, thus impacting CTR and the revenue funneling into e-commerce websites. Some argue that by integrating additional features into its result pages and leveraging information originally published on third-party websites, Google reduces users' need to leave its ecosystem [15]. This could potentially disadvantage the publishers of the original information who rely on website traffic. In contrast, Google asserts that inclusion in SERP features can significantly boost CTR, visits, and time spent on a website [12]. Despite the increasing prevalence of SERP features, their precise impact on CTR remains poorly understood. This gap in our knowledge raises an important research question: **To what extent do SERP features influence CTR, and how does their presence affect the importance of a website's ranking position?**
In this study, we broaden the existing analysis of the influence of a website's ranking position, which has been identified as the main direct influence on CTR, to include the characteristics of SERP features. This inclusion of SERP feature characteristics presents a novel aspect of CTR research. Furthermore, we compare the relative importance of SERP features with other result page characteristics.
This paper is structured as follows: Section II offers a comprehensive review of related works on the influences on CTR, including research on SERP features and CTR prediction models. Section III and IV provide detailed descriptions of the dataset used in this study and the methodology applied, respectively. Section V presents an exploratory data analysis of the interactions between SERP features, CTR, and other features. In Section VI, we investigate the importance of SERP features by examining their predictive power. Finally, Section VIII provides a discussion of the findings and concludes the paper.
## II Related Works
### _Determinants of CTR_
Search engine optimization (SEO) is one of the most widely adopted online advertising strategies in the present day, and it plays a crucial role in driving organic website traffic [10, 16]. A significant body of research in the field of SEO marketing has identified the ranking position of a website on a search engine results page (SERP) as the primary determinant of click-through rates (CTR), making it a central focus of SEO efforts [5, 6, 7, 8, 9, 10]. SEO strategies aim to elevate the rank of a website by meticulously tailoring its content to align with specific keywords. This might involve adding new,
Fig. 1: Exemplification of SERP features on a Google result page adapted from [14], between organic results (1) and (3), such as answer box (2), video results (4), knowledge panel (5), among others.
high-quality blog posts to improve the relevance of the website to search engines [9]. Other factors such as keyword characteristics or result characteristics have been studied predominantly for their impact on the position of a website in search results, and less so for their direct influence on CTR [17]. For instance, much of the literature on keywords aims to identify strong keywords and trends and recommends optimal keyword characteristics, content adaptation, linkage optimization, or structure adjustment to achieve higher rankings on search engine result pages [18, 19, 20, 21]. While the position of a website has been widely recognized as the primary driver of CTR, it is equally important to consider additional characteristics such as result, keyword, and SERP features that directly influence CTR.
### _SERP Features_
Although Google introduced the first SERP feature as early as 2002, it was not until the mid-2010s that their usage intensified [13]. Today, SERP features are among the most conspicuous and widely used elements on search engine result pages. Oliveira and Texeira Lopes provide a comprehensive overview of the evolution of SERP features on Google Search and Microsoft Bing, noting that these features have become more common and diverse, aggregating content from different verticals and providing more features that give direct answers [13]. However, critics argue that Google's addition of more SERP features and use of information originally published on third-party websites has reduced the need for users to leave Google's ecosystem, thereby disadvantaging the publishers of the original information who depend on traffic to their websites. According to a Semrush study, zero-click searches, where a user does not leave Google's ecosystem, account for 25.6% of all searches [15].
Despite the long-standing existence of SERP features and their visual dominance on search engine result pages, their influence on user behavior has only been sporadically studied. For instance, one study analyzes the positive effect on CTR when a website appears in the featured snippet, but this analysis relies on information found on blogs from influential SEO companies [22]. Numerous influential marketing blogs tout the benefits of appearing in SERP features, with many suggesting that SERP features can enhance the visibility of websites featured in them and reduce the CTR of websites that appear alongside SERP features without being featured [14, 23, 24]. Google advertises significant improvements in click-through rate, visits, and time spent on websites when websites are chosen to be shown in SERP features [12]. Despite these claims, the actual effect of SERP features on CTR remains largely unexplored and under-theorized. This gap in the literature indicates that there is significant potential for this work to illuminate the opaque role of SERP features in Google's search engine ecosystem and provide the first empirical research on the SERP features' impact on website performance [24].
### _CTR Prediction Models_
Generally, CTR prediction research has largely been concerned with binary classification problems, asking questions such as "Will user X click on item Y?" The items under consideration are typically ads on a search engine result page, but they can also be articles in an online shop [25]. This approach differs fundamentally from the problem formulation in this work, which treats CTR prediction as a regression problem, predicting a continuous value based on aggregated data, as opposed to making a classification for a single observation. This difference is also reflected in the nature of the datasets and models used for classification. Datasets consisting of individual users and items are often highly sparse and high-dimensional after one-hot encoding [25, 26]. As a result, the research and the resulting models have been developed to handle this high dimensionality and sparsity [26].
The field of CTR prediction has seen the development of a range of models, including multivariate statistical models [27, 28], factorization machines and field-aware factorization machines [29, 30, 31], and combinations of deep learning models and factorization machines [32, 33, 34, 35, 36, 37]. Yang provides a detailed overview of CTR prediction in the context of user-item interactions [25]. However, these models, which were developed for classification on
sparse ad-click datasets, are not suitable candidates for predicting aggregated organic CTR (see section IV-A). While recent research has focused on predicting user-item interactions, to our knowledge, only one work has proposed a model for predicting aggregated CTR [38]. However, this work is concerned with the prediction of ad CTR rather than organic results.
To summarize, although a substantial body of literature exists on CTR prediction, these works differ from the problem at hand in terms of the nature of prediction (individual vs. aggregated) and the type of clicks (organic results vs. ads). This suggests that the models proposed in these works may not be directly well-suited to the prediction problem in our study, especially in light of understanding the importance of SERP features.
## III Dataset Description
### _Data Acquisition and Preprocessing_
The primary dataset employed for this study was provided by Grips, a German startup that endeavors to generate a comprehensive map of online commerce for retailers and brands [39]. The dataset is an aggregation of historical Google search data, specifically gathered from US-based desktop searches during the period spanning May 31st, 2022, to August 18th, 2022. With more than 24 million views and over 6 million user clicks on search engine result pages, the dataset presents an extensive source of information. The data has been collected across approximately 67,000 distinct search terms, commonly referred to as keywords, and 43 different e-commerce stores representing a broad array of industries.
Each data point in the dataset provides insight about a specific URL and keyword, encapsulating metrics related to the performance of the URL for the corresponding keyword. To illustrate, the data could reflect metrics pertinent to the performance of the URL 'www.amazon.coom/' for the searched keyword 'amazon'. Thus, the data is presented in a tabular, heterogeneous format.
The primary dataset was further enhanced with additional features derived from a leading search engine marketing company and Google's Keyword Planner. These features encompass a diverse range of aspects, from the searched keyword and metadata about the result page to information specific to the result and the displayed SERP features. The dataset also includes the click-through rate for each result. In its raw form, the dataset comprises 70 features, among which is information about the presence of 24 distinct SERP features (see the source of each feature in appendix A). The dataset also provides granular data about which specific result was included in which SERP feature. As such, given its extensive set of result page features, which encompass real-world, non-publicly accessible data for numerous e-commerce stores, this dataset represents a unique and innovative resource for research.
In order to prepare the data for subsequent analysis and modeling, the guidelines for data preprocessing as outlined by Google adhered to [40, 41]. To reduce the potential for noise in the data due to the observations of result pages that received minimal user exposure, an impression threshold of 20 impressions per result was imposed. As a result, 58,898 data entries remain for analysis.
### _Feature Categorization and Engineering_
To facilitate a comprehensive comparison of the impact of SERP features on CTR, in relation to other characteristics of the result page, the variables within the dataset were divided into three distinct categories or subsets: _Position_, _Keyword_, and _SERP features_. Moreover, self-generated features were incorporated, where relevant, to augment these subsets. These engineered feature subsets are employed in both the overall data analysis and the modeling process. A brief overview of these subsets is provided below. For a comprehensive list of features within each subset, please see appendix B, and for a detailed description of all features, please refer to the data dictionary in appendix A.
* **Position**: This subset includes features that are explicitly related to the ranked position of the result. These features encompass the current result position at the time of measurement, the monthly average position, and the positional difference from the previous measurement. While the position subset is treated as
a distinct subset in the subsequent analysis, it is crucial to acknowledge that the position feature plays a pivotal role in addressing the problem at hand. As emphasized by the literature review and the findings of this study, the position is the most significant predictor of CTR. Therefore, the analysis will be conducted using an ablation study approach, using the position subset as a baseline and incorporating the additional subsets as components to assess their relative importance beyond that of position.
* **Keywords**: This subset comprises the keyword itself and all directly resulting elements from the keyword a user inputs into the search engine. This includes details about the keyword, such as the length of the keyword, information indirectly related to that keyword like the level of competition or the search volume, and finally, the search intent. To provide a numerical representation of the complexity of each keyword, the Flesch reading ease score was computed [42]. Although more recent readability measures have been developed, Flesch's reading ease score was chosen for its widespread acceptance and straightforward interpretability. This score is calculated based on the average number of words per sentence and the average number of syllables per word. A high score signifies good readability, typically characterized by short sentences and words.
* **SERP features**: The SERP features in the dataset are binary features that denote for each entry whether a specific SERP feature is present (1) or absent (0). These features can be further divided into page SERP features and positional SERP features. Page SERP features indicate whether a certain feature is present anywhere on the page. In contrast, positional SERP features specify whether a result is included in a particular SERP feature. This can manifest in various forms. For instance, a Review could be displayed beneath a URL, or a Snippet from the URL could be featured in the Knowledge Panel. Consequently, if a positional SERP feature is present, the corresponding page SERP feature must also be present, although the converse is not necessarily true. Page SERP features also include details about the type of ads displayed on a result page. We have consolidated this information into a binary feature that describes whether ads are shown. To enhance the interpretability of the dataset, it was further enriched with the total count of SERP features for both page and positional features.
## IV Methodology
### _Examined Models_
This research treats the task of predicting Click-Through Rates (CTR) as a regression problem. Based on a review of relevant literature and the characteristics of the dataset at hand, three primary categories of off-the-shelf regression models were chosen for examination: linear regression models and tree-based models, and neural network models.
Linear regression models are favored for their computational efficiency and interpretability. Ordinary least squares regression (OLS) was chosen as a baseline model, given its statistical robustness and popularity in the field. To account for potential feature interactions, an adapted version of the OLS model was utilized, incorporating interactions between two features as the product of their values (Poly2). However, to control the exponential growth in the number of features, only interactions between two features with a Pearson correlation coefficient greater than 0.05 in the training dataset were included in the model. A Ridge regression model was introduced as a third variant to address potential overfitting. The Ridge model employs L2-Norm for implicit feature selection.
Tree-based models were the second category selected, specifically, ensemble models based on decision trees. These models are advantageous as they do not require specific data distributions and perform swiftly without preprocessing while being capable of modeling feature interactions. The top three performing tree-based models for tabular data from a recent comparative benchmark--Random Forest, Gradient Boosting Decision Trees (GBDT), and XGBoost--were tested, along with CatBoost,
another tree-based model that has shown strong performance in recent studies.
The third category included neural network regression models. While Deep Neural Networks (DNN) have shown outstanding performance with homogeneous data types like images, audio, and text, they have been less successful with heterogeneous, tabular data [43] in comparison to tree-based models. However, other study reports that DNNs can match or even surpass the performance of traditional machine learning models on tabular data with over 10 million samples [44]. Given that some of the best-performing models in recent CTR prediction research incorporate neural networks, this study also tests TabNet [45], a neural network structure specifically designed for tabular data, as well as Wide&Deep [33] and DeepFM [34], which are recommended by recent CTR prediction research.
### _Subset Combinations_
To assess the predictive power of each subset, combinations of subsets were tested using the selected models. Drawing from both the literature review in section II, the position subset, highlighted in the literature review and our analysis as the most significant feature, serves as the baseline for an ablation study. When testing a model on a subset combination, all features from the combined subsets are utilized for prediction. The position subset is consistently included, with the other subsets added incrementally.
Initially, the position subset is individually combined with each of the other subsets to evaluate the additional predictive power contributed by each subset. Subsequently, the predictive power of all three subsets combined--position, keyword, and SERP features--is examined to understand their collective impact. Consequently, four subset combinations are tested on each model presented in IV-A. To maximize the potential of each model, hyperparameter tuning was conducted where reasonable. Please find the full documentation on tested parameter ranges in appendix C and the resulting parameters in appendix D.
### _Evaluation Metric_
A common evaluation metric, the Root Mean Squared Error (RMSE), is employed for a comparative analysis of the different feature subsets across models, as defined in the equation:
\[RMSE=\sqrt{\frac{1}{N}\Sigma_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}} \tag{2}\]
This metric effectively penalizes larger errors, which aligns with the objectives of the present business case where substantial errors can lead to significant misallocations of SEO resources. Moreover, the RMSE serves as a convenient optimization metric across all model categories, offering advantages over alternative metrics like Mean Absolute Error (MAE).
### _Feature Importance Evaluation_
To evaluate the significance of SERP features in predicting CTR and to compare them with position and keyword characteristics, this research employs model interpretation techniques. These techniques assign importance values to features in a machine learning model. A high importance value indicates a greater impact on the overall prediction compared to a feature with a lower importance value. By grouping features into subsets as described earlier, we can examine the collective importance of SERP features in relation to position and keyword features. The XGBoost model is utilized for assessing the feature importance, as it has shown superior performance.
Three techniques were employed: SHAP, permutation importance, and average gain in XGBoost splits. These were chosen for two reasons: their ability to compare the global relevance of features for the prediction and their effectiveness in providing a more holistic conclusion when used together. They also serve as a way to cross-check the results.
SHAP importances are based on Shapley explanations, a game-theoretic approach that assigns individual features a share of an aggregated outcome [46]. It measures the average contribution of a feature to the prediction across all observations [47, 48]. Permutation importance is calculated by randomly shuffling each feature and measuring the decrease in a model's performance caused by
the permutation [49]. Features that cause a large decrease in performance when shuffled are ones that the model heavily relies on to make predictions [48]. The average gain in XGBoost's tree splits is calculated based on the intrinsic structure of the XGBoost model by averaging the improvement of the loss function in all splits in a feature is used [50]. According to the gain metric, features with a larger average contribution to reducing the loss function are more relevant for the overall prediction.
## V Exploratory Data Analysis (EDA)
### _The Broad Impact of SERP Features_
The first stage of this study's exploration involves a high-level analysis of the influence of SERP features on CTR. A notable trend is observed when the count of positional SERP features, those attached to a single result, increases. This trend shows a positive correlation with CTR (\(\rho=0.22\)). However, an increase in the number of page SERP features appearing on the entire results page doesn't exhibit a similarly clear trend (\(\rho=-0.11\)), suggesting a potential negative influence on CTR. This overall negative trend, albeit slight, can be better understood by differentiating the unique SERP features, thereby illuminating the individual drivers of this influence illustrated in Figure 2).
The data suggest that the presence of most SERP features correlates negatively with CTR. This implies that businesses aiming to optimize their search engine performance should potentially focus on keywords associated with fewer SERP features. However, the dataset indicates that SERP features are almost ubiquitous, with 99.8% of the result pages showcasing at least one SERP feature and the majority displaying between four and six features. Given this prevalence, it becomes essential for website providers to comprehend the specific effects and dynamics of individual SERP features, especially those that could enhance their CTR. The impact of appearing in certain SERP features will be further examined in section V-C.
### _Variations in Effects Across Positions_
The general analysis in section V-A revealed a negative overall influence of the presence of SERP features on CTR. However, when incorporating the position of a result as an additional variable, intriguing trends emerge. For some SERP features, the average CTR per position remains relatively unaffected by the presence of the SERP feature. For others, the presence significantly influences the average CTR. Two primary patterns are observable: with the presence of a SERP feature, either the average CTR declines across all positions (pattern: 'Lower'), or the average CTR decreases for the initial three positions and then increases for subsequent positions (pattern: 'Lower \(\rightarrow\) Higher'). Figure 3 exemplifies these patterns and lists the SERP features to which they apply.
The reasons for the emergence of these patterns for specific SERP features aren't immediately evident. Nonetheless, it's worth noting that certain features have a considerable correlation with user intents. For instance, the presence of Image SERP features is strongly correlated with transactional search intent (\(\rho=0.29\)). Therefore, while interpreting these results, it is crucial to bear in mind that the observed effects might be influenced by other feature categories, such as the search intents.
While it is a well-established notion that a high-ranking position is vital for maximizing CTR, the detailed investigation into the impact of SERP features prompts the question of whether the importance of top-ranking positions varies in different scenarios. As discussed above, the effects of individual SERP features have been isolated for analysis. However, in real-world situations, SERP features often appear concurrently, which likely leads to inter-feature influence. To account for this, we examine the importance of positions for each combination of page SERP features. For a more comprehensive comparison, the CTR is normalized to the average value at the first position for each combination that appears more than 200 times. Subsequently, the rate of decay for each subsequent position is calculated. The analysis reveals that patterns identified for individual SERP features persist across combinations of them. Among the top three combinations by count, two show a faster decay in CTR, while the other decays much slower compared to the entire dataset. For instance, the presence of
three different SERP features can double the CTR at the third position, as demonstrated by the contrast between the green and blue combinations in Figure 4.
### _Differential Impact of SERP Features on Included and Excluded Results_
Sections V-A and V-B established that SERP features significantly affect consumer click behavior on search result pages. Initially, the investigation of SERP features was conducted at the page level, evaluating their overarching impact on all results on a page. However, it is critical to acknowledge that SERP features are not merely passive elements on a page; they actively link to some of the websites listed on the results page, thereby directly contributing to the CTR of those specific results.
To gain a comprehensive understanding of SERP features' impact, it becomes imperative to differentiate between their influence on results they link and those they do not. An illustrative example can be derived from analyzing images shown next to a result. We need to differentiate between (i) the effect the image has on the result it is linked with and (ii) the effect on a result that does not have an accompanying image, while others do. This differentiation warrants the introduction of a new set of feature categories, encapsulating whether a result is displayed within a SERP feature (termed as "Result in feature") or not included within the SERP feature ("Result not in feature"). For
Fig. 4: Loss in CTR per position. Values are the average CTR loss for each position in the percentage of average CTR on position 1. Only combinations with more than 200 occurrences are considered (n=59). Dashed lines represent the top three most frequent SERP feature combinations. Shaded areas represent the range in which 50% and 90% of all values fall.
Fig. 3: Mean CTR per position if a SERP feature is present (1) compared to when it is not present (0). The left plot shows the pattern ‘Lower’ and the right plot pattern ‘Lower \(\rightarrow\) Higher’. SERP features without a clear pattern or \(n<1000\) are not listed.
Fig. 2: Correlation of SERP features with CTR when they are generally present on a result page. The color represents the strength of the effect: dark red refers to a strong negative correlation while dark green refers to a strong positive correlation.
example, the general trend observed in Figure 2, which indicates a negative correlation, encapsulates the effect of SERP features without considering whether the results are included within them or not. A more nuanced understanding can be obtained by differentiating these two possibilities.
Figure 5 underscore the divergent effects between the two categories. Not being included in most SERP features negatively affects the CTR, while being included within a feature generally increases the CTR. This dichotomy prompts an investigation into what makes clicking on links within SERP features attractive to users, and how website providers can leverage this to their advantage. One noteworthy pattern emerges from the analysis of the Image Pack feature. This feature, representing a collection of images on the results page, exhibits a negative correlation (\(\rho=-0.3\)) with CTR when a result is not included. Conversely, when a result is part of the Image Pack, the correlation with CTR is positive (\(\rho=0.25\)). The SERP feature, "People Also Ask", exhibits a similar pattern, albeit with a less pronounced magnitude.
Further analysis reveals that the presence of SERP features tends to negatively influence the CTR of the first position, regardless of whether the result is included within the SERP feature. The positive effect of "Result in feature", as seen in Figure 5, only becomes evident from the second position onward. In contrast, the effect of "Result not in feature" remains negative across all positions.
In conclusion, SERP features can have varying impacts on the CTR depending on the position of a result, regardless of whether the result is included within them. Results in the first position are particularly susceptible to decreased CTR due to increased SERP features. For results in lower positions, being featured within SERP features tends to be beneficial, while not being included in SERP features can adversely affect a result's CTR.
## VI Evaluation and Interpretation of Model Results
### _Comparative Analysis of Models_
Table I presents a comprehensive performance breakdown of each model and subset combination, forming the basis of our analysis in this section. In the evaluation of the model performance, a conspicuous trend surfaces. Tree-based models consistently outperform linear models and neural networks. In particular, GBDT and its variants, such as CatBoost and XGBoost, produce the most impressive results. To better understand the reduction in RMSE and the importance of the features, we decided to concentrate on a single model best suited to address the problem in question. This approach ensures that our interpretations are not skewed by outliers from models that perform poorly or are unsuitable. Because both CatBoost and XGBoost perform equally well, we use XGBoost for further analysis due to its widespread recognition and familiarity in the field.
### _Assessing the Explanatory Power of Feature Subsets_
By running the models on various combinations of feature subsets, we can assess the explanatory power of each subset and infer their overall relevance for CTR prediction. When evaluating the improvement in RMSE resulting from the inclusion of additional feature subsets, we focus primarily on the results from XGBoost. To further validate these comparisons, we also conduct paired t-tests on the results of the top five models on average. This allows us to determine if the observed differences in RMSE across feature subsets are statistically significant. We observe that RMSE scores improve upon the addition of other subsets to the position subset. Notably, the inclusion of the keyword subset results in a significant reduction in RMSE (\(-22.6\%\)). This subset demonstrates a greater explanatory power than the SERP features subset, which further improves the RMSE (\(-9.8\%\)). When SERP features are added to the position + keyword combination, which was previously the best, there is an additional improvement in prediction (\(-4.9\%\)). All these differences are statistically significant according to the paired t-test.
Collectively, these findings suggest that all feature subsets are valuable, as they decrease error when added and enhance the baseline model that solely relies on the position feature. Consequently, we can deduce that the additional SERP features
do have an impact on CTR and, by extension, user behavior. The larger reduction in RMSE for the keyword subset compared to the SERP feature subset indicates that they have a more pronounced effect on CTRs. However, this observation is merely indicative, as the importances of the features are not quantified from the perspective of the model. The next section provides a quantification of feature importances.
### _Assessing the Importances of Features_
To evaluate the relevance of the feature subsets defined in section IV-B, we applied three different metrics: gain, permutation importance, and SHAP importance. Table II presents the comparison of feature subset importance as per different measures, and corroborates our previous findings. The position subset consistently demonstrates the highest importance score across all metrics, followed by the keyword and SERP feature subsets. Regarding attributing feature importances, SHAP importance offers the most balanced perspective, with all subsets having an importance value between 28.9% and 36.7%. In contrast, permutation importance presents the most variance, with the 'position' subset receiving 55.9% and 'SERP features' only 16.2%.
This difference can be explained by the methodology underlying each calculation. Both permutation importance and average gain rely heavily on RMSE as an evaluation metric, which tends
Fig. 5: Correlation of SERP features with CTR differentiating between ”Result in feature” and ”Result not in feature”. For some features, not enough results were included, which is why there are more features in the ”Result not in feature” category than ”Result in feature”.
to penalize large deviations disproportionately. For instance, the 'position' feature, which has a major impact and sets a range of plausible CTR values, will be greatly affected by permutations. To illustrate, a permutation of the 'position' feature from 1 to 10 would yield a significantly different range of expectable CTRs compared to permuting the presence of a single SERP feature, which would have a much smaller impact on RMSE.
Nonetheless, the interpretation of SHAP values indicates that, on average, SERP features contribute to 28.9% of the difference between a given prediction and the average prediction, suggesting that SERP features may have a greater significance for CTR than previously assumed. These findings, which are both data and model-driven, highlight the need for further research that could delve into the behavior of individual users and explore the significance of SERP features for non-e-commerce searches.
## VII Discussion
### _Theoretical Implications_
This study contributes to the knowledge of search engine optimization (SEO) practices by comparing the influence exerted by ranked positions, keyword-related characteristics, and SERP features on CTR. Our findings not only confirm the widely accepted view that the ranking position of a search engine page result greatly influences its click-through rate (CTR) but also reveal that other SERP characteristics -- such as the presence of specific SERP features, the nature of the searched keyword -- can considerably affect CTR.
Our analysis, which encompasses a vast array of relevant SERP characteristics, is facilitated by an extensive and novel dataset that surpasses previous studies in its scope and comprehensiveness. This study also marks the first attempt to incorporate many diverse SERP features into a single analytical framework, thereby expanding upon existing research that typically only assesses the impact of individual SERP features. Furthermore, this research provides valuable insights into the effectiveness of various off-the-shelf machine learning models for predicting aggregated CTR. Notably, we demonstrate that easily applicable tree-based models can outperform state-of-the-art CTR prediction models.
### _Implications for Practitioners_
The findings from this study hold significant implications for SEO practitioners. While securing a high ranking remains a crucial optimization goal, our analysis also emphasizes the considerable potential of other SERP characteristics, particularly SERP features, in enhancing CTR.
The implications of SERP features can be distilled into three key insights. Firstly, SERP features typically reduce the click-through rate of results, leading to an increase in zero-click searches. Consequently, website providers should anticipate that the continual introduction of higher quality and a greater quantity of SERP features may lead to increased searches where users do not leave the SERP, thereby negatively impacting providers' revenues. Secondly, the presence of certain SERP features, especially in specific combinations, can significantly affect the importance of a high ranking on a results page. Some SERP features can either divert CTR away from or concentrate it towards the top positions. Therefore, efforts to secure a high ranking should be strategically directed towards keywords that, in conjunction with the present SERP features, yield high CTR. Lastly, appearing within SERP features emerges as a new potential focus for SEO efforts. Similar to securing a high ranking, having a website linked within a SERP feature can significantly enhance CTR, irrespective of the position of the actual result.
However, optimizing for SERP feature appearances presents a unique challenge. Although website developers can increase the likelihood of their content being selected by Google's crawlers through the appropriate use of metadata descriptions, there are no guarantees. This process is largely dependent on Google's discretion, and the algorithmic process determining feature selection remains largely opaque.
Looking forward, one potential game-changer in the SERP landscape is the integration of Large Language Models (LLM) like ChatGPT. These AI models have demonstrated their ability to surpass traditional search engines in terms of user experience and problem resolution. If Google were to incorporate its own LLM as a SERP feature or in another capacity within its search engine, this could have an even more profound impact than the findings of this study suggest. Therefore, it is imperative for practitioners to stay abreast of changes in SERP features.
### _Limitations_
Despite our findings, some limitations inherent to the dataset warrant mention. The data only includes US-based desktop searches. As such, we cannot conclusively state whether these findings are applicable to mobile searches or searches conducted in other countries. Although the structure of Google's search result page is largely uniform globally, user behaviors may differ across countries, and even more so across devices. Furthermore, our dataset was of medium size and did not allow for the analysis of rare combinations of result page characteristics due to a limited sample size. Lastly, our findings are based on data collected between May and August 2022. However, Google's search result pages are subject to frequent changes, including the introduction and adjustment of SERP features. As a result, our findings may have limited applicability to result pages that have undergone substantial changes post-dataset collection.
### _Future Research_
Future research could address the limitations of the current study by extending the scope of the dataset to include searches from other countries, mobile devices, and a larger time horizon. This would enable a more comprehensive time series analysis. In addition, future studies could delve deeper into the analysis of result page characteristics other than position and SERP features, such as keyword or result characteristics, which this study has revealed to be significant. Finally, given the importance of appearing in SERP features highlighted in our study, future work could explore the factors that determine the likelihood of a result appearing in these features.
## VIII Conclusion
This paper aims to provide an in-depth analysis of the influence that Search Engine Results Page (SERP) features exert on the Click Through Rates (CTR) of organic search engine results. The results of our study confirm that SERP features, on average, exert a negative influence on CTR. Nevertheless, it has been emphasized that the specific circumstances surrounding a result - such as its position, the specific SERP features shown, and whether a website is included in particular SERP features - can significantly modulate this influence. As such, it has been demonstrated that websites that are featured in specific SERP features, or those that are ranked lower in the results, can actually derive benefits from the presence of SERP features. This paper also provides a comparative analysis of the influence of SERP features against other result page characteristics. The findings underscore that SERP features have a tangible impact on CTR, thereby holding their own against other result page characteristics. Despite this, it remains clear that the most dominant determinant of CTR is the result's position.
Our work offers several major contributions. Primarily, we conducted a comprehensive, wide-reaching analysis using a dataset that is novel in its scope and detail. With this dataset, we have examined the impact of more than 20 different SERP features, thereby offering a holistic view of the landscape of SERP features. Additionally, we have compared the importance of SERP features to virtually all other relevant result page characteris
tics, thereby offering a well-rounded perspective on the factors that influence CTR.
Our findings also have strong practical applications. The insights derived from this work can be highly valuable to SEO practitioners, shedding light on the relevance of SERP features as a new dimension to consider in website optimization. The study elucidates the SERP features it is most beneficial to be featured in, and also provides insights into the specific combinations of SERP features that can either increase or decrease the importance of high ranking on a results page.
|
2309.11475 | Creating walls to avoid unwanted points in root finding and optimization | In root finding and optimization, there are many cases where there is a
closed set $A$ one likes that the sequence constructed by one's favourite
method will not converge to A (here, we do not assume extra properties on $A$
such as being convex or connected). For example, if one wants to find roots,
and one chooses initial points in the basin of attraction for 1 root $z^*$ (a
fact which one may not know before hand), then one will always end up in that
root. In this case, one would like to have a mechanism to avoid this point
$z^*$ in the next runs of one's algorithm.
Assume that one already has a method IM for optimization (and root finding)
for non-constrained optimization. We provide a simple modification IM1 of the
method to treat the situation discussed in the previous paragraph. If the
method IM has strong theoretical guarantees, then so is IM1. As applications,
we prove two theoretical applications: one concerns finding roots of a
meromorphic function in an open subset of a Riemann surface, and the other
concerns finding local minima of a function in an open subset of a Euclidean
space inside it the function has at most countably many critical points.
Along the way, we compare with main existing relevant methods in the current
literature. We provide several examples in various different settings to
illustrate the usefulness of the new approach. | Tuyen Trung Truong | 2023-09-20T17:20:41Z | http://arxiv.org/abs/2309.11475v3 | # Creating walls to avoid unwanted points in root finding and optimization
###### Abstract.
In root finding and optimization, there are many cases where there is a closed set \(A\) one likes that the sequence constructed by one's favourite method will not converge to A (here, we do not assume extra properties on \(A\) such as being convex or connected). For example, if one wants to find roots, and one chooses initial points in the basin of attraction for \(1\) root \(x^{*}\) (a fact which one may not know before hand), then one will always end up in that root. In this case, one would like to have a mechanism to avoid this point \(z^{*}\) in the next runs of one's algorithm.
In this paper, we propose two new methods aiming to achieve this. In the first method, we divide the cost function by an appropriate power of the distance function to \(A\). This idea is inspired by how one would try to find all roots of a function in \(1\) variable. In the second method, which is more suitable for constrained optimization, we redefine the value of the function to be a big constant on \(A\). We also propose, based on this, an algorithm to escape the basin of attraction of a component of positive dimension to reach another component. As an application, we prove a rigorous guarantee for finding roots of a meromorphic function of \(1\) complex variable in a given domain.
Along the way, we compare with main existing relevant methods in the current literature. We provide several examples in various different settings to illustrate the usefulness of the new approach.
Key words and phrases:Constrained optimization; Descent iterative method; Optimization; Root finding; Unwanted points
## 1. The problem, motivation and main result
Here we present the problem and motivation, a brief literature survey, a main theoretical result on finding roots of meromorphic functions in a given domain, and the plan for the remaining of the paper.
### The problem and motivation
Solving equations is an important task one usually encounters in research and applications. Some examples are: finding periodic points of a map, finding the trajectory of an object (e.g. robot) obeying a certain system of equations coming from physical laws.
Ever since the time of Abel, Ruffini and Galois, it is clear that one cannot find precise roots of a simple polynomial in \(1\) complex variable, and hence must utilise approximative methods. In this paper, we will concentrate on iterative algorithms, which are very easy to implement and use in practice.
One can treat solving a system of equations \(F(x)=0\), where \(F:\mathbf{R}^{m}\rightarrow\mathbf{R}^{k}\), as a global optimization problem by the following common trick. Define \(f(x)=||F(x)||^{2}/2:\ \mathbf{R}^{m}\rightarrow\mathbf{R}\). Then finding roots to \(F=0\) is equivalent to the following two problems (which an iterative
method may be able to solve simultaneously): 1) show that \(\min_{x\in\mathbf{R}^{m}}f(x)=0\), and 2) find global minimizers for \(f\). Even if \(F\) has no root, finding global minimizers of \(f\) still makes sense and has important applications (e.g. in the Least Square Fit problem in statistics).
Since finding global minimizer is NP-hard, research on effective numerical methods for global optimization can be helpful with.
This paper considers the following general problem:
**Problem:** Let \(X\) be a (complete) metric space, and let \(f:X\rightarrow\mathbf{R}\) a (smooth) cost function. Assume that an iterative method IM, aiming to find global minimizers of \(f\), is used, which has the property that whenever an initial point \(x_{0}\in X\), then a sequence \(x_{n+1}=IM(x_{n},f)\) is generated, with \(x_{n}\in X\) for all \(n\). Can we have a way to construct sequences which avoid the set \(A\) while still having a big chance to converge to global minima? In other words, can we use IM to solve the global optimization problem \(f\) on the non-complete set \(X\backslash A\)?
For this problem to make sense, the method IM must itself have big chance of convergence to global minimizer on the complete set \(X\). Below we list some examples for applying this problem.
**Application 1: Avoiding known roots.** Assume that one wants to find roots to a system \(F(x)=0\), with the associated cost function \(f(x)=||F(x)||^{2}/2\). Assume that in previous runs of IM one already found several roots \(z_{1}^{*},z_{2}^{*},\ldots,z_{j}^{*}\). One chooses the initial point for to run IM in a fixed domain \(B\), and one does not know whether with probability 1, sequences constructed by IM with initial point in B will always converge to one of the known roots \(z_{1}^{*},z_{2}^{*},\ldots,z_{j}^{*}\). Then one can define the close set \(A=\{z_{1}^{*},\ldots,z_{j}^{*}\}\) and apply Problem.
In the case where the variable x is in 1 dimension, an alternative way is to divide \(F(x)\) by \((x-z_{1}^{*})\times\ldots\times(x-z_{j}^{*})\). However, one cannot do this in higher dimension.
Even in dimension 1, there are situations when one would need to use Problem. For example, one would like to find roots inside a certain domain \(C\) (e.g. \(C=\{z\in\mathbf{C}:\ |z|<1\}\)). Then one can choose \(A=\partial C\).
**Application 2: Avoiding known local minima.** Assume that one wants to solve a global optimization problem for a cost function \(f(x)\). Assume that in previous runs of IM one already found local minima \(z_{1}^{*},z_{2}^{*},\ldots,z_{j}^{*}\). One does not know whether these are global minimizers (in particular, one does not want to stuck at bad local minima). Then one can define the close set \(A=\{z_{1}^{*},\ldots,z_{j}^{*}\}\) and apply Problem.
**Application 3: Constrained optimization.** Assume that one has a function \(f:X\rightarrow\mathbf{R}\), but only wants to solve the optimization problem in a smaller closed subset \(Y\). Then one can choose \(A\) to be the boundary of \(A\) and apply Problem.
**Application 4: Constrained optimization, version 2.** Assume that one has a function \(f:X\rightarrow\mathbf{R}\), but only wants to solve the optimization problem in a smaller domain \(Y\). Different from the choice in Application 3, here one chooses \(A=X\backslash Y_{0}\), where \(Y_{0}\) is the interior of \(Y\).
**Application 5: Finding different components of a variety.** Assume the set of solutions to \(F=0\) has different connected components \(C_{1},\ldots,C_{k}\), each of them may have positive dimension. Assume that beforehand one does not know any of them, and one will choose initial points for IM in a fixed domain \(B\). After many runs of IM, one finds many roots \(z_{1}^{*},\ldots,z_{j}^{*}\) but they seem to be close together and seem to belong to the same component \(C_{i}\). (For example, this is the case if \(B\) happens to belong to the basin of attraction of \(C_{i}\)). One hopes that if \(j\) is
big enough and one can avoid all of the points \(z_{1}^{*},\ldots,z_{j}^{*}\), then one can also avoid \(C_{i}\) and finds another component of \(\{F=0\}\).
### Survey of main existing relevant methods in the current literature
The relevant methods which we are aware of are mostly in the setting of constrained optimization.
**Approach 1: New metric.** An approach is to redefine the metric on \(X\backslash A\), such that the points in \(A\) become infinity in the new metric. (This fits particularly well if \(X\) is a Riemannian manifold.) However, if indeed the minimum of \(f\) occurs in \(A\), creating this new metric does not guarantee that the constructed sequence will not converge to \(A\).
**Approach 2: Linear Programming.** If the cost function and all constraints are linear functions, then this is treated effectively in the well known Linear programming, see e.g. [2]. An experiment presented later will test whether our method can find good approximates of the optimizers for Linear Programming problems.
**Approach 3: Algebraic methods.** If the cost function and all constraints are polynomials, then there are algebraic methods aiming to solve it (e.g. [8], [11]). We have a couple of remarks here. First, these methods are usually not iterative in nature (hence, implementation them in computers can be difficult). Second, they can - usually - only find the value \(\min f(x)\) but not the points \(z^{*}\) which minimize \(f\). A combination between these and iterative methods may be more useful.
**Approach 4: Projected methods.** If \(X=\mathbf{R}^{m}\) and the constrained set \(Y\) is a closed convex subset, then for each \(x\in X\) there is a unique point \(y\in Y\), denoted \(y=pr_{Y}(x)\), so that \(||x-y||=\) the distance from \(x\) to \(Y\). One can redefine the iterative scheme as follows: \(x_{n+1}=pr_{Y}(IM(x_{n},f))\). However, if \(A=\partial Y\) does not contain any global minimum of \(f\), and if an open neighbourhood \(U\) of \(A\) belongs to the basin of attraction for a point in \(X\backslash Y\), then choosing a random initial point \(x_{0}\in U\) and apply the projected method one ends up at a sequence in \(A\) which can never converge to a global minimum of \(f\). (An explicit example relating to this point was given in [10]. We will revisit this example in the experiments presented later.)
**Approach 5: Lagrange multiplier/Karush-Kuhn-Tucker conditions.** If the constraints are given by equations \(h_{1},\ldots,h_{j}\), then Lagrange's multiplier method looks for critical points of a new function \(F(x,\lambda_{1},\ldots,\lambda_{j})=f(x)-\lambda_{1}h_{1}(x)-\ldots\lambda_{j }h_{j}(x)\). This method works well if most of the critical points of \(F\) corresponds to global minima of \(f\), otherwise a lot of the work is wasted.
Karush-Kuhn-Tuckter conditions [6][7]: this extends Lagrange's method to the case where constraints also include inequalities. Then one looks for saddle points of a similar function. Again, the same comment as above for Lagrange's multiplier method can be applied here, and in this case the issue may be more serious since saddle points are more dominant in higher dimensions [1].
**Approach 6: Interior-point/Penalty methods.** The main idea of these methods, see e.g. [16], is to consider a new unconstrained problem \(G(x)=f(x)-\epsilon\rho_{A}(x)\), where \(\rho_{A}(x)\) is a function which is infinity on the boundary \(A\), and \(\epsilon>0\) is a parameter. It is unclear which \(\epsilon\) is good to use. A usual choice of \(\rho_{A}(x)\) is \(\rho_{A}(x)=\log d(x,A)\), where \(d(.,.)\) is the distance function. The idea is that when \(\epsilon\) becomes smaller and smaller, the method IM will find (for cost function \(G(x)\)) points which are closer and closer to global minimizers of \(f(x)\).
However, when \(\epsilon\) is too small, what can happen is that it may cancel the effect of \(\rho_{A}(x)\) to a certain level that the constructed sequence by applying IM to \(F(x)\) will behave similarly to the constructed sequence by applying IM to the original function \(f(x)\). We will demonstrate this in an experiment later.
**Approach 7: Tunnelling/Deflation method.** This method deals with the case where \(A=\{z_{1}^{*},\ldots,z_{j}^{*}\}\) is a finite set. In the case of minimizing a function \(f(x)\), it considers the new function \(f(x)/[d(x,z_{1})^{N_{1}}\ldots d(x,z_{j})^{N_{j}}]\) for some appropriate choices of \(N_{1},\ldots,N_{j}\). It uses Backtracking line search to have descent property. In the case of solving a system of equations \(F=0\), it applies Newton's method directly to the system, and again uses Backtracking line search to have descent property. Hence, it can show that the sequence constructed will avoid the finite set \(A\). However, descent property alone does not guarantee strong convergence. For a constrained optimization problem, it does not work directly like our new methods below, but rather similar to the Lagrange multiplier/Karush-Kuhn-Tucker conditions. See [9][4][3] for more details, where discussions on related methods are also given. In an experiment later, we see that using this function \(f(x)/[d(x,z_{1})^{N_{1}}\ldots d(x,z_{j})^{N_{j}}]\) may make it difficult to escape a component of the solution set to another component.
Main theoretical result: Finding roots of a meromorphic function of 1 complex variable in a given domain
Here we present a result which guarantees finding roots of a meromorphic function of 1 complex variable in a given domain.
Finding roots of a meromorphic function of 1 complex variable is a topic intensively studied. Many special and interesting functions are meromorphic functions, like Gamma function, Bessel function, and Riemann zeta function. The special question of finding roots of polynomial functions is a main subject in the field of Complex Dynamics.
Often, one has the need of finding roots in a given domain \(U\subset\mathbf{C}\), for example inside the unit disk. Then, starting from a point in that domain \(U\) and using an iterative method, it is difficult to know before hand if the sequence one constructed will converge to a root inside \(U\). The reason is that, first of all the sequence constructed may not converge to any root at all, or it may converge to a root outside of \(U\). Basins of attraction of the roots can be too complicated to allow a good guess of where the sequence will go.
Here, we illustrate the use of the approach proposed in this paper towards this question. For the description and properties of Backtracking New Q-Newton's method, see [12].
**Theorem 1.1**.: _Let \(g(z)\) be a non-constant meromorphic function in 1 complex variable \(z\). Assume that \(g\) is generic, in the sense that \(\{z\in\mathbf{C}:\ g(z)g"(z)=g^{\prime}(z)=0\}=\emptyset\)._
_Let \(f(x,y)=|g(x+iy)|^{2}/2\). Let \(U\subset\mathbf{C}\) be an open subset. There is a set \(\mathcal{E}\) with Lebesgue measure 0 such that the following hold. Let \(M\) be a positive number and define a new function \(F(x,y)\) by the following formula: \(F(x,y)=f(x,y)\) if \((x,y)\in U\), and \(F(x,y)=M\) if \((x,y)\in\mathbf{C}\)._
_1) If \(z_{0}=(x_{0},y_{0})\in U\backslash\mathcal{E}\), and \(f(x_{0},y_{0})<M\), then Backtracking New Q-Newton's method applied to \(F(x,y)\) with initial point \((x_{0},y_{0})\) will produce a sequence \(\{z_{n}=(x_{n},y_{n})\}\) which must satisfy one of the three options: a) \(\{z_{n}\}\) converges to a root of \(g(z)\) inside \(U\) and the rate of convergence is quadratic, b) \(\lim_{n\to\infty}|z_{n}|=\infty\), or c) all cluster points of \(\{z_{n}\}\) are on \(\partial U\)._
_2) Assume moreover that \(U\) is bounded and \(\inf_{z\in\partial U}f(z)>f(z_{0})\). Then the constructed sequence (Backtracking New Q-Newton's method applied to F(x,y)) converges to a root of \(g(z)\) inside \(U\), with a quadratic rate of convergence._
Proof.: 1) This follows easily from Theorem 2.3 below, in combination with Theorem 3.3 in [13].
2) Since \(U\) is bounded, option b) in part 1) cannot happen, given that the sequence \(\{f(z_{n})\}\) is non-increasing. The same reason gives that \(\{z_{n}\}\) cannot have any cluster point on \(\partial U\). Hence, only option a) can happen.
### Acknowledgements and the plan of the paper
Several main ideas presented here were initiated in the inspiring environment of the ICIAM 2023 conference in Tokyo. The author would like to thank specially the organisers and participants of the minisymposium at ICIAM "Theory and applications of random/non-autonomous dynamical systems" and its satellite conference, for support and interesting discussions, in particular to Hiroki Sumi, Tayuki Watanabe and Mark Comerford. The author also would like to thank Xiao Wang on his informative presentation on convergence of iterative methods in a conference at University of Dalat in July 2023, and for pointing out the reference [10]. We thank Patrick Farrell for information about the tunnelling/deflation method. The author is partially supported by Young Research Talents grant 300814 from Research Council of Norway.
The remaining of the paper is as follows. In Section 2 we present our two new methods. In Section 3 we present some illustrating experiments, including constrained optimization problems. In the last section we present some conclusions.
## 2. Two new methods: Creating walls to avoid unwanted points
The main idea is to create walls at \(A\), that is to make values of the functions at \(A\) so large that it will unlikely for a sequence constructed from a point outside of \(A\) can converge to or cross \(A\). We propose here two new methods to create such walls.
### Multiplying poles at \(A\)
The first method is to multiplying the cost function with a function having poles at \(A\).
We recall the setting: We are finding global minima of a function \(f:X\to\mathbf{R}\), while wanting to avoid a closed subset \(A\subset X\).
We will present the method first in a special case, and then in the general case.
**Special case:**\(\min_{x\in X}f(x)=0\).
In this case, we choose an appropriate positive integer \(N\) and consider a new function \(G(x)=f(x)/d(x,A)^{N}\), where \(d(.,.)\) is the distance function. The number \(N\) depends on the "multiplicity" of \(f(x)\) along \(A\). The following lemma shows that at least we are on the right track with this method.
**Lemma 2.1**.: _Assume that \(f:X\to\mathbf{R}\) is such that \(\min_{x\in X}f(x)=0\). Let \(x^{*}\in X\backslash A\) be a root of \(f(x)\) on \(X\). Then \(x^{*}\) is also a global minimizer of the new function \(G(x)\)._
Proof.: Indeed, since \(f(x)\) is non-negative, the same is true for \(G(x)\). Since \(x^{*}\in X\backslash A\) we have \(d_{A}(x^{*})\neq 0\). Hence, since \(f(x^{*})=0\), we have \(G(x^{*})=0\) as well. Therefore, \(x^{*}\) is a global minimizer of \(G\)
In the case of finding roots of a function \(F(x)=0\) for \(F:{\bf C}\to{\bf C}\), then this method is almost the same as what one usually does. Assume that one already found the roots \(z_{1}^{*},\ldots,z_{j}^{*}\) of \(F(x)\). Then the usual way to find new roots is to consider a new function \(F(x)/[(x-z_{1}^{*})\ldots(x-z_{j}^{*})]\). In our method, we will try to find global minima of \(f(x)=||F(x)||^{2}/2\), and will consider a new cost function \(G(x)=f(x)/d(x,A)^{2}\). Here \(d(x,A)\) has the following explicit formula: \(d(x,A)=\min\{|x-z_{1}^{*}|,\ldots,|x-z_{j}^{*}|\}\). Numerically, the complexity of computing \(\nabla G\) and \(\nabla^{2}G\), as well as higher derivatives, is not increased when we increase the number of points \(z_{1},\ldots,z_{j}\) to avoid.
In the usual approach, if the function \(f\) has multiplicity \(N\) at a root \(z_{i}^{*}\), one should divide by \((x-z_{i}^{*})^{N}\). Similarly one should use \(f(x)/d(x,A)^{2N}\) in our approach. [Note that since \(f(x)=||F(x)||^{2}\), a root of multiplicity \(N\) of \(F\) will become a root of "multiplicity" \(2N\) for \(f(x)\).]
The tunnelling/deflation method is probably the one closest to our method of multiplying with poles at \(A\). However, there are some differences. First, tunnelling/deflation method only applies for the case where \(A=\{z_{1},\ldots,z_{j}\}\), while our method can apply to any closed set \(A\). This is because we use a more flexible function \(d(x,A)\), instead of the function \(d(x,z_{1})\ldots d(x,z_{j})\). Second, while Backtracking line search is also used in tunnelling/deflation, it seems that they only use to guarantee descent property, and not Armijo's condition. Hence, while avoidance of the finite set \(A\) is guaranteed, the convergence issue is not (except the case where some restrictive conditions are assumed). (Note that strong convergence guarantees for Armijo's Backtracking line search for general cost functions only appear quite recently in the literature.) Third, for to solve a system of equations \(F=0\), the tunnelling/deflation method directly applies Newton's method for the system, while we work with the associated cost function \(f=||F||^{2}\).
Here we also note the relation between our method and the Penalty method recalled in the previous section. Since here the function \(f(x)\) is non-negative, one can consider a new cost function \(h(x)=\log f(x)\). Then, the Penalty method will require to consider the function \(H(x)=\log f(x)-\epsilon\log d(x,A)\), where \(\epsilon>0\) is **small enough**. Taking logarithm of the function \(G(x)=f(x)/d(x,A)^{N}\), we get another function \(\log G(x)=\log f(x)-N\log d(x,A)\). Hence, the forms of \(H(x)\) and \(\log G(x)\) are the same, except a difference in detail: In \(H(x)\) the factor \(\epsilon\) before \(\log d(x,A)\) is to be small, while in \(\log G(x)\) the factor \(N\) before \(\log d(x,A)\) is to be big (at least \(\geq 1\)). Note also that generally our method using \(G(x)\) is more numerically stable than using the Penalty method using \(H(x)\), if IM at least uses gradients of the cost functions. Indeed, let \(x^{*}\in X\backslash A\) be a point where \(f(x^{*})=0\) Then, near \(x^{*}\) we have that \(d(x,A)\) is strictly positive. Hence \(\nabla G(x)\) is of the same size as \(\nabla f(x)\), while \(\nabla H(x)\) is of the same size as \((\nabla f(x))/f(x)\). Hence, near \(x^{*}\), the gradient \(\nabla H(x)\) is very big. Similar observation applies to points near \(A\), where \(d(x,A)\) is \(0\). An experiment showing the difference between these two approaches will be presented later.
**A heuristic argument.** While finding global minima is NP-hard, the following heuristic argument supports that our method will work well generally.
Assume that \(IM\) has the descent property for function values, that is whenever it is applied for a function \(g\), then \(g(x_{n+1})\leq g(x_{n})\) for all \(n\). There are several methods to ensure this property: Line search, Trust region, and Backtracking line search (Armijo or Frank-Wolfe). Among these, Armijo's Backtracking line search is the most flexible while have strong theoretical guarantees when applied to many large and useful classes of cost functions. (For example, functions with at most countably many critical points, or functions satisfying Lojasiewicz gradient inequality. The first class includes Morse functions, which is a dense class in the closed-open topology. The
second class includes semi-analytic functions.) For some recent results on using Armijo's method on general functions, for both first and second order methods, see [14][15][13][12]. There, one also finds a brief comparison of several different methods.
We also assume that with an appropriate choice of \(N\), we obtain that \(G(x)=\infty\) if and only if \(x\in A\).
Then for any choice of the initial point \(x_{0}\in X\backslash A\), because IM has the descent property, it follows easily that the sequence \(\{x_{n}\}\) constructed by IM from \(x_{0}\) will belong to \(X\backslash A\). Moreover, any cluster point of \(\{x_{n}\}\) must be in \(X\backslash A\).
Now we explain why it is more likely that if \(x_{0}\) is in a connected component of \(X\backslash A\), then the whole sequence \(\{x_{n}\}\) should be also in \(X\backslash A\). We first present a rigorous argument, which is easy to prove.
**Lemma 2.2**.: _Assume further that IM is such that there is a constant \(r>0\) such that \(d(x_{n},x_{n+1})<r\) for all constructed sequence \(\{x_{n}\}\). Let \(A_{r}=\{x\in X:\ d(x,A)\leq r\}\). Assume that \(x_{0}\in X\) is such that \(G(x_{0})<\inf_{x\in A_{r}}G(x)\). Then the sequence \(\{x_{n}\}\) constructed by IM will belong to the same connected component of \(X\backslash A\) as \(x_{0}\)._
Proof.: Indeed, assume by contradiction that some points in the sequence \(\{x_{n}\}\) belongs to another component of \(X\backslash A\). Let \(N\) be the one index for which \(x_{N}\) is still in the same connected component of \(X\backslash A\) as \(x_{0}\), but \(x_{N+1}\) belongs to another component. Then since \(d(x_{N},x_{N+1})\leq r\), we must have \(x_{N}\in A_{r}\). However, this contradicts the facts that IM is descent (hence, \(G(x_{N})\leq G(x_{0})\)) and \(G(x_{0})<\inf_{x\in A_{r}}G(x)\).
Even if the assumptions in Lemma 2.2 are not satisfied, we have the following heuristic argument. When we use Backtracking line search, then heuristically since on \(A\) the function \(G\) is \(+\infty\), \(A\) will play the roll of a wall which \(\{x_{n}\}\) cannot cross. When \(x_{n}\) is closer to \(A\), the distance \(d(x_{n},x_{n+1})\) should become smaller in such a way that \(x_{n+1}\) is always kept in the same connected component as \(x_{n}\).
This is how far heuristic arguments go, given that finding global minima are NP-hard. There are experiments which show that indeed there are cases where the sequence \(\{x_{n}\}\) can cross to another connected component. Even so, we observe that by using the function \(G(x)\), the chance for \(\{x_{n}\}\) to stay in the same connected component indeed increases.
**The general case**. We now consider a general function \(f:X\to\mathbf{R}\), without assuming that it is non-negative (in particular, without assuming that \(\min_{x}f(x)=0\)). In theory, this general case and the special case are equivalent, since if \(f(x)\) achieves minimum on \(X\), then we can replace \(f\) by the function \(f(x)-\min_{x}f(x)\) and one is reduced to the special case. However, finding \(\min_{x}f(x)\) may not be easy, and hence in such cases one would like to be able to proceed without the hurdle to know that \(\min_{x}f(x)=0\) (or even knowing if the function is non-negative).
Some remarks are in order, before we continue. First, note that in this case the Penalty method which uses \(\log f(x)\) is formally inapplicable, since the logarithm of a negative number is not defined (as a real number). Second, note that if the setting is semi-algebraic, one may use algebraic methods to find the minimum value of \(f(x)\), and hence can reduce to the special case discussed before.
To deal with this case where we do not know the minimum value of \(f\), we consider the following function: \(G_{\gamma}(x)=(f(x)-\gamma)/d(x,A)^{N}\), where \(\gamma\) is an approximate lower bound for \(min_{x}f(x)\)
obtained "on the fly". More precisely, we will proceed as follows. We will do many runs of the method IM for \(G_{\gamma}(x)\), and in each run will try to obtain a better value for \(\gamma\). In each run, we will do the following steps:
Step 1: Choose a random initial point \(x_{0}\).
Step 2: Apply IM to \(G_{\gamma}(x)\) and construct a sequence \(\{x_{n}\}\), where we will stop when some conditions are satisfied (for example, when \(||\nabla G_{\gamma}(x_{n})||\) is smaller than a prescribed threshold or when the number of iterates exceeds a given bound).
Step 3: If \(\gamma>\min_{n}f(x_{n})\), then we replace \(\gamma\) by \(\min_{n}f(x_{n})\).
If one wants to do optimization on a connected component \(B\) of \(X\backslash A\) only, then in Step 1 one chooses a random point \(x_{0}\in B\), and one can modify Step 3 above in the following manner:
Step 3': If \(\gamma>\min_{x_{n}\in B}f(x_{n})\), then we replace \(\gamma\) by \(\min_{n}f(x_{n})\).
An example presented later will illustrate how this algorithm is used in practice.
**To escape the basin of attraction of a positive dimension component.** We now consider the following question: Assume we apply the iterative method IM to a non-negative cost function \(f(x)\), whose zero set has different components \(C_{1},C_{2},\ldots,C_{j}\). If we found only points in \(C_{1}\), can we escape it to reach another component?
Here we present an idea to use our method of multiplying poles, in combination of a careful choice of the next initial point, to resolve this question.
Assume that we already found a sequence of points (close to points) on \(C_{1}\), called \(p_{1},\ldots,p_{k}\). As before, we will consider a new cost function \(f(x)/d(x,A)^{N}\). Now, for the next initial point, we will not choose it randomly in a prescribed domain, but to close to the point \(p_{k}\). This way, it can help us to quickly move away from \(C_{1}\) and hopefully reach another component.
Here is a heuristic for why this idea can help. Because the component we attempt to escape has positive dimension, it can happen that even if we already created many poles, still in the next run we may end up at another point in that same component. When we creating more poles, we are also somehow creating a wall preventing initial points to escape to the other side. Hence, if the new poles are in between the domain where we choose our initial points and the component we want to reach, then may be we would not be able to reach that component as we wanted. To choose the initial point for the next pole close to the point (which now becomes a new pole) we found in the previous run, it is more likely that we avoid better the affect of the wall created by the poles.
An experiment later will illustrate how this is used.
**Constrained optimization.** This is a more precise form of the idea in Applications 3 and 4. In the above, we assumed that \(G=+\infty\) on \(A\) so that we can avoid \(A\) (this is the case if \(f>0\) on \(A\)). However, if \(f<0\) at some points of \(A\), the sequence we construct may converge to that point and we discover a possible minimum point. This way, we still can apply to constrained optimization.
If the constraints contain an equation \(p(x)=0\), we replace it by two inequalties \(p(x)\geq-\epsilon\) and \(p(x)\leq\epsilon\), where \(\epsilon>0\). The idea is that if \(\epsilon\) goes to \(0\), then the minimizers we found will converge to a minimizer of the original question. Numerically, with \(\epsilon\) small enough, the minimizer we found for the new question should be very close to a minimizer of the original question.
Recall about the theorems for Backtracking New Q-Newton's method.
Note that if a function has compact sublevels, then when adding poles this way the new function will also has compact sublevels.
### Making the function to be a big constant on \(A\)
Another way to make walls at \(A\) is to define the function \(f\) to be a very large constant on \(A\). More precise, we choose a very big number \(R>0\), and then define a new function: \(g(x)=f(x)\) if \(x\in X\backslash A\), and \(g(x)=R\) if \(x\in A\).
While this method is very simple, it is guaranteed to avoid the interior of \(A\). It also has the good property that it preserves critical points of \(f\) inside \(X\backslash A\) (this property, while simple, does not hold for all other approaches).
**Theorem 2.3**.: _Assume that the iterative method IM has the descent property. Assume that IM is applied to \(g\) at an initial point \(x_{0}\) so that \(f(x_{0})<R\). Then no cluster point of the constructed sequence belongs to the interior of \(A\)._
_The critical points (minima, maxima, saddle points) inside \(X\backslash A\) of \(f\) and \(g\) are the same._
Proof.: Indeed, let \(x_{n}\) be the constructed sequence. Then \(f(x_{n})\leq f(x_{0})<R\) for all \(n\). Hence \(\{x_{n}\}\subset X\backslash A\), therefore no cluster point of \(\{x_{n}\}\) belongs to the interior of \(A\).
Some experiments presented later illustrate that this method works well for constrained optimization. Note that while the function \(g\) is not continuous on the boundary \(\partial A\) of \(A\), usually \(\partial A\) has zero Lebesgue measure and does not affect in numerical calculations.
### Convergence issue
Since we will use Backtracking New Q-Newton's method, which has strong convergence guarantees see [13][12], the issue of global convergence is less serious than if we use other methods, like Newton's method in the tunnelling/deflation method where a lot of care is needed.
## 3. Experimental results
We now present several experimental results illustrating the usefulness of the approach proposed in this paper.
The setting of the experimental results is as follows. In a run for a cost function \(g\), we will stop until either \(||\nabla g||\) is smaller than a small threshold (e.g. \(1e-6\)) or if the number of iterates exceeds \(10000\).
In experiments where basins of attractions are drawn, we choose initial points on a square grid where the centre of the grid is chosen randomly in some domains. An initial point \(x_{0}\) is considered to be in the basin of attraction of a point \(z^{*}\) if \(d(x_{0},z^{*})\) is smaller than a small threshold (e.g. \(1e-5\)).
Usually we choose the initial point to be random, but there are cases where we fix the initial point.
Except Example 1- (which tests with methods Gradient Descent, Backtracking Gradient Descent (i.e. Armijo's Backtracking line search for Gradient Descent) and Backtracking New Q-Newton's method [12] (developed from an earlier algorithm in [13]) - all other examples test
only Backtracking New Q-Newton's method. Backtracking New Q-Newton's method is found to be very much faster than the other two methods in the examples considered here. The exception is that Example 1 is taken from [10], where Projected Gradient Descent was tested and found to fail to avoid the saddle point. Therefore, it is reasonable to test Gradient Descent and Backtracking Gradient Descent for the function \(G(x)\) in Example 1 also.
### Example 1
In this example, taken from [10], we consider the following constrained optimization problem:
\(arg\min_{S}f(x,y)\), where \(S=\{(x,y)\in\mathbf{R}^{2}:\ x+y\leq 0\}\) and \(f(x,y)=-xye^{-x^{2}-y^{2}}+y^{2}/2\).
One can check that the critical points of \(f\) are (approximate to) the following:
\(p_{1}=(0,0)\), \(p_{2}=(0.7071067,0.3128011)\) and \(p_{3}=(-0.7071067,-0.3128011)\).
The point \((0,0)\) is a saddle point with function value \(0\), while the other two points are global minimizer with function value (close to) \(-0.0727279\). Therefore, \(\gamma_{0}=-0.0727280\) is a good lower bound for \(\min_{S}f(x,y)\).
In [10], it was shown that Projected Gradient Descent applied to this constrained optimization problem fails. More precisely, there is a small open set \(U\) in \(S\) which touches the point \((0.5,-0.5)\) on the boundary for which if we apply Projected Gradient Descent with a learning rate \(0<\alpha<2/3\) with an initial point in \(U\), then the constructed sequence will converge to the saddle point \((0,0)\).
Here we draw basins of attraction for different methods for the function \(G(x,y)=(f(x,y)-\gamma_{0})/d((x,y),A)\) where \(A=\partial S\). More explicitly, \(d((x,y),A)=|x+y|\).
We will also consider \(2\) functions \(H_{1}(x,y)=(f(x,y)-\gamma_{0})-\epsilon\log d((x,y),A)\), and \(H_{2}(x,y)=\log(f(x,y)-\gamma_{0})-\epsilon\log d((x,y),A)\), where \(\epsilon>0\) is a constant. \(H_{1}\) is the Penalty method applied to the function \(f(x,y)-\gamma_{0}\), and \(H_{2}(x,y)\) is the Penalty method applied to the function \(\log(f(x,y)-\gamma_{0})\).
We have some remarks about experiments in Example 1.
Figures 1, 2 and 3 show that indeed using the function \(G(x,y)\) helps improve the convergence to global minima. In particular, the phenomenon mentioned in [10] for Projected Gradient Descent does not seem to occur here. The pictures for methods using Armijo's Backtracking line search look better than that for Gradient Descent.
Figures 4, 5, 6 and 7 show that using Penalty method with \(\log f(x,y)\), with small \(\epsilon\) does not improve the performance. For \(H_{1}(x,y)\), as we expected in the above, the performance will be like that for the function \(f(x,y)\) when \(\epsilon\) is small. Note that Figure 5 is reminiscent of the phenomenon in Schroder's theorem for Newton's method for a polynomial of degree \(2\), which is also numerically observed for Backtracking New Q-Newton's method see [12]. The yellow part in Figure 5 is close to the bisector of \(2\) points \(p_{2}\) and \(p_{3}\). Figures 6 and 7 show that, while formally optimising a non-negative function \(f\) or its logarithm should be similar, there are enough differences (both theoretical and numerical), which lead to very different behaviours in experiments. We also remark that for \(H_{1}\) and \(H_{2}\), if we choose \(\epsilon\) bigger, for example \(\epsilon=0.01\), then we encounter errors such as NAN.
Figure 2. Basins of attraction for the function G(x,y) in Example 1, using Backtracking Gradient Descent. Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
Figure 1. Basins of attraction for the function G(x,y) in Example 1, using Gradient Descent with learning rate \(0.1\). Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
### Example 2
In this example, we find roots of a polynomial of degree \(5\) in \(1\) complex variable. More precisely, the polynomial is \(F(z)=z^{5}-3iz^{3}-(5+2i)z^{2}+3z+1\). This polynomial has the following (approximate roots): \(p_{1}\sim-1.28992-1.87357i\), \(p_{2}\sim-0.824853+1.17353i\), \(p_{3}\sim-0.23744+0.0134729i\), \(p_{4}\sim 0.573868-0.276869i\), and \(p_{5}\sim 1.77834+0.963437i\). The associated cost function is \(f(x,y)=|F(x+iy)|^{2}/2\), where \(x,y\in\mathbf{R}\). The following is a picture of basins of attractions when applying Backtracking New Q-Newton's method (it is drawn in a bigger domain than that in [12]).
We see from Figure 8 that the basin of attraction for \(p_{3}\) seems to be smaller and surrounded by the basins of attraction of other points. Hence, it may be difficult to find the root \(p_{3}\).
We will consider the following \(4\) closed sets: \(A_{1}=\{p_{1},p_{2},p_{4},p_{5}\}\), \(A_{2}=\{p_{1},p_{2},p_{4}\}\), \(A_{3}=\{p_{1},p_{2}\}\) and \(A_{4}=\{p_{1}\}\). Correspondingly, we consider \(4\) functions: \(G_{1}(.)=f(.)/d(.,A_{1})^{2}\), \(G_{2}(.)=f(.)/d(.,A_{2})^{2}\), \(G_{3}(.)=f(.)/d(.,A_{3})^{2}\), and \(G_{4}(.)=f(.)/d(.,A_{4})^{2}\).
### Example 3
We consider now finding roots of a polynomial in \(2\) real variables \(F(x,y)=(y-x^{2}-2)\times(y+x^{4}+2)\times(x^{2}+(y-1)^{2})\times((x-1)^{2}+(y +1)^{2})\times((x+1)^{2}+(y-4)^{2})\). The solution set has \(5\) connected components, where \(2\) of them are curves: \(C_{1}=\{(x,y)\in\mathbf{R}^{2}:\ y-x^{2}-2=0\}\), \(C_{2}=\{(x,y)\in\mathbf{R}^{2}:\ y+x^{4}+2=0\}\), \(C_{3}=(0,1)\), \(C_{4}=(1,-1)\), \(C_{5}=(-1,4)\). The first two components are curves, and the remaining components are points. Component \(5\) is separated components \(C_{2},C_{3}\) and \(C_{4}\) by component \(C_{1}\).
In this experiment, we test if the initial point is in the set \(y>x^{2}+2\), whether it is possible for an iterative use of the method in paper will allow to reach the components \(C_{2},C_{3}\) or \(C_{4}\).
Since we want to solve equation, we choose our cost function as follows: \(f(x,y)=(y-x^{2}-2)^{2}\times(y+x^{4}+2)^{2}\times(x^{2}+(y-1)^{2})\times((x-1) ^{2}+(y+1)^{2})\times((x+1)^{2}+(y-4)^{2})\)
Figure 3. Basins of attraction for the function G(x,y) in Example 1, using Backtracking New Q-Newton’s method. Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
a) We test with our method using \(f(x,y)/d((x,y),A)^{2}\).
We always choose the initial point to be \((0.1,3.1)\).
Run Backtracking New Q-Newton's method for \(f(x,y)\), we end up at the point \(p_{1}=(0.02299946,2.00052898)\), which is close to a point on \(C_{1}\). Set \(A_{1}=\{p_{1}\}\).
Run Backtracking New Q-Newton's method for \(f(x,y)/d((x,y),A_{1})^{2}\), we end up at the point \(p_{2}=(-0.2120813,2.04497848)\), which is again close to a point on \(C_{2}\). Set \(A_{2}=\{p_{1},p_{2}\}\).
Run Backtracking New Q-Newton's method for \(f(x,y)/d((x,y),A_{2})^{2}\), we end up at the point \(p_{3}=(-0.2559445,2.06550759)\), which is again close to a point on \(C_{2}\). Set \(A_{3}=\{p_{1},p_{2},p_{3}\}\).
Run Backtracking New Q-Newton's method for \(f(x,y)/d((x,y),A_{3})^{2}\), we end up at the point \(C_{3}\).
Figure 4. Basins of attraction for the function \(H_{1}(x,y)\) in Example 1, with \(\epsilon=0.001\) using Backtracking New Q-Newton’s method. Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
Thus, this example shows that it is possible to use our method iteratively to escape the basin of attraction for a big component and move to another component.
b) We next test if using \(\prod_{p_{j}\in A}d((x,y),p_{j})^{2}\), as in the tunnelling/deflation method, can help escape to another component. Again, we always choose the initial point to be \((0.1,3.1)\), which lies inside the domain \(y>x^{2}+2\).
Run Backtracking New Q-Newton's method for \(f(x,y)\), we end up at the point \(p_{1}=(0.02299946,2.00052898)\), which is close to a point on \(C_{1}\). Set \(A_{1}=\{p_{1}\}\).
Run Backtracking New Q-Newton's method for \(f(x,y)/d((x,y),A_{1})^{2}\), we end up at the point \(p_{2}=(-0.2120813,2.04497848)\), which is again close to a point on \(C_{2}\). Set \(A_{2}=\{p_{1},p_{2}\}\).
Figure 5. Basins of attraction for the function \(H_{1}(x,y)\) in Example 1, with \(\epsilon=0.0001\) using Backtracking New Q-Newton’s method. Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
Run Backtracking New Q-Newton's method for \(f(x,y)/\prod_{p_{j}\in A}d((x,y),p_{j})^{2}=\\ f(x,y)/[d((x,y),p_{1})^{2}d((x,y),p_{2})^{2}]\), we end up at the point \((-1,4)\), which is the component \(C_{5}\). So we cannot escape the domain \(y>x^{2}+2\).
### Example 4
The test in this experiment is like that in Example 3, but now we look for roots of the real elliptic curve \(E=\{y^{2}=x^{3}-x\}\subset\mathbf{R}^{2}\). This curve has two components \(C_{1}=E\cap\{x\leq 0\}\) and \(C_{2}=E\cap\{x\geq 1\}\). Since we want to find roots, we choose the cost function to be \(f(x,y)=(y^{2}-x^{3}+x)^{2}\).
We would like to start from a point close to \(C_{1}\), and hope to reach \(C_{2}\).
In this example, there are some differences to Example 3.
First, if we only divide by \(d(.,A)^{2}\), it is not enough to escape \(C_{1}\). Hence, we change to \(d(.,A)^{4}\).
Figure 6. Basins of attraction for the function \(H_{2}(x,y)\) in Example 1, with \(\epsilon=0.001\) using Backtracking New Q-Newton’s method. Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
Second, we find that if we choose the initial point in a prescribed small open set near \(C_{1}\), we may not be able to reach \(C_{2}\) but may wander around in \(\mathbf{R}^{2}\).
Therefore, we will use the idea how to escape a component of positive dimension in Section 0.3.
For example, assume that our random initial point happens to be \((-0.9,0.1)\), which is inside the open set bound by \(C_{1}\)! After running Backtracking New Q-Newton's method for \(f\), we end at the point \(p=(-0.14223901,0.3733089)\), which is (close to) a point on \(C_{1}\).
We now consider the new cost function \(G(.)=f(.)/d(.,p)^{4}\). We now run Backtracking New Q-Newton's method for \(G\), with the initial point \((-0.142,0.373)\) which is close to the point \(p\). We end up at the point \((1.01894051,-0.19739787)\), which is (close to) a point on the component \(C_{2}\), as wanted!
Figure 7. Basins of attraction for the function \(H_{2}(x,y)\) in Example 1, with \(\epsilon=0.0001\) using Backtracking New Q-Newton’s method. Points are chosen on a square grid, with centre at the point \((0.5,-0.5003)\). Cyan: initial points which converge to \(p_{2}\). Yellow: initial points which converge to \(p_{3}\).
### Example 5
In this experiment, we illustrate how the method in Section 0.3 applies for the case we do not know about \(\min f(x)\) (and hence, not know if that value is actually \(0\), which is the best case to use the method).
We revisit the question in Example 1: Consider the following constrained optimization problem:
\[arg\min_{S}f(x,y)\text{, where }S=\{(x,y)\in\mathbf{R}^{2}:\ x+y\leq 0\}\text{ and }f(x,y)=-xye^{-x^{2}-y^{2}}+y^{2}/2.\]
Now, we do not before hand try to determine the critical points (and in particular, the minimum value) of the function \(f\). By the ideas in Section 0.3, we will start from \(M=0\) and update \(M\) to have better estimates for a lower bound for \(\min f\) on the fly.
Figure 8. Basins of attraction for the function f(x,y) in Example 2, using Backtracking New Q-Newton’s method. Blue: initial points which converge to \(p_{1}\). Cyan: initial points which converge to \(p_{2}\). Green: initial points which converge to \(p_{3}\). Red: initial points which converge to \(p_{4}\). Yellow: initial points which converge to \(p_{5}\).
Accordingly, the new cost function which we will apply Backtracking New Q-Newton's method to is \(G_{M}(x,y)=(f(x,y)-M)/d((x,y),A)\), here \(A=\{x+y=0\}\cup\{x=-10\}\cup\{x=10\}\cup\{y=1-0\}\cup\{y=10\}\). The reason to choose \(A\) much bigger than the line \(\{x+y=0\}\) is because it can easily check that the function \(f\), while having only 3 critical points, has its gradient converges to 0 for certain sequences of points going to \(\infty\). Hence (and this is observed in practice), the
Figure 10. Basins of attraction for the function \(G_{2}(x,y)\) in Example 2, using Backtracking New Q-Newton’s method. Blue: initial points which converge to \(p_{1}\). Cyan: initial points which converge to \(p_{2}\). Green: initial points which converge to \(p_{3}\). Red: initial points which converge to \(p_{4}\). Yellow: initial points which converge to \(p_{5}\).
Figure 9. Basins of attraction for the function \(G_{1}(x,y)\) in Example 2, using Backtracking New Q-Newton’s method. Blue: initial points which converge to \(p_{1}\). Cyan: initial points which converge to \(p_{2}\). Green: initial points which converge to \(p_{3}\). Red: initial points which converge to \(p_{4}\). Yellow: initial points which converge to \(p_{5}\).
sequence constructed by an iterative algorithm can diverge to infinity. To prevent this, we create new walls surrounding a bounded domain.
Here, it turns out that always choosing a fixed point as the initial point for every run does not yield desired result. Hence, we switch to choosing our initial point randomly inside the domain \(x+y<0\). Specifically, we will choose our random initial point in this way. We choose \(x_{0}\) as a
Figure 11. Basins of attraction for the function \(G_{3}(x,y)\) in Example 2, using Backtracking New Q-Newton’s method. Blue: initial points which converge to \(p_{1}\). Cyan: initial points which converge to \(p_{2}\). Green: initial points which converge to \(p_{3}\). Red: initial points which converge to \(p_{4}\). Yellow: initial points which converge to \(p_{5}\).
Figure 12. Basins of attraction for the function \(G_{4}(x,y)\) in Example 2, using Backtracking New Q-Newton’s method. Blue: initial points which converge to \(p_{1}\). Cyan: initial points which converge to \(p_{2}\). Green: initial points which converge to \(p_{3}\). Red: initial points which converge to \(p_{4}\). Yellow: initial points which converge to \(p_{5}\).
random point between \((-1,1)\) and also choose a random number \(err\in(0,1)\). Then our initial point is \((x_{0},y_{0})=(x_{0},-x_{0}-err)\).
Beware that if \(M\neq\min f(x)\), then global minimzer of \(f\) may not be critical points of \(G_{M}!\) Therefore, applying an iterative method to \(G_{M}\) may not be able to find a global minimizer of \(f\).
Here we report such an experiment.
Start with \(M_{0}=0\): The random initial point is \((0.23201923,-0.29366141)\). Running Backtracking New Q-Newton's method for \(G_{M_{0}}(x,y)\), we get the sequence of points \((0.20221512,0.3141085)\), \((-0.04991277,0.02411369)\), \((-0.05235699,0.01419474)\), \((0.05467527,-0.01150241)\), \((-0.70066302,-0.14982963)\), \((-0.52160544,-0.19009722)\), \((-0.51322439,-0.2418863)\), \((-0.52258626,-0.24766466)\), \((-0.52258709,-0.24768905)\), \((-0.52258709,-0.24768905)\), \((-0.5225871,-0.24768905)\), \((-0.5225871,-0.24768905)\), \((-0.5225871,-0.24768905)\), \((-0.5225871,-0.24768905)\), \((-0.5225871,-0.24768905)\).
The end point here is in the domain \(x+y<0\), but is not close to any critical point of \(f\). We find that the value of \(f(-0.5225871,-0.24768905)-M_{0}=-0.06196899168272818\). Therefore, we can choose \(M_{1}=M_{0}-0.061968992=-0.061968992\) as the next lower bound estimate for \(\min f\) in the concerned domain.
Continue with \(M_{1}=-0.06196899168272818\): The random initial point is \((0.67180663,-1.47719597)\). Running Backtracking New Q-Newton's method for \(G_{M_{1}}\), we get the sequence of points: \((-0.21324179,-0.40600826)\), \((-0.44163373,-0.22503746)\), \((-0.60779332,-0.29120854)\), \((-0.67567597,-0.30618429)\), \((-0.68489123,-0.30625665)\), \((-0.68507292,-0.30623803)\), \((-0.68507296,-0.30623802)\), \((-0.685073,-0.30623802)\).
Again, the end point here is in the domain \(x+y<0\), but is not close to any critical point of \(f\). We find that the value of \(f(-0.685073,-0.30623802)-M_{1}=-0.01060545418333153\). Therefore, we can choose \(M_{2}=M_{1}-0.0106054541833153=-0.0725744619\) as the next lower bound estimate for \(\min f\) in the concerned domain.
Continue with \(M_{2}=-0.07257444619\): The random initial point is \((0.68565678,-0.90941219)\). Running Backtracking New Q-Newton's method for \(G_{M_{2}}\), we get the sequence of points: \((-0.1515453,-0.07688046)\), \((-0.23533302,-0.11778859)\), \((-0.36993082,-0.18298619)\), \((-0.5502725,-0.26540423)\), \((-0.6724411,-0.30841356)\), \((-0.70462466,-0.31280385)\), \((-0.70678697,-0.31271293)\), \((-0.70679752,-0.31271166)\).
Again, the end point here is in the domain \(x+y<0\). Now, it gets fairly close to the global minimizer of \(f\) in this domain. We find that the value of \(f(-0.70679752,-0.31271166)-M_{2}=-0.00015342244591928789\). Therefore, we can choose \(M_{3}=M_{2}-0.00015342244592=-0.07272786863\) as the next lower bound estimate for \(\min f\) in the concerned domain.
Continue with \(M_{3}=-0.07272786863\): The random initial point is \((-0.65260407-0.16301343)\). Running Backtracking New Q-Newton's method for \(G_{M_{2}}\), we get the sequence of points: \((-0.66547817,-0.28644322)\), \((-0.70420061,-0.31143069)\), \((-0.70709159,-0.31279723)\), \((-0.70710672,-0.31280114)\).
Again, the end point is in the domain \(x+y<0\), and indeed it is very close to the global minimizer in the domain. We observe that the value of \(f(-0.70710672,-0.31280114)-M_{3}\) is, while still \(<0\), very small. Hence, we can conclude (correctly) that the end point is a global minimizer. We check this by running one more time with the new value \(M_{4}=M_{3}-3.1e-08\)
Running Backtracking New Q Newton's method for the function \(G_{M_{4}}\), from the randomly chosen initial point \((0.37597523,-0.91234048)\), we get the sequence of points:
\((-0.44234965,-0.06819544)\), \((-0.5447063,-0.19358355)\), \((-0.66585017,-0.28453739)\),
\((-0.70420049,-0.31125032)\), \((-0.7070916,-0.31279619)\), \((-0.70710678,-0.31280116)\).
### Example 6
In this example, we test our methods with a linear programming problem. The problem is as follows: Find \(arg\min_{(x,y)\in S}f(x,y)\) where \(f(x,y)=-40x-30y\), and
\[S=\{(x,y)\in\mathbf{R}^{2}:\ x+y\leq 12,\ 2x+y\leq 6,\ x\geq 0,\ y\geq 0\}.\]
The global minimum is \(-400\), obtained for \((x,y)=(4,8)\).
Linear programming are usually difficult for iterative methods (in particular, Newton's type method), since the gradient of the cost function is a constant and the Hessian of the cost function is \(0\).
For this problem, we find that the method of dividing with \(d((x,y),\partial S)^{2}\) can lead to sequences which cross \(\partial S\). We find that with the method of changing the function \(f\) to be a big constant on \(\mathbf{R}^{2}\backslash S\) indeed can produce sequences which approximate the global minimizers. Below we report experiments for the new function \(g(x,y)=f(x,y)\) if \((x,y)\in S\), and \(g(x,y)=1000\) if \((x,y)\in\mathbf{R}^{2}\backslash S\).
Running Backtracking New Q Newton's method for the function \(g(x,y)\), with initial point \((0.1,0.1)\) we obtain the sequence of points (all in S): \((4.56428571,3.44821429)\), \((5.68035714,4.28526786)\), \((5.75011161,4.33758371)\), \((5.80072885,4.39576044)\), \((5.80159205,4.39583457)\), \((5.80202368,4.3958531)\), \((5.80205066,4.39585426)\), \((5.80206414,4.39585484)\), \((5.80207089,4.39585513)\), \((5.80207173,4.39585516)\), \((5.80207215,4.39585518)\), \((5.80207236,4.39585519)\), \((5.80207239,4.39585519)\). The function value at the last point is \(f(5.80207239,4.39585519)=-363.95855134613464\).
Running Backtracking New Q Newton's method for the function \(g(x,y)\), with initial point \((1,2)\) we obtain the sequence of points (all in S): \((3.23214286,3.67410714)\), \((4.34821429,4.51116071)\), \((4.34832006,6.18830041)\), \((4.37053239,6.88502747)\), \((4.4401263,6.95855197)\), \((4.48607869,6.99847415)\), \((4.48773112,7.02416628)\), \((4.487774027,.02436737)\), \((4.48775047,7.02446792)\), \((4.48775209,7.02449305)\), \((4.48775219,7.02449462)\), \((4.48775224,7.02449541)\), \((4.48775224,7.02449551)\). The function value at the last point is \(f(4.48775224,7.02449551)=-390.2449550156782\).
Running Backtracking New Q Newton's method for the function \(g(x,y)\), with initial point \((2,1)\) we obtain the sequence of points (all in S): \((4.23214286,2.67410714)\), \((3.82457748,7.44440135)\), \((4.01653468,7.71777055)\), \((4.07413609,7.77070227)\), \((4.09326204,7.78987838)\), \((4.09733609,7.79377355)\), \((4.09771509,7.80059851)\), \((4.09873397,7.80157261)\), \((4.09878135,7.80242612)\), \((4.09878172,7.80243279)\), \((4.09878191,7.80243612)\), \((4.09878191,7.80243618)\). The function value at the last point is \(f(4.09878191,7.80243618)=-398.0243616122982\), extremely close to the global minimum value!
### Example 7
In this example we consider a constrained problem with a quadratic cost function [5]. The question is: Find \(arg\min_{(x,y)\in S}f(x,y)\) where \(f(x,y)=-2(x-0.25)^{2}+2(y-0.5)^{2}\), and
\[S=\{(x,y)\in\mathbf{R}^{2}:\ x+y\leq 1,\ 6x+2y\leq 3,\ x,y\geq 0\}.\]
This has a global minimum at the point \((0,0.5)\), with function value \(f(0,0.5)=-0.125\).
In [4], the tunnelling/deflation method was used to solve the associated equations coming from the Karush-Kuhn-Tucker optimality conditions. We can also use our method to solve these equations, similar to above. Here, we illustrate how to use our method directly for the cost function \(f\), without going through the Karush-Kuhn-Tucker optimality conditions.
We consider a new cost function \(F(x,y)=f(x,y)\) if \((x,y)\in S\), and \(F(x,y)=1000\) when \((x,y)\notin S\). Here are some runs.
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((0.1,0.1)\), we get a sequence: \((0.025,0.35)\), \((0.00625,0.35)\), \((0.00117188,0.35)\), \((0.00104637,0.5)\), \((2.68394854e-04,5.0000000e-01)\). The last point is very close to the global minimum in \(S\).
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((0.2,0.3)\), we get a sequence: \((0.15,0.5)\), \((0.05,0.5)\), \((0.025,0.5)\), \((0.0109375,0.5)\), \((0.0034668,0.5)\), \((0.00154076,0.5)\), \((0.00076432,0.5)\), \((0.00076432,0.5)\). Again, the last point is very close to the global minimum.
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((0.24,0.48)\), we get a sequence: \((0.23,0.5)\), \((0.21,0.5)\), \((0.17,0.5)\), \((0.09,0.5)\), \((0.01,0.5)\), \((0.00142857,0.5)\), \((4.57589286e-04,5.0000000e-01)\). Again, the last point is very close to the global minimum.
### Example 8
In this example we find roots of a special function in a bounded domain. The question is: Find roots of the Bessel function \(jv(1,z)\) inside the domain \(S=\{z=x+iy:\ -5\leq x,y\leq 5\}\). We use the library for special functions in Python to do computations with the Bessel function.
We consider the cost function \(f(x,y)=|jv(1,x+iy)|^{2}/2\). We define a new cost function \(F(x,y)=f(x,y)\) if \((x,y)\in S\), \(F(x,y)=1000\) if \((x,y)\notin F\). Here are some experiments (initial points are randomly generated inside \(S\)).
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((3.61713097,1.21693436)\), we get a sequence: \((3.81002318,0.70240853)\), \((3.80130537,0.23407724)\), \((3.82500859,0.01478695)\), \((3.83166043e+00,2.93087041e-05)\), \((3.83170597e+00,3.48394778e-10)\), \((3.83170597e+00,1.74197389e-10)\), \((3.83170597e+00,-6.20221574e-20)\). The last point is very close to a root of \(jv(1,z)\) in \(S\).
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((0.77926808,3.75383432)\), we get a sequence: \((0.68962951,3.20441492)\), \((0.22533733,2.64615371)\), \((-0.28003512,2.08563803)\), \((0.10219435,1.52027522)\), \((-0.0071105,0.95589959)\), \((7.82278286e-05,4.21454021e-01)\), \((-4.48653604e-08,6.18471202e-02)\), \((1.34987291e-14,2.35499690e-04)\), \((-3.02313546e-13,1.30608358e-11)\), \((-2.10926406e-12,4.01821256e-13)\). The last point is very close to \((0,0)\), which is a root of \(jv(1,z)\) inside \(S\).
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((-2.1267499,-0.96193073)\), we get a sequence: \((-2.44714686,-0.43462968)\), \((-3.44602951,-0.09106988)\), \((-3.64119905,-0.05036513)\), \((-3.82515578e+00,-2.52956231e-03)\), \((-3.83168871e+00,-4.33099427e-06)\), \((-3.83170597e+00,-1.95131964e-11)\), \((-3.83170597e+00,-9.75659827e-12)\). This is close to a root of \(jv(1,z)\).
We have done many runs, and haven't seen a case where the sequence has a cluster point on \(\partial S\). This is supported by Theorem 1.1. In contrast, if we use the method of dividing \(f\) by \(d(.,\partial S)^{2}\), we often see the case where the constructed sequence converges to a root outside of \(S\)
### Example 9
We revisit the function in Example 1 and Example 5. Consider the following constrained optimization problem:
\[arg\min_{S}f(x,y),\,\text{where}\ S=\{(x,y)\in\mathbf{R}^{2}:\ x+y\leq 0\}\text{ and }f(x,y)=-xye^{-x^{2}-y^{2}}+y^{2}/2.\]
We consider a new cost function \(F(x,y)=f(x,y)\) if \((x,y)\in S\), \(F(x,y)=1000\) if \((x,y)\notin S\).
Running Backtracking New Q-Newton's method for the function \(F(x,y)\), with initial point \((0.5,-0.5003)\), we get a sequence: \((-0.57530003,-0.28065)\), \((-0.64608507,-0.30173188)\), \((-0.67661335,-0.30806708)\),\((-0.69177324,-0.31061912)\), \((-0.70704079,-0.31289215)\), \((-0.70710676,-0.31280116)\). The last point is very close to the global minimum inside \(S\).
An interesting point here is that the point \((0.5,-0.5003)\) is in the basin of attraction of the global minimum **outside of S** of the dynamics associated to Backtracking New Q-Newton's method applied to \(f(x,y)\). By a simple change of the cost function, it belongs to the basin of attraction of the basin of attraction of the global minimum **inside S** of the dynamics associated to Backtracking New Q-Newton's method applied to \(F(x,y)\)!
The way we run experiments here is much simpler that that in Example 5, where we need to take care that the minimum of the function \(f(x,y)-M\) is as close to \(0\) as possible, and have to update the estimate \(M\) for the global minimum of \(f(x,y)\).
## 4. Conclusions
In this paper we introduced two new methods to avoid a given closed set \(A\). The first method is to divide the cost function \(f\) by \(d(x,A)^{N}\) for a suitable exponent \(A\). This method is suitable in case one wants to avoid known points, to hop to a new component of the solution set, or to solve constrained optimization problems with no (local) minima on \(A\). The second method is to change the value of \(f\) on \(A\) to a big constant. This method is more suitable for constrained optimization with (local) minima on the boundary. Experiments illustrate that the new methods are promising. Combination of these methods with each other, or with other methods can yield better performances. For example, our method can be used to quickly find a good initial point for Linear Programming problems, at which Linear Programming methods can be used to find exact global minima.
|
2309.07217 | Spin-valley entangled quantum Hall states in graphene | We investigate interaction-driven integer quantum Hall states realized in
Landau levels of monolayer graphene when two out of its four nearly degenerate
spin-valley flavors are filled. By employing a model that accounts for
interactions beyond pure delta-functions as well as Zeeman and
substrate-induced valley potentials, we demonstrate the existence of a delicate
competition of several phases with spontaneous generation of spin-valley
entanglement, akin to the spontaneous appearance of spin-orbit coupling driven
by interactions. We encounter a particular phase that we term the
entangled-Kekul\'{e}-antiferromagnet (E-KD-AF) which only becomes spin-valley
entangled under the simultaneous presence of Zeeman and substrate potentials,
because it gains energy by simultaneously canting in the spin and valley
spaces, by combining features of a canted anti-ferromagnet and a canted
Kekul\'{e} state. We quantify the degree of spin-valley entanglement of the
many competing phases by computing their bipartite concurrence. | Nikolaos Stefanidis, Inti Sodemann Villadiego | 2023-09-13T18:00:00Z | http://arxiv.org/abs/2309.07217v1 | # Spin-valley entangled quantum Hall states in graphene
###### Abstract
We investigate interaction-driven integer quantum Hall states realized in Landau levels of monolayer graphene when two out of its four nearly degenerate spin-valley flavors are filled. By employing a model that accounts for interactions beyond pure delta-functions as well as Zeeman and substrate-induced valley potentials, we demonstrate the existence of a delicate competition of several phases with spontaneous generation of spin-valley entanglement, akin to the spontaneous appearance of spin-orbit coupling driven by interactions. We encounter a particular phase that we term the entangled-Kekule-antiferromagnet (E-KD-AF) which only becomes spin-valley entangled under the simultaneous presence of Zeeman and substrate potentials, because it gains energy by simultaneously canting in the spin and valley spaces, by combining features of a canted anti-ferromagnet and a canted Kekule state. We quantify the degree of spin-valley entanglement of the many competing phases by computing their bipartite concurrence.
_Introduction._ The phase diagram of monolayer graphene in strong magnetic fields continues to present puzzles. At charge neutrality in the \(N=0\) Landau level it is still debated whether graphene is in a Canted Anti-ferromagnet (CAF), as proposed in transport and magnon transmission experiments [1; 2; 3; 4; 5], or in a Kekule (KD) state as visualized in STM experiments [6; 7; 8; 9]. In higher Landau levels the nature of states remains much less clear and the experimental evidence much more limited [10].
Reference [11] introduced an important model that simplified the understanding of symmetry broken states relative to earlier studies [12; 13; 14; 15; 16] by capturing the valley symmetry breaking interactions in the \(N=0\) Landau level as pure delta function interactions. Recent studies, however, have emphasized the need to consider interactions beyond delta functions in higher Landau levels [10; 17], and also in the \(N=0\) Landau level arising from Landau level mixing [18; 19]. In this work we investigate the interplay of such longer range interactions with the presence of spin Zeeman and substrate-induced sub-lattice symmetry breaking potentials, within a model that is applicable to integer quantum Hall states of graphene in any of its Landau levels. We will demonstrate that the combination of these ingredients leads to an interesting competition of phases with spontaneous spin-valley entanglement. Interestingly we find a state which becomes entangled only under the simultaneous presence of spin and valley Zeeman terms and interactions with longer range than pure delta functions, which we term the Entangled-Kekule-Antiferromagnet state (E-KD-AF) (see Fig.1).
_Model, mean-field theory, and entanglement measure._ A series of recent works have considered the following continuum model of the projected interaction Hamiltonian onto the N-th Landau level of graphene [17; 18; 19; 20]:
\[\mathcal{H}^{N}=\sum_{i<j}\{V_{z}^{N}(r_{ij})\tau_{z}^{i}\tau_{z}^{j}+V_{ \perp}^{N}(r_{ij})\tau_{\perp}^{i}\tau_{\perp}^{j}\}-\epsilon_{Z}\sum_{i}s_{ z}^{i}-\epsilon_{V}\sum_{i}\tau_{z}^{i}, \tag{1}\]
where \(V_{z,\perp}^{N}(r_{ij})\) are interactions that depend only on distance \(r_{ij}\) between particles \(i,j\), \(\tau_{\perp}^{i}\tau_{\perp}^{j}=\tau_{x}^{i}\tau_{x}^{j}+\tau_{y}^{i}\tau_{y }^{j}\) and \(s_{a},\tau_{a},\ a=0,...,3\) are the Pauli matrices acting on the valleys and spin respectively. This model captures the symmetry breaking terms beyond the \(SU(4)\) invariant long-range part of the Coulomb interaction. This model goes beyond the model of Ref. [11] which can be viewed as a limit of Eq. 1 when the interactions become delta functions, \(V_{z,\perp}(r_{ij})=V_{z,\perp}\delta(r_{i}-r_{j})\). Refs. [1; 17] have
Figure 1: a) Integer quantum Hall states of half-filled Landau levels in graphene with Zeeman, \(\epsilon_{z}=1\), and valley potential, \(\epsilon_{v}=0.1\), and non-delta function interactions with \(\Delta_{\perp}=2,\ \Delta_{z}=1\) (see Eq.(3)). The spin-valley entangled state E-KD-AF appears between the two SVE states from Ref. [19]. b) The concurrence (\(C\)) measure of spin-valley entanglement is plotted for the cut shown in Fig. 1(a) at \(u_{\perp}^{R}=2\).
demonstrated that even for models of unprojected interactions that are short-ranged (see e.g. Ref. [21]), effective interactions will naturally appear as a result of the projection onto higher Landau levels (\(N\neq 0\)). It has been also recently emphasized that corrections to pure delta functions appear naturally in higher Landau levels [17] by projecting the general model of short-distance interactions of graphene of Ref. [21], but can also appear even in the \(N=0\) LL due to Landau level mixing effects [18; 19].
When there is an integer-filling of Landau levels, the Hartree-Fock variational energy functional of translationally invariant quantum Hall ferromagnets for the above model can be written as [17]:
\[\begin{split}\mathcal{E}_{HF}[P]&=\frac{1}{2} \sum_{i=x,y,z}\left(u_{i}^{H}(Tr\{\tau_{i}P\})^{2}-u_{i}^{X}Tr\{(\tau_{i}P)^{2 }\}\right)\\ &-\epsilon_{z}Tr\{s_{z}P\}-\epsilon_{v}Tr\{\eta_{z}P\},\end{split} \tag{2}\]
here \(P\) is the projector into the occupied spinors, which in the case of half-filling (two filled components) equals \(P=\left|F_{1}\right\rangle\left\langle F_{1}\right|+\left|F_{2}\right\rangle \left\langle F_{2}\right|\), where \(\left|F_{i}\right\rangle,\ i=1,2\) are arbitrary orthonormal vectors within the four-dimensional Hilbert space of spin and valley flavors. The HF energy function is parametrized by four independent interaction energy scales \(u_{z}^{H},u_{z}^{X},u_{x}^{H,X}=u_{y}^{H,X}=u_{\perp}^{H,X}\), given by:
\[u_{a}^{H}=\frac{V_{a}(\mathbf{q}=0)}{8\pi^{2}},u_{a}^{X}=\frac{1}{8\pi^{2}} \iint\,d\mathbf{q}V_{a}(\mathbf{q}),\ a=\perp,z. \tag{3}\]
In the limit of pure delta function interactions, the difference between Hartree and exchange energy constants, \(\Delta_{z,\perp}=u_{z,\perp}^{H}-u_{z,\perp}^{X}\) would vanish, and we would have only two interaction constants, as in the model of Ref. [11]. We will consider general spin-valley entangled variational states [17; 18; 19; 20; 22; 23]:
\[\begin{split}\left|F\right\rangle_{1}&=\cos\frac{a _{1}}{2}\left|\mathbf{\eta}\right\rangle\left|\mathbf{s}\right\rangle+e^{i\beta_{1 }}\sin\frac{a_{1}}{2}\left|-\mathbf{\eta}\right\rangle\left|-\mathbf{s}\right\rangle,\\ \left|F\right\rangle_{2}&=\cos\frac{a_{2}}{2}\left| \mathbf{\eta}\right\rangle\left|-\mathbf{s}\right\rangle+e^{i\beta_{2}}\sin\frac {a_{2}}{2}\left|-\mathbf{\eta}\right\rangle\left|\mathbf{s}\right\rangle.\end{split} \tag{4}\]
Here \(\left|\mathbf{\eta}\right\rangle\) and \(\left|\mathbf{s}\right\rangle\) are states parametrized by unit vectors \(\mathbf{\eta}\) and \(\mathbf{s}\) in the spin and valley Bloch spheres respectively and \(a_{1,2}\) and \(\beta_{1,2}\) are real constants. Because the projector \(P\) is effectively a mixed state, simple measures of bipartite entanglement applicable to pure states, such as the von-Neumann entropy of the reduced density matrix, are not suitable. Instead, the degree of spin-valley bipartite entanglement associated with the projector \(P\) onto the above two states, can be measured by the concurrence
\(C\) defined as [24; 25]:
\[C\equiv\text{Max}\{\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4},0\}, \tag{5}\]
where \(\lambda_{i}\) are the eigenvalues of the matrix \(R=P(\tau_{y}\bigotimes s_{y})P^{T}(\tau_{y}\bigotimes s_{y})P\), ordered according to \(\lambda_{i}\geq\lambda_{j}\), for \(i>j\). For projector onto the states in Eq. (4), the concurrence is:
\[C=|\sin^{2}a_{1}-\sin^{2}a_{2}|. \tag{6}\]
When the minima of HF energy are spin-valley disentangled states, we have \(C=0\), and these states can be separated into two classes, one of "valley-active" states with spinors given by:
\[\ket{F}_{1}=\ket{\mathbf{\eta}_{1}}\ket{\mathbf{s}},\ket{F}_{2}=\ket{\mathbf{\eta}_{2 }}\ket{-\mathbf{s}}, \tag{7}\]
where \(\mathbf{\eta}_{1},\mathbf{\eta}_{2}\) are two arbitrary directions in the valley Bloch sphere, and another class of "spin active" states, with spinors given by:
\[\ket{F}_{1}=\ket{\mathbf{\eta}}\ket{\mathbf{s}_{1}},\ket{F}_{2}=\ket{-\mathbf{\eta}} \ket{\mathbf{s}_{2}}. \tag{8}\]
where \(\mathbf{s}_{1},\mathbf{s}_{2}\) are two arbitrary directions in the valley Bloch sphere.
In the limit of pure delta function interactions (\(\Delta_{z,\perp}=0\)), Ref. [11] found a phase diagram with four spin-valley disentangled states that we reproduce in Fig 2.(a): FM (Ferromagnet), AF (Antiferromagnet), KD (Kekule distortion), and CDW (Charge density wave). When interactions are not pure delta functions, and in the absence of Zeeman and valley potentials (\(\epsilon_{z}=\epsilon_{v}=0\)), we recently found in Ref. [17] that a new phase termed the KD-AF (Kekule- Antiferromagnet) can appear, as shown in Fig. 1(a). However in the absence of Zeeman and valley potentials (\(\epsilon_{z}=\epsilon_{v}=0\)) all these five states have no spin-valley quantum entanglement. In particular, the KD-AF phase can be viewed as one of valley-active states from Eq.(7) having one component occupying an equal amplitude superposition of both valleys (e.g. \(\mathbf{\eta}_{1}=\mathbf{\hat{x}}\)) with one spin and the other component occupying the opposite valley coherent superposition (e.g. \(\mathbf{\eta}_{2}=-\mathbf{\hat{x}}\)) with the opposite spin, and therefore has a non-trivial spin-valley correlation, but no spin-valley entanglement properly speaking.
In this work, we will show that these five states (FM, AF, KD, CDW, KD-AF) can be viewed as parent states to several spin-valley entangled phases. Some of them, such as the KD/AF coexistence and SVE states identified in Refs. [18; 19], arise near the phase boundaries between these parent states after adding Zeeman and valley potentials. However, we will also show that among these five parent states the KD-AF is special because it is the only one that becomes spin-valley entangled under the simultaneous presence of Zeeman and valley sublattice potentials, and we will term the state that evolves continuously from the KD-AF under these perturbations the entangled-Kekule-antiferromagnet (E-KD-AF) state (see Appendix S-VI for details of comparison with Ref. [19]).
_Ground states with either Zeeman or valley potentials._ We begin our analysis by studying the phase diagram when only the Zeeman coupling, \(\epsilon_{z}\neq 0,\ \epsilon_{v}=0\), is present in Eq. (1). We find that both the AF and the KD-AF cant their spins, which is a natural tendency of the anti-ferromagnetic states in order to take advantage of the Zeeman energy, evolving into the CAF and KD-CAF states depicted in Fig. 1c). These two states remain, however, spin-valley disentangled. The KD-CAF appears in between the FM and the CDW as long as \(0<\epsilon_{z}<2\Delta_{z}\).
However, as pointed out in Refs. [18; 19], the CAF and the KD become unstable over some region close to their boundary leading to a mixed state of AFM-Kekule phase coexistence which occupies a thin sliver of the phase boundary between these two phases. The analytic coordinates for this coexistence state are discussed in Appendix S-III. This phase coexistence occurs only when \(\Delta_{\perp}>0\) and otherwise there is a direct first order phase transition between the Kekule and CAF states. Additionally, a finite \(\epsilon_{z}\) induces the formation of another a new phase, the SVE of Ref. [19] growing from the boundary of the CDW with the KD-CAF. For \(\epsilon_{z}=\epsilon_{v}=0\) the SVE phase is never the ground state over any finite region, but interestingly it is is degenerate with KD-AF only at its boundaries with the FM and CDW. We note that the degeneracy at the boundary with the FM persists for all values of the Zeeman field, making this boundary presumably of higher symmetry [26]. When \(\epsilon_{z}>0\) and \(\epsilon_{v}=0\), the SVE, therefore, starts nucleating at the boundary of the CDW and the KD-AF and grows with increasing \(\epsilon_{z}\) until it occupies the whole region between the CDW and the FM at a critical value of the Zeeman field \(\epsilon_{z}^{c}=2\Delta_{z}\). The transition of KD-CAF with the FM is continuous, i.e the spin of the KD-CAF cants continuously until it reaches the fully polarized value of \(s_{z}=2\). The KD-CAF is therefore expected to have the similar signatures as the standard CAF state in spin sensitive probes, such as the magnon transmission experiments of Ref. [1].
It is also useful to consider the limit when (\(\epsilon_{v}\neq 0\)) is present but the Zeeman coupling vanishes (\(\epsilon_{z}=0\)). This leads to the canting of the KD, similarly to the \(N=0\) Landau level, as discussed in Refs. [27] and also as shown in Fig. 2(d). Interestingly, since the KD-AF is simultaneously anti-ferromagnetic in the valley space and in the spin space, it will undergo canting of the valley pseudo-spins towards the z-axis driven by the finite \(\epsilon_{v}\). We also find that the \(\epsilon_{v}\) also induces an intermediate coexistence region at the boundary between CaKD and AF analogous to the coexistence region of Ref. [18; 19]. (see Fig. 2d). On the other hand, for \(\epsilon_{v}>0\) and \(\epsilon_{z}=0\), the SVE state now starts growing from the boundary of the FM with the KD-AF whereas the SVE is always
degenerate with the KD-AF at its boundary with the CDW. The CaKD-AF persists until a critical value of the valley Zeeman, \(\epsilon_{v}^{c}=2\Delta_{z}\).
_Ground states with both Zeeman and valley potentials._ We now turn to the general case where both the Zeeman coupling and the hBN substrate are present. Our results are illustrated in Fig. 1a). We again find a coexistence of the CaKD and the CAF along a sliver of the phase diagram. However, the main qualitative difference is that the KD-AF state transforms into a new spin-valley entangled state that we call the E-KD-AF when both spin and valley Zeeman fields are simultaneously present, as depicted in Fig. 1b). This tendency orginates from the fact that the KD-AF state gains energy by canting either in the spin and valley direction under the presence of spin or valley Zeeman terms, but it is impossible to construct disentangled states that cant simultaneously in this way (see Table 1). We have found the exact coordinates of the spin-valley entangled minima of the Hartree-Fock functional in Eq. (2) and they satisfy \(\beta=\theta_{s}=\theta_{p}=0\), which is shared by all the phases in the right two quadrants of the phase diagrams. The E-KD-AF is now sandwiched between two spin-valley entangled SVE phases of Ref. [19] in the region between the FM and the CDW (see Fig. 1a)), yet represents a qualitative distinct phase.
We can distinguish the two competing spin-valley entangled phases, namely the E-KD-AFM and the SVE of Ref. [19] by their order parameters, \(\hat{O}_{ij}=Tr\{P_{\tau i}s_{j}\}\) (see supplementary material for further details). Both of them have a vanishing total of valley and spin in the \(x-y\) plane, \(\hat{O}_{a0}=\hat{O}_{0a}=0\), with \(a=x,y\). However, the SVE phase of Ref. [19], has the order parameters \(\hat{O}_{xx},\hat{O}_{yy}\) locked to be equal \(\hat{O}_{xx}=\hat{O}_{yy}=\sin a\), while for the E-KD-AF these order parameters are generally distinct and given by \(\hat{O}_{xx}=\sin a_{1}+\sin a_{2},\hat{O}_{yy}=-\sin a_{1}+\sin a_{2}\) (see Table 1 for the values of \(a_{1,2}\)). Moreover, as illustrated in Fig. 1b), the concurrence of the SVE is different than that of the E-KD-AF (see also Table 1).
_Summary and Discussion._ We have studied the integer quantum Hall ferromagnet states of graphene within a model applicable to any of its Landau levels, and focused on the case of half-filling when two out of four of its nearly degenerate spin-valley states are filled. Our model accounts for valley symmetry-breaking interactions beyond pure delta functions, and includes the simulatneous presence of the Zeeman coupling and a substrate-induced valley symmetry breaking potential (e.g. from alignment with a hBN substrate). We have computed the concurrence measure of entanglement which allows us to quantify the degree of spin-valley entanglement of these states.
Besides the known spin-valley disentangled states such as the antiferromagnet and the Kekule valence-bond-solid, we have found a delicate competition of states featuring spontaneous spin-valley entanglement, akin to that arising from spin-orbit coupling, but whose origin stems purely from interaction driven spontaneous symmetry breaking. Notably, we have found a state which only becomes entangled under the simultaneous presence of spin and valley Zeeman terms and interactions with longer range than pure delta functions, which we term the Entangled-Kekule-Antiferromagnet state (E-KD-AF). This tendency arises because this state combines features of the anti-ferromagnet and the Kekule states, and the state tries to cant simultaneously in the spin and valley Bloch sphere in order to gain energy from these single particle terms, but it can only achieve this at the expense of becoming spin-valley entangled.
_Acknowledgements._ We would like to thank Ganpathy Murthy, Chunli Huang and Nemin Wei for valuable discussions. NS acknowledges useful discussions with Panaigiotis Giannakeas and Hongzheng Zhao. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) through a research grant with project number 518372354.
|
2309.12168 | This is the Table I Want! Interactive Data Transformation on Desktop and
in Virtual Reality | Data transformation is an essential step in data science. While experts
primarily use programming to transform their data, there is an increasing need
to support non-programmers with user interface-based tools. With the rapid
development in interaction techniques and computing environments, we report our
empirical findings about the effects of interaction techniques and environments
on performing data transformation tasks. Specifically, we studied the potential
benefits of direct interaction and virtual reality (VR) for data
transformation. We compared gesture interaction versus a standard WIMP user
interface, each on the desktop and in VR. With the tested data and tasks, we
found time performance was similar between desktop and VR. Meanwhile, VR
demonstrates preliminary evidence to better support provenance and sense-making
throughout the data transformation process. Our exploration of performing data
transformation in VR also provides initial affirmation for enabling an
iterative and fully immersive data science workflow. | Sungwon In, Tica Lin, Chris North, Hanspeter Pfister, Yalong Yang | 2023-09-21T15:25:46Z | http://arxiv.org/abs/2309.12168v1 | # This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality
###### Abstract
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.
Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality
## 1 Introduction
Data transformation is a data science process that converts a data set into the desired format to enable subsequent data science tasks, like visualization and modeling [1]. It is well recognized that data scientists need to spend an excessive amount of time doing data transformation, making it essential but also the most tedious and time-consuming aspect of a data science project [2, 3]. Using a programming language, like SAS, R, or Python, is the standard way of performing data transformation. However, as data science becomes ubiquitous and exposed to people with limited programming knowledge, the prerequisite of knowing to program makes data science inaccessible to a large group of professionals whose workflows involve data [4], which we called _non-technical data workers_. As a result, the back-and-forth communications caused by data science's iterative and open-ended nature can heavily inhibit insight discovery and decision-making.
In response, like in many other data science processes, there is an increasing trend of providing user interface based (UI-based for short) tools for data transformation (e.g., Tableau [5] for visualization and AutoML [6, 7] for modeling). These UI-based tools lower the entry barrier for data science and also help reduce errors [8]. However, even though there exist several commercial UI-based data transformation tools (e.g., Tableau Prep Builder [9], Trifacta [10], and Alteryx [11]), the field lacks an empirical understanding of people's experiences in using these tools and the considerations in designing them. Moreover, while the WIMP (windows, icons, menus, pointer) metaphor is typically used for constructing UI-based tools, modern interaction techniques that allow direct manipulation of the visual elements in the same space (named _embedded interaction_) [12, 13] were found to be more time-efficient in specific scenarios, like manipulating visualizations [14, 15]. Most notably, Kandel et al. [2] and Nandi et al. [16, 17, 18] found their UI-based tools to be more efficient in low-level data transformation tasks. However, it is unclear if their findings can be generalized to more realistic and complicated scenarios. **Our first goal** is to investigate the potential benefits of embedded interaction techniques over traditional WIMP interfaces in more realistic data transformation tasks.
On the other hand, in addition to interaction techniques, the rapidly evolved display and interaction environments (e.g., virtual and augmented reality or VR/AR) offer tremendous opportunities for creating innovative human-computer interaction experiences. Specifically, there is a growing interest in using VR/AR for data analysis, bringing an emerging research topic -- Immersive Analytics [19, 20]. From recent studies, there are two most frequently reported motivations for using VR/AR in analytics: _large display space_[21, 22, 23, 24] and _embodied interaction_[25, 26, 27]. We believe there is great potential to explore whether those identified benefits can be generalized in improving the data transformation workflow. On the other hand, standard mid-air methods in VR/AR are not suitable for tasks requiring high-precision interactions [28, 29], which could be inevitable in some data transformation tasks. Therefore, **our second goal** is to investigate how these identified pros and cons can affect data transformation tasks in immersive environments.
Data science is iterative by its nature and does not follow a sequential pipeline. Consequently, alternating between different steps is inevitable [1]. For example, after observing some visualizations, analysts may need to perform extra data transformations for the next analysis iteration.
Although visual exploration (e.g., ImAxes [26] and DataHop [30]) is feasible in a fully immersed manner, there is no immersive data transformation tool (i.e., tools for explicitly changing data table formats). When analysts want to use immersive visualization, they have to switch between VR and desktop to complete the iterative data science tasks, causing high overhead for context-switching. To this end, we study immersive data transformation tools to progress to a future where analysts can be fully immersed in VR for the entire data science workflow and maximize the benefits of the next generation of display and interaction environment.
To close these gaps, we developed prototypes with embedded interactions on the desktop and embodied interactions in VR for non-technical data workers to support essential data transformation operations. Compared to a standard WIMP user interface, users can directly manipulate data tables through mouse or physical gestures (e.g., overlay one table on top of another to _merge_ them, see Fig. 1). We compared our interaction designs to WIMP for desktop and VR. To best simulate real-world scenarios, instead of testing low-level tasks, we asked participants to transform a set of data tables into a target format. We found that participants required a similar amount of time to complete data transformation in VR and on a desktop. Meantime, VR demonstrated the potential to facilitate strategic thinking and support provenance better. Subjectively, participants found the WIMP user interface on a desktop most familiar, and using VR was more physically demanding. On the positive side, VR was perceived as more engaging, and participants overall preferred the gesture-based experience in VR. **The contributions of this paper are twofold**: _first_, the designs of gesture-based interactions for essential data transformation operations on desktop and in VR; _and second_, a user study systematically investigating the effect of interaction methods (WIMP vs. gesture) and computing environments (desktop vs. VR) on performing essential data transformation operations.
## 2 Related Work
Our work is built upon two lines of prior work: data transformation tools and interaction methods. We also extend immersive analytics by enabling data transformation, an essential data science workflow, in immersive environments.
### _Data Transformation Tools_
**Programming-based tools** are widely used by people with expertise and experience in programming. A wide range of libraries has been developed to support data transformation, like Pandas [31], dplyr [32], tidyr [33], and plyr [33]. Programming-based tools are _expressive_, and users can use their almost "exhaustive" APIs and parameters to complete various data transformation tasks. However, mastering these tools requires extensive training and leads to a steep learning curve. Debugging complicated scripts is also oftentimes challenging [34]. There are a series of attempts to address these issues. As representative examples, DataLore [35] provides code suggestions to speed up the data transformation process. Along a similar line, Wrex [36] uses the notion of programming-by-example to generate data transformation code. On the other hand, Somnus [34] visualize data transformation scripts to help debug and gain a better overview of the process. Yet, users of these tools are still expected to be experienced in programming, which excludes a large group of non-technical data workers.
**UI-based tools** allow people to manipulate their data without programming knowledge. Microsoft Excel is undoubtedly the most popular UI-based data transformation tool. Performing simple operations (e.g., editing values, sorting and filtering a column) is straightforward with its WIMP user interface. However, more complicated operations (e.g., merging two tables) requires knowing the specific "secret" menu item or writing code. A few commercial UI-based tools aim to allow the users to quickly find the needed menu items, like Tableau Prep Builder [9], Trifacta [10], and Alteryx [11]. Wrangler [2], meanwhile, provides natural language as ways users can specify the intended operations. However, users could get into trouble with discoverability (i.e., the ability to find and execute features) as a common challenge faced by a conversational user interface [37]. GestureDB project enables gestures to describe the intended database queries [16, 18]. Our gesture design on the desktop environment shares many similar characteristics with their system, and we adapted and extended the gesture-based data transformation method to VR.
**Empirical results.** Some of the proposed tools have been evaluated, for example, Wrex was found more beneficial than a standard programming interface [36], and Wrangler was found to outperform Excel [2]. Additionally, the GestureDB system was found to be more effective than the programming interface and non-gesture UI-based interface in performing single operations [16, 18]. We focus on UI-based data transformation tools as they lower the barrier for non-technical data workers. Our study aims to enrich the empirical understanding of using UI-based data transformation tools in tasks that require a series of operations.
### _Embedded and Embodied Interactions_
We consider both embedded and embodied interaction under the same notion of _direct manipulation_ of visual representations. Direct manipulation contrasts with the standard WIMP UI design, which requires users to trigger operations on a space-separated area different from the area with visual representations.
Performing direct manipulation on a flat screen is considered **embedded interaction**. It has been widely used in many UIs. For example, when uploading an email attachment, instead of clicking a button to select a file from a newly opened file browser, people can drag&drop the file into the window. In addition to this simple example, it has been used in many other applications, like annotation [38], image editing [39], and content organization [40]. For data science, some work explored embedded interaction in manipulating data visualization [12, 14, 15, 41, 42, 43], analysis [43, 44] and modeling [45]. Most relevant to our work, GestureDB demonstrated some benefits of embedded interaction for elementary operations [16, 18], and we aim to study its potential benefits in more complex data transformation tasks.
Performing direct manipulation using body movement is considered **embodied interaction**. The ability to track physical movement is essential to enable embodied interaction, which is an intrinsic characteristic of VR. As a result,
many basic VR interactions are embodied. For example, grab&move virtual objects, and rotate the head to change the viewpoint. Embodied interaction has been explored for authoring visualizations [26], navigating in space [23, 46], and switching between different views [25]. We are interested in how we can adapt and extend the gesture designs from desktop to VR and whether the benefits of embodied interaction can be generalized to data transformation tasks.
As evaluated in the aforementioned works, one of the motivations of _direct manipulation_ design is to reduce the number of context-switching needed by having the interaction and visual representation in the same display area. We are interested in if this identified benefit can facilitate data transformation tasks.
### _Immersive Analytics Toolkits_
Immersive Analytics has exploded into a fast-growing body of research on techniques and toolkits [19]. Existing Immersive Analytics research strongly focuses on data visualization [20, 47]. Specifically, a few toolkits enable data scientists to create immersive data visualizations, including DXR [48], VRIA [49], IATK [50], DataHop [30] and ImAxes [26]. While ImAxes and DataHop provide a fully immersive visualization authoring experience, the others require users to create and configure visualizations on the desktop and view visualizations in the immersive environment. More importantly, a data science project is oftentimes iterative and includes more than just data visualization. Immersive data visualization alone cannot fully leverage immersive environments for data analysis. In this study, we explore how we can enable immersive data transformation to make progress toward a fully immersive data science workflow.
## 3 Embedded and Embodied Gesture Design
Our study is intended to investigate the opportunities in using novel interaction methods (i.e., embedded and embodied Gesture) and emerging computing environments (i.e., VR) to better support data transformation. We first reviewed the literature to identify the necessary operations and then designed gestures for both Desktop and VR.
### _Selecting Data Transformation Operations_
Kasica et al. [51] summarized 21 fine-grained data transformation operations across five categories (i.e., create, delete, transform, separate, and combine) on three targets (tables, columns, and rows), see Table I. As the first attempt to compare data transformation experience on desktop and in VR, we focus on basic operations that are commonly used by non-technical data workers without the need for programming. To this end, we excluded operations that require programming-like input, including _create_, _transform_, _separate_, and _combine_ operations on rows and columns. For example, one typical _combine columns_ operation can be inputting a formula to calculate the weighted average of selected columns. One exception was the _summarize rows_ operation, as it does not require inputting a formula.
In summary, we support 12 operations, including all nine table, one-column, and two-row operations. See Table I.
### _Designing Gestures_
After analyzing the selected operations, we found that some operations have differences at the semantic level but imply the same interaction analogy. Specifically, Kasica et al. described _extend_, _supplement_, and _match_ as operations to
Figure 1: Four conditions designed for performing data transformation in the user study, including a combination of desktop or VR environments, and WIMP or gesture interactions.
combine two data tables, with their difference being row-wise vs. column-wise combination or inner-join vs. outer-join [51]. All three operations suggest an interaction on two tables that results in one table. Thereby, we found one gesture (i.e., _merge_, see Fig. 1) can meet the semantic requirements of all three operations. The same applies to _subset_ and _decompose_, and we used a combination of _filter-extract_ gestures for them. Deleting tables, rows, and columns shares a similar gesture as the metaphor of throwing things away but differs in the target selection.
We iterate the gesture designs among the team members. The design objective is to ensure the gestures can intuitively reflect their semantic meanings of data transformation and that no conflicts exist between different gestures. Some initial gesture designs were inspired by the teaching materials of an undergraduate data science course taught by two co-authors, where they frequently used gestures as metaphors for data transformation operations. Specifically, within a data table, horizontal movements were naturally linked to column operations, while vertical movements were considered row operations. Meanwhile, operations involving multiple data tables were intrinsically demonstrated as two-handed gestures. This initial design covered a good range of operations summarized by Kasica et al. [51]. We further introduced more gestures to increase the coverage of data transformation operations, Table 1. In summary, we have designed the following gestures:
* **Extract**: after selecting the target row(s) or column(s), _pull_ them out from the original data table to create a new table with the selected content. The original table will be kept, and a new table will be created.
* **Merge**: _move_ one table to _collide_ with another table and _release_ to combine two tables into one. If the key columns [2] are selected and match the criteria, an operation similar to _JOIN_ in SQL will be performed. If no key columns are selected, and two tables share the same structure, an operation similar to _UNION_ in SQL will be performed. Otherwise, the two tables cannot be combined. After merging, the original tables will be kept, and a new merged table will be created.
* **Filter**: for each column, a histogram is created to show its distribution. _Brush_ the histograms to select the range of values to keep. The original table will be updated.
* **Sort**: _swipe_ at a column from top to bottom or bottom to top to rearrange data in ascending or descending order. The original table will be updated.
\begin{table}
\begin{tabular}{l|l|l} \hline \multicolumn{1}{c|}{**Operation**} & \multicolumn{1}{c}{**Gesture**} \\ \hline \hline \multirow{6}{*}{**G**} & Create tables & **X** \\ & Create columns & **X** \\ & Create rows & **X** \\ & delete tables & delete tables \\ & delete columns & delete columns \\ & delete rows & delete rows \\ & rearrange tables & sort \\ & reshape tables & reshape \\ & transform columns & **X** \\ & transform rows & **X** \\ \hline \multirow{6}{*}{**G**} & subset tables & filter + extract \\ & decompose tables & filter + extract \\ & split tables & extract \\ & separate columns & **X** \\ & separate rows & **X** \\ \hline \multirow{6}{*}{**G**} & extend tables & merge \\ & supplement tables & merge \\ & match tables & merge \\ & summarize rows & group / ungroup \\ & combine columns & **X** \\ \cline{1-1} & interpolate rows & **X** \\ \hline \end{tabular}
\end{table}
Table 1: A summary of our supported data transformation operations and their matching gestures.
Figure 2: VR data table and gestures for data transformation tasks in the VR+Gesture condition. VR controllers, represented as hand models in the VR prototype and in the figure above, are used to perform these operations. The operation that was excluded from the study is marked with *.
* **Group/Ungroup**: after selecting the target column(s), _squeeze_ to aggregate the values, while _expand_ to restore the aggregated values to their original values. The original table will be updated.
* **Reshape**: after selecting the target column(s), _rotate clockwise_ to transform LONG data shape to WIDE shape, while _rotate counterclockwise_ to transform WIDE data shape to LONG shape. For LONG to WIDE, the values in target column(s) will be grouped into key-value sets, while for WIDE to LONG, the values in target column(s) will be categorized. The original table will be updated.
* **Delete**: after selecting target row(s), column(s) or table(s), _throw_ them away to remove the selected content. The original table will be replaced.
Our gesture design on Desktop shares some similarities with the GestureDB system [16, 18]. We further adapted and extended those gestures into embodied interactions in VR. Most gestures require only one input device (i.e., for _extract_, _merge_, _filter_, _sort_, _reshape_, and _delete_). Those gestures are almost identical in interaction behaviors on Desktop and VR (i.e., consist of actions like click, drag, and drop). In VR, the ability to use two input devices (i.e., left and right hand-held controllers, which are visually represented as hands in our prototype) provides an alternative way to _merge_ data tables: people can manipulate two data tables simultaneously by grabbing one in each hand and moving them close to _merge_ them. The _group/ungroup_ gestures also leverage the two input devices in VR to _squeeze_ and _expand_, while we use the _draw a circle_ counterclockwise and clockwise to imitate the same semantic meaning on the Desktop due to only one input device is available.
The designed gestures for VR are illustrated in Fig. 2, and the desktop gestures are presented in Fig. 3. We conducted a pilot study with three computer science graduate students who have data science experience to verify the usability of our designed gestures. Throughout this pilot, we confirmed the feasibility of our chosen gestures to complete data transformation tasks. The pilot study revealed no major operational issues and confirmed the intuitiveness of our gestures.
## 4 User Study
This study involves two primary experimental factors: the computing environment (or Environment, i.e., Desktop vs. VR) and the interaction method (or Interaction, i.e., WIMP vs. Gesture), see Fig. 4.
### _Study Conditions_
To systematically investigate the two primary experimental factors, we included four conditions that cover all their interactions (Fig. 4), namely, \(\includegraphics[width=14.0pt]{_1}{_1}\), \(\includegraphics[width=14.0pt]{_2}{_2}\), \(\includegraphics[width=14.0pt]{_3}{_1}\), \(\includegraphics[width=14.0pt]{_4}{_2}\), \(\includegraphics[width=14.0pt]{_4}{_3}\), \(\includegraphics[width=14.0pt]{_4}{_4}\), \(\includegraphics[width=14.0pt]{_4}{_5}\), \(\includegraphics[width=14.0pt]{_6}{_6}\), \(\includegraphics[width=14.0pt]{_7}{_8}\), \(\includegraphics[width=14.0pt]{_8}{_9}\), \(\includegraphics[width=14.0pt]{_9}{_10}\), and \(\includegraphics[width=14.0pt]{_9}{_11}\). The conditions are also demonstrated in the supplemental video.
**Desktop Conditions**: On the Desktop, we provide an _infinite canvas_ as the primary working space. The infinite canvas is a _zoomable_ and _pannable_ canvas with no boundary where the user can place and move digital content (data tables, in our case). It provides extra freedom to content organization and overcomes the size limitation of a physical screen. Various commercial tools (e.g., Miro [52], Google Jamboard [53], Microsoft Whiteboard [54], and SAGE [55]) use it as their working space, and there are also a series of attempts of using it in data science [55, 56, 57, 58]. Following their success and design, we allow the user to zoom in and out of the workspace by scrolling the mouse scroll wheel. The same interaction has been implemented
Figure 3: Desktop data table and gestures for data transformation tasks in the Desktop+Gesture condition. The operation that was excluded from the study is marked with *.
in many widely used Zoomable User Interfaces (ZUI), like Google Maps. The user can also move data tables to any desired location using drag&drop. A mouse is the input device for the two Desktop conditions. In _Desktop+WIMP_, the user clicks buttons to trigger operations (Fig. 1(a)). We provide one button for each operation and place all buttons on a panel that is fixed on the right side of the screen. It is visible to the user all the time. In _Desktop+Gesture_, the user uses our implemented gestures (Fig. 3) to perform operations (Fig. 1(b)).
\(\mathfrak{CVR}\)**Conditions**: In VR, we allow the user to physically move in the space and freely place and move data tables to any location around them. In _VR+WIMP_, operation buttons were placed on a panel and interacted in the same way as in Desktop+WIMP. Due to the different display spaces between Desktop and VR, it is unclear where to place this panel. To make the panel placement in VR as close as the Desktop+WIMP condition, we initially placed it using a head-reference approach [59], i.e., the panel will move as the user rotate their head, always visible to the user. However, our preliminary test indicates that such a design is distracting and annoying. We then changed the design to attach the panel to a left-hand-held controller according to [23], where the user can easily access or hide it with arm movements. The latter design was clearly preferred by the users we tested with, and was used in the user study (Fig. 1(c)). We also noticed participants struggled to select rows and columns precisely in our pilot study. Participants' comments revealed the need for a real-time visual indicator for selections. Thus, we rendered a "red dot" to indicate the pointer position on the data table. This mitigated the difficulties in selecting rows and columns based on another round of pilot tests. In _VR+Gesture_, the user uses our implemented gestures (Fig. 2) to perform operations (Fig. 1(d)).
**Summary**: WIMP and Gesture differ in the way they trigger the operations. Additionally, performing operations in WIMP requires the user to move the cursor or pointer back and forth between the data tables and menu panel, which is likely to introduce a context-switching cost, while the gestures are directly operated on the data tables. Desktop and VR both have an "infinite" display space and let the user reposition the table at any location. Regarding the navigation method, the Desktop provides pankzoom, while VR enables physical navigation.
### _Participants_
We recruited 20 participants (Male=16, Female=4; Age from 18 to 35) from the university mailing list after screening for their data transformation experiences with a five-question quiz. 20 out of 25 respondents answered at least four questions correctly and were invited to participate in the study. Nine participants indicated they use VR regularly on a weekly basis, another nine only used VR occasionally, and the rest two had no VR experience. All participants had normal or corrected-to-normal vision. We provided a $20 Amazon Gift Card as compensation for each participant.
### _Experimental Setup_
For VR conditions, we used a Meta Quest 2 virtual reality headset with \(1920\times 1832\) resolution per eye and a 90 Hz refresh rate. For Desktop conditions, we used a 27" monitor with a \(2560\times 1440\) resolution and 75 Hz refresh rate, which is a standard office setup. Both conditions use a PC with an Intel i7-11800H 2.30 GHz processor and NVIDIA GeForce RTX 3070 graphics card. We used the Air Link feature from Meta, which uses the PC for computation and the headset for rendering. Air Link enables a wireless experience while still leveraging the stronger computing power from the PC. Meanwhile, participants could move around more confidently without worrying about being tripped by cables. The study took place in the space of 3.5 x 3.5 meters (\(12.25m^{2}\)), and we let participants freely walk around the given area in VR conditions. The participants were asked to place themselves in the center of the actual space at the beginning of every VR condition. In Desktop conditions, participants sat on a comfortable office chair with the monitor placed in front of them on an office desk.
On the Desktop, every initial data table, including the target data table, was the same size as \(530\times 400\) pixels. The initial tables form a grid layout to maximize the use of display space, and the target table was placed in the middle for participants to reference and remember easily. In VR, five initial data tables, each with the size of \(1.15\times 0.65\) meters, were placed \(1.65\) meters in front of the participant's initial position in a semi-circular curved layout, which was identified as an effective space use strategy in VR for multi-window applications [60, 21], which allows all data tables to be placed at the same distance within participant's reach. The target table in VR was placed \(35\) cm higher than the initial table for the same purpose as in the Desktop conditions, i.e., to be easily distinguishable from other tables. The initial data tables and the target data table cannot be deleted. This is to avoid accidentally deleting those data tables and being unable to complete the task.
### _Task and Data_
We asked our participants to _perform data transformations with five given data tables to produce a table in the target format_. Participants could move and resize all the given data tables and the target data table. To decide the number of given data tables, we piloted three different options (i.e., three, five, and eight). We found the task was too obvious with three tables and too difficult with eight tables, so we decided to use five tables for the study.
After training, the participants were asked to use data transformation operations to complete the task with each condition. Initially, we included all operations from Sec. 3.2. However, in our pilot tests, we found participants had a hard time conceptually understanding and applying the _reshape_ operation, resulting in an unexpectedly long completion time for each trial (\(>\)30 minutes). This was aligned with some previous works [2, 51, 61], pointing out the _reshape_ operation can be too complicated for novice users. Thus, we decided to remove the _reshape_ operation from the
Fig. 4: Our study compares two primary factors, leading to four conditions.
task to target non-technical data workers and control the study duration. We further conducted another test without using _reshape_. Participants could complete the task in a reasonable amount of time (around 15 minutes per trial) without struggling. To ensure the difficulty of each trial was similar, all trials required a minimum of twelve operations. The sequence of the required minimum operations in each trial was different to reduce the learning effect.
We used tabular data sets collected online for training and study tasks. To eliminate the effect of participants' previous experience, we explicitly told them the data was not reflective of real-world information. We also controlled the data size of initial data tables (row count for each: 30, total column count: \(\sim\)20) and target table (row count: 10, column count: \(\sim\)6). We have included all our study stimuli in the supplementary material.
### _Design and Procedures_
The conducted user study followed a full-factorial within-subjects design. We used a Latin square (4 groups) to balance the study conditions. Each participant completed four study trials, i.e., one for each condition. The user study lasted two hours on average. The participants were first welcomed and reviewed to sign a consent form. Participants were then instructed about the purpose and steps of the study. They will then complete the following components of the study:
**Adjustment**: We asked participants to adjust the Quest 2 headset (e.g., the IPD) to a comfortable setting for the VR conditions. Similarly, for the Desktop conditions, participants were instructed to adjust the chair height to their preference before starting the tasks. We confirmed that all participants could see the sample text clearly in all conditions before proceeding.
**Training**: We first introduce the data transformation terms, considering people might use different terminologies for the same data transformation. We then introduced the computing environment (i.e., Desktop or VR) when a participant first encountered it. Sufficient time was provided for them to get familiar with the hardware until the participant asked to continue (usually around five minutes). For each study condition, when it first appeared to the participant, we first asked them to watch a video demonstrating each operation in that study condition. We confirmed that the participant fully understood how to perform each operation before moving to the next one. After that, we asked participants to perform the same task as in the user study but with only three initial data tables and one target data table. In this phase, we encouraged participants to ask questions about interactions and study tasks. All participants completed the training by finishing the task and confirmed familiarity with the study conditions (the training task took around five to seven minutes).
**Study Task**: The study task with each study condition started after participants completed the training session for that condition. Before we started the study, we ensured participants were well-informed by providing sufficient context. This included a brief overview of the datasets and the high-level semantic meaning of the target data table. Participants had no time limit for task completion, but we instructed participants to complete the task as accurately and as fast as they could. For the VR environment, we repositioned participants to the room's center and let them face the same direction before each study task. All participants were able to complete the study task.
**Questionnaires**. _Post-condition questionnaires_: after completing the study task with each condition, participants were first asked to recall their performed operations in sequence. They were informed about this question at the beginning of the user study. They then filled out a Likert-scale survey adapted from SUS and NASA TLX to rate their subjective experience and provide qualitative feedback about the pros and cons of that condition. _Post-study questionnaires:_ after completing all study tasks and post-condition questionnaires, participants were asked to rank all study conditions based on their overall experience. We asked them to provide demographic information at the end.
### _Measures_
We collected quantitative data and interaction records for each study condition to capture their task performance and sense-making process. Specifically, we used the following measures. **Error score**: we compared the difference between the target table and the participant's result table by rows and columns. Each difference contributed to one error score. The order of rows was considered (as _sorting_ operation was included in the task), while the column order did not affect the error score. **Time**: we measured the time from the initial data tables that were first rendered to the participant's task completion. **Number of operations**: we recorded the total number of operations performed by participants to complete the task. **Recall score**: we calculated the _Levenshtein distance_ between the participant's actual performed operations sequence and recalled operations sequence that were collected in the questionnaire. A lower value means a closer match and suggests participants can remember their action history more accurately. **Number of performed delete operations**: we recorded the total number of performed table deleting operations in each study trial. **Number of data tables left**: we documented the number of data tables when the participant completed each study trial.
We also collected subjective ratings on a seven-point Likert scale for **mental demand**, **physical demand**, **learnability**, **engagement**, and **usability**. Lower mental and physical demands were considered positive, while higher learnability, engagement, and usability were treated as beneficial. Participants also **ranked** their overall experience. **Qualitative feedback** about each condition's pros and cons were collected from participants. Two authors derived a set of codes from the responses of the first five participants and applied the codes to the remaining responses.
### _Hypotheses_
We developed our hypotheses based on previous empirical results and our analysis of study conditions, see Sec. 4.1.
**Error score**. We did not expect any difference in the error score as all required operations were provided consistently across all conditions. We believed that participants could complete the study task for all conditions successfully.
**Time**. We expected Desktop to outperform VR (\(H_{time-env}\)) based on previous studies, which found desktop interactions faster than VR interactions due to less required movement [27, 62, 63, 64]. Meanwhile, we anticipated Gesture to be faster than WIMP (\(H_{time-interaction}\)) considering better completion time of embedded/embodied interaction over WIMP found in earlier research [14, 15].
**Number of operations**. Our tasks require a sense-making process of foraging and structuring information to solve the problem. Under such a context, we considered VR requires a fewer number of operations than Desktop (\(H_{ops-env}\)) based on the previous investigation of sense-making in immersive space [24, 65]. We also expected Gesture requires a fewer number of operations than WIMP (\(H_{ops-interaction}\)). With a lower context-switching cost, Gesture would have fewer disruptions from navigation [66] and require less "mental map" rebuilding for the users [67, 68].
**Recall score**. We expected participants to have a better recall performance in VR than Desktop (\(H_{recall-env}\)), as VR with a 3D spatial environment was found to be more effective than Desktop for memorizing and retrieving information [69]. Meanwhile, we foresaw Gesture outperforms WIMP (\(H_{recall-interaction}\)) since performing body motions with the Gesture has a positive effect on memorability [70].
**Number of performed delete operations** and **data tables left**. We predicted a fewer number of delete operations (\(H_{del-env}\)) and a larger number of data tables left (\(H_{left-env}\)) in VR than in Desktop. We argue that the larger display space in VR allows participants to keep more intermediate results and reduce the need to delete them. We also anticipated Gesture requires a fewer number of delete operations (\(H_{del-interaction}\)) and has a larger number of data tables left (\(H_{left-interaction}\)) than WIMP. As discussed, Gesture has a lower context-switching cost than WIMP, which can reduce the content organization workload and increase the maximum number of data tables participants can handle.
## 5 Results
We present our statistical results regarding our hypotheses, outline participants' strategies for using the display space and summarize qualitative feedback for each condition. For dependent variables or their transformed values that met the normality assumption, we used _linear mixed modeling_ to evaluate the effect of independent variables on the dependent variables [71]. Compared to repeated measure ANOVA, linear mixed modeling does not have the constraint of sphericity [72, Ch. 13]. We modeled all independent variables (Environment and Interaction), and their interactions as fixed effects. A within-subject design with random intercepts was used for all models. We evaluated the significance of the inclusion of an independent variable or interaction terms using a log-likelihood ratio. We then performed Tukey's HSD posthoc tests for pairwise comparisons using the least square means [73]. We used predicted vs. residual and Q--Q plots to graphically evaluate the homoscedasticity and normality of the Pearson residuals respectively. For other dependent variables that cannot meet the normality assumption, we used the _Friedman_ test to evaluate the effect of the independent variable, as well as a Wilcoxon-Nemenyi-McDonald-Thompson test for pairwise comparisons. Significance values are reported for \(p<.05(*)\), \(p<.01(**)\), and \(p<.001(***)\). We also report mean values, 95% confidence intervals (CI), as well as Cohen's d as an effect size indicator for significant comparisons.
### _Quantitative Results_
Results are illustrated in Fig. 5, Fig. 6, and Fig. 7. All statistical analyses and anonymized data were included in the supplementary material.
**Error score.** As expected, all participants could complete the study task correctly under all conditions.
**Time.** Surprisingly, we found Environment (\(p=0.14\)), Interaction (\(p=0.51\)) and their interaction (\(p=0.42\)) did not have a significant effect on time. All conditions took similar amount of time: Desktop+WIMP (634s, CI=738), Desktop+Gesture (727s, CI=134s), VR+WIMP (744s, CI=116s), and VR+Gesture (728s, CI=96s). Desktop+WIMP tended to be slightly faster but without statistical significance. Thus, we reject \(H_{time-env}\) and \(H_{time-interaction}\).
**Number of operations.** We found significant effects of Environment (\(***\)), Interaction (\(***\)) and their interaction (\(***\)) on the total number of performed operations. VR conditions (WIMP with 21.0, CI=2.90 and Gesture with 20.5, CI=2.54) required less number of operations to complete the study task than Desktop conditions (WIMP with 26.6, CI=4.22 and Gesture with 43.8, CI=7.06). All comparisons were statistically significant (\(*\)) except for the comparison between VR+WIMP and Desktop+WIMP (\(p=0.057\)). On Desktop, Gesture also required more operations than WIMP (\(***\)). In summary, we accept \(H_{ops-env}\), and reject \(H_{ops-interaction}\).
**Recall score.** We found significant effects of Environment (\(***\)), Interaction (\(***\)) on the recall score, with a marginally significant effect from their interaction (\(p=0.081\)). Condition-wise, participants can remember their action history better in VR+WIMP (5.0, CI=1.83), VR+Gesture (7.4, CI=2.20), and Desktop+WIMP (7.3, CI=1.94) than Desktop+Gesture (18.6, CI=5.42). VR+WIMP was also marginally more memorable than Desktop+WIMP (\(p=0.078\)). For both the WIMP and Gesture, VR could better support the recall
Figure 5: Measurement of time, the total number of operations, recall score, the number of delete operations, and the number of tables left by task. Solid lines indicate statistical significance with \(p<0.05\), and dashed lines indicate \(p<0.1\). The tables below show the effect sizes for pairwise comparison. Circles with black borders indicate the winning conditions.
process than a Desktop. Dekstop+WIMP was also found more memorable than Desktop+Gesture (\(**\)). In conclusion, we accept \(H_{recall-env}\), and reject \(H_{recall-interaction}\).
**Number of performed delete operations and number of data tables left.** For the number of performed deleting operations and the number of left tables, there was a significant effect from Environment (all \(***\)). Interaction only had a significant effect on the number of performed deleting operations (\(*\)). Participants deleted fewer tables and kept more data tables in VR than in Desktop (all \(***\)). In summary, along with accepting \(H_{del-env}\) and \(H_{left-env}\), we reject \(H_{del-interaction}\) and \(H_{left-interaction}\).
**Ratings.** We found a significant effect of the study condition on _physical demand_ (\(***\)), _learnability_ (\(**\)), and _engagement_ (\(***\)). Participants found VR conditions (WIMP with 4.0, CI=0.71 and Gesture with 3.7, CI=0.92) more physically demanding than Desktop conditions (WIMP with 2.25, CI=0.69 and Gesture with 2.6, CI=0.88). Participants considered Desktop+WIMP (4.7, CI=0.55) most easy to learn, with Desktop+Gesture (3.45, CI=0.81), VR+WIMP (4.0, CI=0.55), and VR+Gesture (3.7, CI=0.76). VR+Gesture (6.4, CI=0.41) was found more engaging to use than Desktop conditions (WIMP with 4.7, CI=0.73 and Gesture with 5.3, CI=0.64).
**Ranking.** Participants ranked the VR+Gesture as providing the best overall experience, with 65% ranked it as first place, and 25% ranked it as second place (i.e., 90% in total ranked VR+Gesture as the first or second place).
### _Layout Strategies_
To better understand how participants used the display space, we grouped the final layout of each trial. We found participants had different strategies in Desktop and VR, but used a similar layout between Gesture and WIMP. All final layouts are included in the supplementary material.
On Desktop, we identified four different layout strategies (Fig. 8): **Grid** (five). Participants placed their data tables in a regular layout, closely forming a grid shape. **Piling** (one). The participant created a few piles of data tables. **Grid+Piling** (five). The participants created a regular layout with some piles of data tables. **No obvious pattern** (nine). Roughly half of the participants did not demonstrate a clear layout pattern and created an "organic" layout.
In VR, 19 out of 20 participants almost did not move the five initial tables or the target table. Participants seemed to treat these given tables in as _strong anchors_ and were reluctant to manipulate them. This observation might reflect participants' intention to keep data provenance. Thus, VR might increase the awareness of provenance and allow the track of provenance to be more manageable. Regarding the final layouts, we found three different strategies (Fig. 9). 17 participants performed data transformation behind themselves: **Behind-cluster** (16). Participants created a few clusters at the back of their initial orientation. **Behind-piling** (one). Participants piled data tables in one cluster and formed a roughly vertical line at the back of their initial orientation. **Front** (three). Participants used the space in front of their initial orientation, and to avoid occlusions, they had to delete data tables more frequently. In all strategies, we observed that participants preferred to stay in the center and place data tables close to them to reduce physical movements.
Due to limited data points in some strategies, we could not identify significant performance differences between different strategies, except that the _Front_ strategy required more _deletion and total operations_ and had fewer _tables left_ than other strategies utilizing the entire 360\({}^{\circ}\) circular space in VR.
### _Qualitative Feedback_
We performed qualitative coding to extract common themes from user feedback on each condition. For each condition, we listed the top three mentioned codes and those mentioned more than five times by the participants (frequency shown in parenthesis). We further highlighted the top codes with other frequently associated codes. Finally, we summarized the overall insights across all conditions. The complete coding results can be found in the supplemental materials.
**Desktop+EWMP** was considered _straightforward_ (7), and _familiar_ (5). The downsides were _limited space_ (11), _hard to interact_ (6), and button interaction were _not intuitive_ (5). Specifically, limited space was the primary concern,
Fig. 6: Subjective ratings on mental demand, physical demand, learnability, engagement, and usability by task. Solid lines indicate statistical significance with \(p<0.05\), and dashed lines indicate \(p<0.1\).
Fig. 7: User ranking of overall user experience for each condition. Solid lines indicate significant differences with \(p<0.05\).
as shown by its association with several codes, including hard-to-interact (3) and the constant shift in focus from the working window to the buttons created task disruptions when executing operations (2).
**Desktop+****C Gesture** was considered _intuitive (15)_, _better than button_ (8), and _easy to use_ (8). Because of the intuitive feeling, it was considered better than buttons and easy to use by five users, good for merging by four users, and good for sorting by two users. The primary issues included _limited space_ (9), _hard to interact_ (8), _discoverability issues_ (6), and _functionality_ (5). Similar to the Desktop+WIMP, the limited space was the predominant concern linked to other codes. This included instances where gestures were occasionally challenging to execute due to the limited space, which resulted in creating task disruptions (3), particularly when executing certain functions (3) that demanded more space, such as extraction tasks.
**Cvr+****Wimp** was praised with _large space_ (12), _better than desktop_ (9), _grabbedemoved_ navigation was _intuitive_ (7), _easy to use_ (6), and _flexible_ (6). Specifically, the major benefit of the large space was associated with flexibility by five users, easy to use by two users, and good understanding by two users. Among comments on better than desktop, people referred to it as easy to use (5), easy to organize (2), and accessible (2). Conversely, the issues related to _functionality_ (13). In addition, one participant pointed out that using a pointer to select two tables from a dropdown menu caused greater disruption when trying to execute merge operations compared to the Desktop+WIMP setup. The concerns arose regarding the constant need to hold buttons on the left hand, causing task disruptions (6), along with another reason similar to those encountered with the Desktop+WIMP. People had diverse opinions on the functionality, such as hard-to-select (3), heavy headset (2), and other technical issues like resolutions.
**Cvr+****C Gesture** was found to be _intuitive_ (17), _better than button_ (10), _easy to use_ (7), _straightforward_ (5), and _flexible_ (5). The majority (17) found it intuitive and subsequently associated it with better than buttons (7), easy to use (6), and flexible (4). Three also cited more accessible, and the other three cited large spaces. Furthermore, people considered it better than buttons mainly because it is easy to use (4), straightforward (4), accessible (3), and promotes good understanding (2). Participants particularly spoke highly about gesture interactions and grabbemove navigation. On the other hand, the major concern was _functionality_ (12), mostly with resolution and technical issues (7), but no specific functionality issues were found to perform operations in VR+WIMP.
**Summary.** All conditions were considered _easy to use_. In Gesture conditions (Desktop+Gesture and VR+Gesture), the majority cited _intuitive_ (15 and 17 times, respectively) and considered them _better than button_ (8 and 10 times). VR conditions (VR+WIMP and VR+Gesture) were considered _flexible_ (6 and 5 times) and _better than desktop_ by several users (9 and 4 times). Interestingly, Desktop+WIMP and VR+Gesture each were cited _straightforward_ by a handful of users (7 and 5 times), indicating that users have expected that WIMP is natural for desktop and 3D gestures for VR environment. More discussions were presented in Sec. 5.
On the other hand, Desktop conditions hinder interaction due to _limited space_ (11 and 9 times) and _hard to interact_ (6 and 8 times). With VR conditions, the majority have commented on _functionality_ (13 and 12 times) due to the unfamiliarity with data transformation in VR, such as technical issues and feature suggestions. In VR+WIMP, people found it _hard to interact_ and _select_ (5 times), leading to creating _disturbed_ (6 times).
## 6 Key Findings and Discussions
**Performing data transformation in**Cvr** and on a**Desktop **had similar time performance.** Previous studies produced mixed results in comparing VR and Desktop. In immersive analytics, for completion time, VR was found to be primarily beneficial for visualizing spatial data and 3D shape perception [19, 47], and to be slower than Desktop due to more required movements (see our _time_ hypothesis in Sec. 4.7). Our tested task did not involve perceiving 3D or spatial visualization, so we expected Desktop to be faster than VR. Surprisingly, our four tested conditions had similar time performance. We believe the empirical results of VR being slower than Desktop in performing interactions still applied to our case, which is partially reflected by the fact that VR conditions were considered significantly more physically demanding than Desktop conditions. On the other hand, unlike the previously tested tasks, our tested task
Figure 8: Four layout strategies used by our participants on Desktop.
Figure 9: Three layout strategies used by our participants in **Cvr**. The red dot indicates the position of the participant.
not only involved low-level interactions but also required participants to actively make sense of the data to come up with a sequence of interactions to complete the task. We could reasonably expect participants to put a significant amount of effort into thinking and planning strategies for a complicated task like ours. Based on this assumption and our results, we suggest that VR allows participants to complete the high-level sense-making components faster than Desktop, which supplements its extra time costs in performing low-level interactions. Below, we elaborate more on the high-level sense-making process from the _provenance_ and _strategic thinking_ perspectives.
**OVR showed the potential to provide improved provenance over Desktop**. Provenance is about the lineage and processing history of data. It provides a detailed record of the origins of the data, how it has been processed or modified, and where and when it transformed over time [74]. The ability to track provenance can support many applications, like data quality control, data auditing, and replication [75]. In our study, we used the _recall score_, _the number of performed delete operations_, and _the number of data tables left_ as indicators of certain aspects of provenance. The _recall score_ measures the ability to recall the operation history, reflecting when the data was transformed. The latter two metrics offer objective measures of the amount of kept information.
In addition to the previously confirmed memorability advantage in VR (see our provenance hypothesis in Sec. 4.7), we believe the large display space and embodied navigation in VR also helped participants keep track of provenance. _First_, participants had more space to place tables in VR, whereas on Desktop, they were more likely to delete tables to free display space to reduce clutter. Our results of fewer table deletions and more kept tables in VR than on Desktop are well aligned with provenance tracking ability. The data tables, kept persistently visible, provided a reliable reference for participants to confirm the success of their current operations. Moreover, they were a handy tool for participants to return to whenever they made mistakes. This was likely due to the large display space provided within the VR setup. Particularly, nine participants explicitly complained about the display space on the Desktop; for example, _"I have a very limited workspace (on the desktop), and cannot see all the tables at once, which really hurts my performance (P15)."_ This limited space on the Desktop lets participants continually delete the data tables, which leads to a loss of opportunities to keep track of the processing history. _Second_, we consider physical navigation more efficient than virtual navigation. Previous research identified the benefits of using physical movements to navigate large displays over virtual navigation (i.e., zooming) [76, 77]. Our results partially re-confirmed their findings in VR. 12 participants' comments reconciled with our assumption, like, _"I have all data tables in front of me without zooming in & out, and I can focus on the task (P12)."_ We found that the participants prefer physical navigation, as it offers a more intuitive and faster method for altering the perspective view of their workspace for accessing all kept information. One participant also specifically described the use of large display space to improve spatial memory and embodied navigation in VR: _"I like the point that I put the data table behind me so that I can come back if I make mistakes (P7)."_ This feedback aligns well with the improved provenance in VR, which enhances data quality through auditing without manually documenting every detail of each step. However, provenance is still a relatively abstract concept, and our measurements also only captured certain aspects of it. More effort is needed to formally define and quantify provenance in data transformation.
**OVR demonstrated preliminary evidence of promoting strategic thinking.** Participants needed to continuously develop the next steps in our tested task and evaluate their progress. We found VR required fewer operations to complete transformation tasks than Desktop, which points to the potential advantages of VR in supporting strategic planning. With the improved provenance, it is likely that VR users can better track the progress and take more efficient steps. Another potential reason is the flexible arrangement of VR tables that support easier visual comparison. We observed _all 20_ participants in VR _grabbed@moved_ their working table under the target table for comparison. Meanwhile, participants rarely performed a similar interaction on the Desktop. We believe such frequent comparisons enabled continuous progress evaluation and promoted strategic thinking in VR. We anticipate the embodied table management (i.e., the grabbed&moved) and large display space in VR provide a natural mechanism to support such an approach. In contrast, the need for repetitive zooming in/out on Desktop made participants reluctant to perform the same strategy. Additionally, participants also commented on their experiences along this line, for instance, _"I have many more places to put the data table (in VR). Organizing the table was way more manageable (P14)." "The interaction in VR was way better than the monitor version; the task became way more accessible, and organizing was easy (P3)."_
**WWimp tended to be more suitable for Desktop than Gesture.** Unlike previous studies [14, 15], we did not find positive effects on the collected measures of using Gesture over WIMP on a Desktop. We believe the task and gesture complexities could be the main reasons for our contradictory results. We tested a more complicated high-level task (average completion time was 10+ minutes per trial) than the previous low-level tasks (average completion time was around one minute per trial). Performing a single interaction using Gesture could outperform WIMP in our study, but the complexity of Gesture might introduce other overheads that decreased its performance significantly in our tested task. The subjective ratings partially confirmed our assumption: Desktop+Gesture was considered harder to learn than Desktop+WIMP. Participants had to remember more gestures (eight) than in the previous studies (three in [14] and five in [15]), which might introduce a high working memory and affect their performance, especially in a more complicated task. The fact that participants struggled with recalling their action history in Desktop+Gesture resonated with our conclusion, with some representative comments like: _"The interaction was confusing (in Desktop+Gesture). I think I would like[ly] have more errors than other conditions."_.
**No noticeable difference in time, efficiency, and provenance measures between \(\boxplus\)Wimp and \(\boxplus\)Gesture in \(\bigtriangledown\)VR.** Similar to Desktop, the identified benefits of body motion [70] might also be affected by the task and gesture complexities in VR. On the other hand, we did not find any significant difference between VR+WIMP and VR+Gesture,
indicating the overhead introduced by learning and remembering Gesture in VR could be negligible due to its intuitiveness. Participants ranked VR+Gesture with the best overall experience, and some commented on the benefits of using Gesture in VR explicitly, like _"Embodied interaction helps me a lot in understanding the data story (P2)"_ and _"Using both hands was very helpful in performing the task (P4)."_ More importantly, the embodied table management (i.e., the grabbed&moved) and physical navigation were provided in all VR conditions. Compared to interactions for triggering the operations, these two features might contribute a heavier weight in the final performance, making both VR conditions have similar performances. Furthermore, the perceived intuitiveness of the gestures and embodied interactions was reflected in the feedback we collected. Comparably, none of the participants explicitly mentioned Desktop+WIMP to be intuitive.
**Various layout strategies exist in both environments.** The "infinite" space in both the desktop and VR conditions allowed participants to freely lay out content in their workspaces. We found two distinct dimensions in their strategies: _space usage_ and _view management methods_. For space usage, on Desktop, most participants only used space slightly larger than the initial setup (Fig. 8) with 6 to 9 data tables and adjusted the view by zooming. On the other hand, the majority of participants in VR utilized the entire 360\({}^{\circ}\) circular space around them to fit more data tables (12 to 15) (Fig. 9). In VR, previous studies observed using semi-circular layouts were more frequent [21] and beneficial over 360\({}^{\circ}\) layouts [60, 78] for performing low-level tasks (e.g., search and comparison). However, for high-level tasks, our observations aligned well with some other works, such as sensemaking [79] and visual exploration [80], where people preferred using the entire fully circular space. We anticipated that placing content within the field of view is beneficial for time-constrained low-level tasks, while fully utilizing the display space is essential for the larger amount of content generated in high-level tasks. In terms of view management methods, the layout on the Desktop was considerably more organized than in VR, showing that precise view placement is easier with a mouse than with a VR controller. However, user behaviors were similar across both Desktop and VR. Most participants intended to create occlusion-free clusters (e.g., a grid layout in Fig. 9b), yet some participants created piled clusters (e.g., a piling layout in Fig. 9a), which was also observed in a recent AR study [81]. We also noticed the use of space to record the operation history from some participants (e.g., moving older tables to the top), which has been identified on large 2D displays [46].
## 7 Generalizations, Limitations and Future Work
**Applications.** Our study provides empirical evidence of the benefits of using VR for data transformation. We believe the large display space, spatial memory, and embodied navigation offered by VR are the primary factors for its improved performance. Since these are general characteristics of VR, we expect our results can be generalized to similar tasks that require organizing a large number of entities in space for sense-making. Some preliminary research explored the use of VR in such applications, like multiple view visualization [82, 26], multi-scale geographic navigation [21], and large-scale document analysis [81, 24, 65]. However, as highlighted by Ens et al. [19], there is a lack of fundamental study for comparing immersive versus non-immersive platforms for analytics purposes. Our study provides a preliminary assessment for data transformation under this context. Future studies may test other applications and delineate the benefits of immersive environments for a broader range of analytical tasks.
**Target users and functionality.** We focused on investigating intuitive data transformation tools for non-technical data workers and intentionally excluded the programming requirements in our study. However, programming-like operations are essential for more experienced users and more complicated tasks. The Gesture has limitations in how much information and intention it can express. To extend this work in this direction, future work needs to integrate a programming interface into the prototypes, like many UI-based commercial tools (e.g., Tableau and Excel) and research prototypes [83, 84]. To achieve this in VR, we need to consider the most appropriate text input method [85]. Alternatively, we may also consider using natural language as a user interface with higher expressiveness [86, 87]. On the other hand, we also see opportunities to develop new interaction techniques (e.g., embodied gestures) to lower the learning curve of complicated data transformation operations for non-technical data workers.
**Techniques.** We believe our designed gestures were intuitive and natural to their represented data transformation operations. However, the Gesture did not contribute to the participants' performance as we expected. We believe Gesture can still outperform WIMP in low-level tasks when the user knows the precise operation to perform [15, 16, 41]. Specifically, Nandi et al. found gestures to have better performance and discoverability than WIMP for a single data transformation operation [16]. However, discoverability might become a more severe issue in high-level tasks where the user must continuously develop the next steps. Although during the training phase, the participants were able to complete training tasks with a smaller dataset, it was still reasonable for participants to spend extra time and effort to recall the gestures in the longer study tasks. Future work may look at further improving the discoverability and learnability of gestures [88, 89, 90] or conduct a longitudinal study to reduce this effect.
Additionally, there are a few other directions to extending our work. Although we improved the selection experience in VR, precisely selecting and manipulating objects can still be challenging. Along with improving mid-air interactions, one may consider other input devices, like using a mouse in VR [91, 92] and other tangible proxies [93, 94]. Besides, we consider visualizing data processing history could further improve provenance, like a data flow system [58, 95].
**Computing environments.** We tested Desktop and VR, as Desktop is the most widely used, and VR is emerging. Testing other computing environments with different input modalities could bring more insights into interaction designs, like a tablet [15, 16, 41, 44] and display wall [96]. Tablets offer multitouch capabilities and are considered
more natural for performing gestures than Desktop. However, tablets are usually limited in size, and users may suffer from "fat finger" issues in precise interactions. A larger touch screen may alleviate the issue, but not easily accessible to many people. Nevertheless, testing different touching devices is an exciting future direction. Moreover, specific to our study, our gestures primarily only involve click and drag, which could be easily completed with a mouse, as demonstrated in previous work [13, 14]. Increasing the display space on the Desktop is likely to increase the performance (like using multiple monitors or a display wall), as one of the most notable benefits of VR is its large display space. The effect of display size has been previously explored [97]. Future work can follow the same methodology to study the effect of display size on the data transformation workspace. Furthermore, we found that our participants might not be familiar with our zoomable and panable interface on the Desktop. However, there's an increasing trend of no-code data science tools designed with zoomable and panable interfaces, as we discussed in Section 4.1. It gradually becomes an essential design for non-technical data workers. Despite this, we also see other opportunities for managing multiple data tables, such as using tabs.
**Scalability and ecological validity.** We tested relatively small data in our study (see Sec. 4.4), as we want to control the study duration. Meanwhile, we had an interesting comment from one participant that they did not actively check the rows of data tables but focused on the columns more. This comment aligns well with the design of the GestureDB system [16, 18], where they present the columns all the time and only present rows on demand or for confirmation purposes. Larger data should be tested in future studies, especially data with more columns. Meanwhile, it is challenging to display all columns for large data sets (e.g., with 20 columns). As such, in our tested data, we intentionally have columns that cannot fit into the table view. The width of each column was determined by its longest data entry, causing the total width of all columns to exceed the table's width. Hence, even with a limited number of columns, columns were not visible at once, and scrolling was necessary to finish the task. From this perspective, we anticipate that, with increasing data size, participants need to scroll more, which will make the task more challenging for all conditions. Precision is often required in scrolling, so increasing the data size may affect VR more than the Desktop. On the other hand, it is possible to leverage the large display space in VR to alleviate this issue: we can use a large space to ensure all columns are visible without the need to scroll in VR. The user then can physically move in the space to navigate the large data tables. Such physical navigation has been found more effective than virtual navigation like panning or scrolling [77]. However, additional interaction supports are likely to be needed (e.g., shrinking the size of the table when it is grabbed) and should be tested in future work. Our study did not include commercial data transformation tools on the desktop like Tableau Pre Builder and Trifacta. Those tools include many more features besides the basic data transformation operations, which may distract us from studying our intended main effects (e.g., WIMP vs. Gesture and Desktop vs. VR). Adapting necessary features from those commercial desktop tools to VR is a promising next step, along with an ecological investigation of a complete system. Beyond the size of the datasets used, conducting user studies with a larger number of participants also should be tested. We expect this to lower the variance in several measures and further strengthen our hypothesis.
**VR vs. AR.** Our study was conducted in VR, as the VR hardware is more mature than AR headsets (e.g., higher resolution and larger field of view). On the positive side, our designed gestures and implemented prototypes can be easily migrated from VR to AR thanks to the developing ecosystem, i.e., Unity3D. We also believe our identified benefits in VR can be generalized to AR or hybrid systems of AR and Desktop [98], because the same as VR, AR also provides embodied interaction and large display space. With AR hardware improving, it is important to investigate AR's unique challenges and benefits. For example, observing the physical environment can improve people's spatial memory [81] and seamlessly switch between different computing environments [98]. We will systematically analyze and evaluate data transformation in AR in a future study once better AR headsets are released.
## 8 Conclusion
In this paper, we presented our prototypes to enable embedded interactions on the Desktop and embodied interactions in VR for data transformation. We conducted a controlled user study to systematically evaluate the effect of the computing environments (Desktop vs. VR) and interaction methods (WIMP vs. Gesture). We found initial evidences showing the benefits of using VR for data transformation: VR had the potential to provide improved provenance over Desktop and demonstrated preliminary evidence of promoting strategic thinking. VR also had a similar time performance compared to Desktop. Additionally, participants found VR to be more engaging, and VR+Gesture provided the best overall experience. Considering the iterative nature of data science, we foresee strong initiatives to combine immersive data transformation and visualization to enable a more complete _Immersive Analytics_ workflow, reducing the overhead of switching between different computing environments. On the other hand, we did not find a better performance of Gesture over WIMP. We still believe there is potential to improve Gestures, for example, by providing long-term training and real-time memory assistance. In summary, our results provide preliminary evidence that the large display space, spatial memory, and embodied navigation in VR are beneficial for high-level data transformation tasks.
## Acknowledgments
This work was supported in part by NSF I/UCRC CNS-1822080 via the NSF Center for Space, High-performance, and Resilient Computing (SHREC) and NSF grant III-2107328.
|
2309.05414 | Carleson embeddings and pointwise multipliers between Hardy-Orlicz
spaces and Bergman-Orlicz spaces of the upper half-plane | In this article, we give a general characterization of Carleson measures
involving concave or convex growth functions. We use this characterization to
establish continuous injections and also to characterize the set of pointwise
multipliers between Hardy-Orlicz spaces and Bergman-Orlicz spaces. | J. M Tanoh Dje, Benoit F. Sehba | 2023-09-11T12:27:16Z | http://arxiv.org/abs/2309.05414v1 | Carleson embeddings and pointwise multipliers between Hardy-Orlicz spaces and Bergman-Orlicz spaces of the upper half-plane
###### Abstract.
In this article, we give a general characterization of Carleson measures involving concave or convex growth functions. We use this characterization to establish continuous injections and also to characterize the set of pointwise multipliers between Hardy-Orlicz spaces and Bergman-Orlicz spaces.
## 1. Introduction.
Let \(\mathbb{D}\) be the unit disc of \(\mathbb{C}\). For \(\alpha>-1\), and \(0<p<\infty\), the Bergman space \(A^{p}_{\alpha}(\mathbb{D})\) consists of all holomorphic functions \(f\) on \(\mathbb{D}\) such that
\[\|f\|_{p,\alpha}^{p}:=\int\limits_{\mathbb{D}}|f(z)|^{p}(1-|z|^{2})^{\alpha}d \nu(z)<\infty. \tag{1.1}\]
Here, \(d\nu(z)\) is the normalized area measure on \(\mathbb{D}\).
When \(\alpha\longrightarrow-1\), the corresponding space \(A^{p}_{-1}(\mathbb{D})\) is the Hardy space \(H^{p}(\mathbb{D})\) which consists of all holomorphic functions \(f\) on \(\mathbb{D}\) such that
\[\|f\|_{p}^{p}:=\|f\|_{p,-1}^{p}:=\sup\limits_{0\leq r<1}\int\limits_{0}^{2\pi }|f(re^{i\theta})|^{p}d\theta<\infty. \tag{1.2}\]
One of the most studied questions on holomorphic function spaces and their operators is the notion of Carleson meausures for these spaces. In the unit disc, this is about characterizing all positive measures \(\mu\) on \(\mathbb{D}\) such that for some constant \(C>0\), and for any \(f\in A^{p}_{\alpha}(\mathbb{D})\), \(\alpha\geq-1\),
\[\int\limits_{\mathbb{D}}|f(z)|^{q}d\mu(z)\leq C\|f\|_{p,\alpha}^{q}. \tag{1.3}\]
This problem was first solved by L. Carleson in [3, 4] for Hardy spaces in the case \(p=q\). Extension of this result for \(p<q\) was obtained by P. Duren in [14] The case with loss \(p<>q\) was solved by I. V. Videnskii in [36]. The Corresponding results for Bergman spaces of the unit disc and the unit ball were obtained by W. Hastings and D. Luecking, J. A. Cima and W. Wogen in [8, 15, 19, 20, 21, 22]. For other contributions, we also refer the reader to the following [16, 25, 35].
Our interest in this paper is for the inequality (1.3) in the case where the power functions \(t^{q}\) and \(t^{p}\) are replaced by some continuous increasing and onto functions on \([0,\infty)\), \(\Phi_{2}\) and \(\Phi_{1}\) respectively. In the unit ball of \(\mathbb{C}^{n}\), this problem was solved in the case where \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is nondecreasing for Hardy and Bergman spaces in the following and the references therein [5, 6, 29]. The case where \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is nonincreasing was handled in [28] for the Bergman-Orlicz spaces.
In this paper, our setting is the upper-half plane \(\mathbb{C}_{+}\) and we still consider problem (1.3) for growth functions \(\Phi_{1}\) and \(\Phi_{2}\). In [12], we considered this question for the case where \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is nondecreasing both functions being convex growth function. We are presenting here a more general result that encompasses the case where both \(\Phi_{1}\) and \(\Phi_{2}\) are concave, still with \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) nondecreasing. We note that even in the case of power functions, the study of Carleson measures for Bergman spaces of the upper-half plane with exponent in \((0,1]\) seems to have never been considered before. Our work will fix this gap beyond power functions as we are dealing here with growth functions that generalize them.
## 2. Statement of main results.
In this paper, a continuous and nondecreasing function \(\Phi\) from \(\mathbb{R}_{+}\) onto itself is called a growth function. Observe that if \(\Phi\) is a growth function, then \(\Phi(0)=0\) and \(\lim_{t\to+\infty}\Phi(t)=+\infty\). If \(\Phi(t)>0\) for all \(t>0\) then \(\Phi\) is a homeomorphism of \(\mathbb{R}_{+}\) onto \(\mathbb{R}_{+}\).
Let \(p>0\) be a real and \(\Phi\) a growth function. We say that \(\Phi\) is of upper-type (resp. lower-type) \(p>0\) if there exists a constant \(C_{p}>0\) such that for all \(t\geq 1\) (resp. \(0<t\leq 1\)),
\[\Phi(st)\leq C_{p}t^{p}\Phi(s),\ \forall\ s>0. \tag{2.1}\]
We denote by \(\mathscr{U}^{p}\) (resp. \(\mathscr{L}_{p}\)) the set of all growth functions of upper-type \(p\geq 1\) (resp. lower-type \(0<p\leq 1\)) such that the function \(t\mapsto\frac{\Phi(t)}{t}\) is non decreasing (resp. non-increasing) on \(\mathbb{R}_{+}^{*}=\mathbb{R}_{+}\backslash\{0\}\). We put \(\mathscr{U}:=\bigcup_{p\geq 1}\mathscr{U}^{p}\) (resp. \(\mathscr{L}:=\bigcup_{0<p\leq 1}\mathscr{L}_{p}\)). Any element belongs \(\mathscr{L}\cup\mathscr{U}\) is a homeomorphism of \(\mathbb{R}_{+}\) onto \(\mathbb{R}_{+}\).
We say that two growth functions \(\Phi_{1}\) and \(\Phi_{2}\) are equivalent, if there exists a constant \(c>0\) such that
\[c^{-1}\Phi_{1}(c^{-1}t)\leq\Phi_{2}(t)\leq c\Phi_{1}(ct),\ \forall\ t>0. \tag{2.2}\]
We will assume in the sequel that any element of \(\mathscr{U}\) (resp. \(\mathscr{L}\)) belongs to \(\mathscr{C}^{1}(\mathbb{R}_{+})\) and is convex (resp. concave). Moreover,
\[\Phi^{\prime}(t)\approx\frac{\Phi(t)}{t},\ \forall\ t>0,\]
(see for example [2, 11, 12, 13, 30]).
Let \(I\) be an interval of nonzero length. The Carleson square associated with \(I\), \(Q_{I}\) is the subset of \(\mathbb{C}_{+}\) defined by
\[Q_{I}:=\left\{x+iy\in\mathbb{C}_{+}:x\in I\ \text{et}\ 0<y<|I|\right\}. \tag{2.3}\]
**Definition 2.1**.: _Let \(s>0\) be a real, \(\Phi\) a growth function and \(\mu\) a positive Borel measure on \(\mathbb{C}_{+}\). We say that \(\mu\) is a \((s,\Phi)-\)Carleson measure if there is a constant \(C>0\) such that for any interval \(I\) of nonzero length_
\[\mu(Q_{I})\leq\frac{C}{\Phi\left(\frac{1}{|I|^{s}}\right)}. \tag{2.4}\]
* When \(s=1\), we say that \(\mu\) is a \(\Phi-\)Carleson measure.
* When \(s=2+\alpha\), with \(\alpha>-1\), we say that \(\mu\) is a \((\alpha,\Phi)-\)Carleson measure.
Let \(\alpha>-1\) be a real and \(\Phi\) a growth function.
* The Hardy\(-\)Orlicz space on \(\mathbb{C}_{+}\), \(H^{\Phi}(\mathbb{C}_{+})\) is the set of analytic functions on \(\mathbb{C}_{+}\) which satisfy
\[\|F\|_{H^{\Phi}}^{lux}:=\sup_{y>0}\inf\left\{\lambda>0:\int\limits_{\mathbb{R }}\Phi\left(\frac{|F(x+iy)|}{\lambda}\right)dx\leq 1\right\}<\infty.\]
* The Bergman\(-\)Orlicz space on \(\mathbb{C}_{+}\), \(A_{\alpha}^{\Phi}(\mathbb{C}_{+})\) is the set of analytic functions on \(\mathbb{C}_{+}\) which satisfy
\[\|F\|_{A_{\alpha}^{\Phi}}^{lux}:=\inf\left\{\lambda>0:\int\limits_{\mathbb{C} _{+}}\Phi\left(\frac{|F(x+iy)|}{\lambda}\right)dV_{\alpha}(x+iy)\leq 1\right\}<\infty,\]
where \(dV_{\alpha}(x+iy):=y^{\alpha}dxdy\).
If \(\Phi\) is convex and \(\Phi(t)>0\) for all \(t>0\) then \(\left(H^{\Phi}(\mathbb{C}_{+}),\|.\|_{H^{\Phi}}^{lux}\right)\) and \((A_{\alpha}^{\Phi}(\mathbb{C}_{+}),\|.\|_{A_{\alpha}^{\Phi}}^{lux})\) are Banach spaces (see. [12, 33, 34]). The spaces \(H^{\Phi}(\mathbb{C}_{+})\) and \(A_{\alpha}^{\Phi}(\mathbb{C}_{+})\) generalizes respectively the Hardy space \(H^{p}(\mathbb{C}_{+})\) and the Bergman space \(A_{\alpha}^{p}(\mathbb{C}_{+})\) for \(0<p<\infty\).
Our first main result is the following which extend [12, Theorem 2.2] to Hardy-Orlicz spaces defined with concave growth functions.
**Theorem 2.2**.: _Let \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\) and \(\mu\) a positive Borel measure on \(\mathbb{C}_{+}\). If the function \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\) then the following assertions are equivalent._
1. \(\mu\) _is a_ \(\Phi_{2}\circ\Phi_{1}^{-1}-\)_Carleson measure._
2. _There exist some constants_ \(\rho\in\{1;a_{\Phi_{1}}\}\) _and_ \(C_{1}>0\) _such that for all_ \(z=x+iy\in\mathbb{C}_{+}\)__ (2.5) \[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y}\right) \frac{y^{2/\rho}}{|\omega-\overline{z}|^{2/\rho}}\right)d\mu(\omega)\leq C_{1}.\]
3. _There exists a constant_ \(C_{2}>0\) _such that for all_ \(0\not\equiv F\in H^{\Phi_{1}}(\mathbb{C}_{+})\)_,_ (2.6) \[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|F(z)|}{\|F\|_{H^{\Phi_{1}}}^{ law}}\right)d\mu(z)\leq C_{2}.\]
4. _There exists a constant_ \(C_{3}>0\) _such that for all_ \(F\in H^{\Phi_{1}}(\mathbb{C}_{+})\)__ (2.7) \[\sup\limits_{\lambda>0}\Phi_{2}(\lambda)\mu\left(\{z\in\mathbb{C}_{+}:|F(z)|> \lambda\|F\|_{H^{\Phi_{1}}}^{lux}\}\right)\leq C_{3}.\]
As consequence, we have the following.
**Corollary 2.3**.: _Let \(\alpha>-1\) and \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\) such that \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\). The Hardy-Orlicz space \(H^{\Phi_{1}}(\mathbb{C}_{+})\) embeds continuously into the Bergman-Orlicz space \(A_{\alpha}^{\Phi_{2}}(\mathbb{C}_{+})\) if and only if there exists a constant \(C>0\) such that for all \(t>0\),_
\[\Phi_{1}^{-1}(t)\leq\Phi_{2}^{-1}(Ct^{2+\alpha}). \tag{2.8}\]
Our second main result generalizes [12, Theorem 2.4].
**Theorem 2.4**.: _Let \(\alpha>-1\), \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\) and \(\mu\) a positive Borel measure on \(\mathbb{C}_{+}\). If the function \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\) then the following assertions are equivalent._
1. \(\mu\) _is a_ \((\alpha,\Phi_{2}\circ\Phi_{1}^{-1})-\)_Carleson measure._
2. _There exist some constants_ \(\rho\in\{1;a_{\Phi_{1}}\}\) _and_ \(C_{1}>0\) _such that for all_ \(z=x+iy\in\mathbb{C}_{+}\)__ (2.9) \[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y^{2+ \alpha}}\right)\frac{y^{(4+2\alpha)/\rho}}{|\omega-\overline{z}|^{(4+2\alpha)/ \rho}}\right)d\mu(\omega)\leq C_{1}.\]
3. _There exists a constant_ \(C_{2}>0\) _such that for all_ \(0\not\equiv F\in A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+})\)_,_ (2.10) \[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|F(z)|}{\|F\|_{A_{\alpha}^{ \Phi_{1}}}^{lux}}\right)d\mu(z)\leq C_{2}.\]
4. _There exists a constant_ \(C_{3}>0\) _such that for all_ \(F\in A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+})\)__ (2.11) \[\sup\limits_{\lambda>0}\Phi_{2}(\lambda)\mu\left(\{z\in\mathbb{C}_{+}:|F(z)|> \lambda\|F\|_{A_{\alpha}^{\Phi_{1}}}^{lux}\}\right)\leq C_{3}.\]
The following embedding result follows from the above.
**Corollary 2.5**.: _Let \(\alpha,\beta>-1\) and \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\) such that \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\). The Bergman-Orlicz space \(A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+})\) embeds continuously into the Bergman-Orlicz space \(A_{\beta}^{\Phi_{2}}(\mathbb{C}_{+})\) if and only if there exists a constant \(C>0\) such that for all \(t>0\),_
\[\Phi_{1}^{-1}(t^{2+\alpha})\leq\Phi_{2}^{-1}(Ct^{2+\beta}). \tag{2.12}\]
Let \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) a growth function. The lower and the upper indices of \(\Phi\) are respectively defined by
\[a_{\Phi}:=\inf\limits_{t>0}\frac{t\Phi^{\prime}(t)}{\Phi(t)}\qquad\text{ and }\qquad b_{\Phi}:=\sup\limits_{t>0}\frac{t\Phi^{\prime}(t)}{\Phi(t)}.\]
Let \(p,q>0\) and \(\Phi\) a growth function. We say that \(\Phi\) belongs to \(\widetilde{\mathscr{U}}^{q}\) (resp. \(\widetilde{\mathscr{L}_{p}}\)) if the following assertions are satisfied
1. \(\Phi\in\mathscr{U}^{q}\) (resp. \(\Phi\in\mathscr{L}_{p}\)).
2. there exists a constant \(C_{1}>0\) such that for all \(s,t>0\), (2.13) \[\Phi(st)\leq C_{1}\Phi(s)\Phi(t).\]
3. there exists a constant \(C_{2}>0\) such that for all \(s,t\geq 1\) (2.14) \[\Phi\left(\frac{s}{t}\right)\leq C_{2}\frac{\Phi(s)}{t^{q}}\] resp. (2.15) \[\Phi\left(\frac{s}{t}\right)\leq C_{2}\frac{s^{p}}{\Phi(t)}.\]
We put \(\widetilde{\mathscr{U}}:=\bigcup_{q\geq 1}\widetilde{\mathscr{U}}^{q}\) (resp. \(\widetilde{\mathscr{L}}:=\bigcup_{0<p\leq 1}\widetilde{\mathscr{Z}_{p}}\)).
Let \(\omega:\mathbb{R}_{+}^{*}\longrightarrow\mathbb{R}_{+}^{*}\) be a function. An analytic function \(F\) in \(\mathbb{C}_{+}\) is said to be in \(H_{\omega}^{\infty}(\mathbb{C}_{+})\) if
\[\|F\|_{H_{\omega}^{\infty}}:=\sup_{z\in\mathbb{C}_{+}}\frac{|f(z)|}{\omega( \operatorname{Im}(z))}<\infty. \tag{2.16}\]
If \(\omega\) is continuous then \((H_{\omega}^{\infty}(\mathbb{C}_{+}),\|.\|_{H^{\infty}})\) is a Banach space.
Let \(X\) and \(Y\) be two analytic function spaces which are metric spaces, with respective metrics \(d_{X}\) and \(d_{Y}\). An analytic function \(g\) is said to be a multiplier from \(X\) to \(Y\), if there exists a constant \(C>0\) such that for any \(f\in X\),
\[d_{Y}(fg,0)\leq Cd_{X}(f,0). \tag{2.17}\]
We denote by \(\mathcal{M}(X,Y)\) the set of multipliers from \(X\) to \(Y\).
The following is a characterization of pointwise multipliers from an Hardy-Orlicz space to a Bergman-Orlicz space. It is an extension of [12, Theorem 2.7].
**Theorem 2.6**.: _Let \(\Phi_{1}\in\mathscr{L}\cup\mathscr{U}\) and \(\Phi_{2}\in\widetilde{\mathscr{L}}\cup\widetilde{\mathscr{U}}\) such that the function \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\). Let \(\alpha>-1\) and put_
\[\omega(t)=\frac{\Phi_{2}^{-1}\left(\frac{1}{t^{2+\alpha}}\right)}{\Phi_{1}^{- 1}\left(\frac{1}{t}\right)},\ \forall\ t>0.\]
_The following assertions are satisfied._
1. _If_ \(0<a_{\Phi_{1}}\leq b_{\Phi_{1}}<a_{\Phi_{2}}\leq b_{\Phi_{2}}<\infty\) _then_ \[\mathcal{M}(H^{\Phi_{1}}(\mathbb{C}_{+}),A_{\alpha}^{\Phi_{2}}(\mathbb{C}_{+}) )=H_{\omega}^{\infty}(\mathbb{C}_{+}).\]
2. _If_ \(\omega\approx 1\) _then_ \[\mathcal{M}(H^{\Phi_{1}}(\mathbb{C}_{+}),A_{\alpha}^{\Phi_{2}}(\mathbb{C}_{+}) )=H^{\infty}(\mathbb{C}_{+}).\]
3. _If_ \(\omega\) _is decreasing and_ \(\lim_{t\to 0}\omega(t)=0\) _then_ \[\mathcal{M}(H^{\Phi_{1}}(\mathbb{C}_{+}),A_{\alpha}^{\Phi_{2}}(\mathbb{C}_{+}) )=\{0\}.\]
The following is a characterization of pointwise multipliers Bergman-Orlicz spaces. It is an extension of [12, Theorem 2.8].
**Theorem 2.7**.: _Let \(\Phi_{1}\in\mathscr{L}\cup\mathscr{U}\) and \(\Phi_{2}\in\widetilde{\mathscr{L}}\cup\widetilde{\mathscr{U}}\) such that the function \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\). Let \(\alpha,\beta>-1\) and put_
\[\omega(t)=\frac{\Phi_{2}^{-1}\left(\frac{1}{t^{2+\alpha}}\right)}{\Phi_{1}^{- 1}\left(\frac{1}{t^{2+\alpha}}\right)},\ \forall\ t>0.\]
_The following assertions are satisfied._
1. _If_ \(0<a_{\Phi_{1}}\leq b_{\Phi_{1}}<a_{\Phi_{2}}\leq b_{\Phi_{2}}<\infty\) _then_ \[\mathcal{M}\left(A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+}),A_{\beta}^{\Phi_{2}}( \mathbb{C}_{+})\right)=H_{\omega}^{\infty}(\mathbb{C}_{+}).\]
2. _If_ \(\omega\approx 1\) _then_ \[\mathcal{M}\left(A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+}),A_{\beta}^{\Phi_{2}}( \mathbb{C}_{+})\right)=H^{\infty}(\mathbb{C}_{+}).\]
3. _If_ \(\omega\) _is decreasing and_ \(\lim_{t\to 0}\omega(t)=0\) _then_ \[\mathcal{M}\left(A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+}),A_{\beta}^{\Phi_{2}}( \mathbb{C}_{+})\right)=\{0\}.\]
The paper is organized as folllows. In Section 3, we provide some further definitions and useful results on growth functions, Hardy-Orlicz and Bergman-Orlicz spaces. Indeed, there is no actual reference for a full study of our spaces in the literature, consequently, we are proving several related results needed in our study. In Section 4, we prove some characterizations of Carleson measures, in particular, a general result that encompasses assertions (ii) in both Theorem 2.2 and Theorem 2.4. Our main results are proved in Section 5.
## 3. Some definitions and useful properties
We present in this section some useful results needed in our presentation.
### Some properties of growth functions
Let \(\Phi\) be a growth function. We say that \(\Phi\) satisfies the \(\Delta_{2}-\)condition (or \(\Phi\in\Delta_{2}\)) if there exists a constant \(K>1\) such that
\[\Phi(2t)\leq K\Phi(t),\ \forall\ t>0. \tag{3.1}\]
It is obvious that any growth function \(\Phi\in\mathscr{L}\cup\mathscr{U}\) satisfies the \(\Delta_{2}-\)condition.
Let \(\Phi\) be a convex growth function. The complementary function of \(\Phi\) is the function \(\Psi\) defined by
\[\Psi(s)=\sup_{t\geq 0}\{st-\Phi(t)\},\ \forall\ s\geq 0.\]
Let \(\Phi\) be a convex growth function. We say that \(\Phi\) satisfies \(\nabla_{2}-\)condition (or \(\Phi\in\nabla_{2}\)) if \(\Phi\) and its complementary function both satisfy \(\Delta_{2}-\)condition.
Let \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) a growth function. The following assertions are satisfied.
1. If \(\Phi\in\mathscr{L}\cup\mathscr{U}\) then \(0<a_{\Phi}\leq b_{\Phi}<\infty\).
2. \(\Phi\in\mathscr{U}\) if and only if \(1\leq a_{\Phi}\leq b_{\Phi}<\infty\). Moreover, \(\Phi\in\mathscr{U}\cap\nabla_{2}\) if and only if \(1<a_{\Phi}\leq b_{\Phi}<\infty\), (see. [12]).
3. If \(0<a_{\Phi}\leq b_{\Phi}<\infty\) then the function \(t\mapsto\frac{\Phi(t)}{t^{a_{\Phi}}}\) is increasing on \(\mathbb{R}_{+}^{*}\) while the function \(t\mapsto\frac{\Phi(t)}{t^{a_{\Phi}}}\) is decreasing on \(\mathbb{R}_{+}^{*}\) (see. [30, Lemma 2.1]).
Let \(\Phi\) be a growth function and \(q>0\). If \(\Phi\) is a one-to-one growth function then \(\Phi\in\mathscr{U}^{q}\) if and only if \(\Phi^{-1}\in\mathscr{L}_{1/q}\) (see. [31, Proposition 2.1]).
**Lemma 3.1** (Lemma 3.1, [12]).: _Let \(\Phi\in\mathscr{U}\). The following assertions are equivalents._
1. \(\Phi\in\nabla_{2}\)_._
2. _There exists a constant_ \(C_{1}>0\) _such that for all_ \(t>0\)_,_ (3.2) \[\int\limits_{0}^{t}\frac{\Phi(s)}{s^{2}}ds\leq C_{1}\frac{\Phi(t)}{t}.\]
3. _There exists a constant_ \(C_{2}>1\) _such that for all_ \(t>0\)_,_ (3.3) \[\Phi(t)\leq\frac{1}{2C_{2}}\Phi(C_{2}t).\]
**Lemma 3.2**.: _Let \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) be a growth function such that \(0<a_{\Phi}\leq b_{\Phi}<\infty\). For \(s>0\), consider \(\Phi_{s}\) the function defined by_
\[\Phi_{s}(t)=\Phi\left(t^{s}\right),\ \forall\ t\geq 0.\]
_Then \(sa_{\Phi}\leq a_{\Phi_{s}}\leq b_{\Phi_{s}}\leq sb_{\Phi}\)._
Proof.: For \(t>0\), we have
\[\left(\Phi_{s}(t)\right)^{\prime}=st^{s-1}\Phi^{\prime}\left(t^{s}\right) \Rightarrow\frac{t\left(\Phi_{s}(t)\right)^{\prime}}{\Phi_{s}(t)}=s\times \frac{t^{s}\Phi^{\prime}\left(t^{s}\right)}{\Phi\left(t^{s}\right)}.\]
It follows that
\[sa_{\Phi}\leq\frac{t\left(\Phi_{s}(t)\right)^{\prime}}{\Phi_{s}(t)}\leq sb_{ \Phi},\ \forall\ t>0.\]
**Corollary 3.3**.: _Let \(s\geq 1\) and \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) a growth function such that \(0<a_{\Phi}\leq b_{\Phi}<\infty\). For \(t\geq 0\), put_
\[\Phi_{s}(t)=\Phi\left(t^{s/a_{\Phi}}\right).\]
_The following assertions are satisfied._
* _If_ \(s=1\) _then_ \(\Phi_{s}\in\mathscr{U}\)_._
* _If_ \(s>1\) _then_ \(\Phi_{s}\in\mathscr{U}\cap\nabla_{2}\)_._
**Proposition 3.4**.: _Let \(\Phi_{1},\Phi_{2}\in\mathscr{C}^{1}(\mathbb{R}_{+})\) be two growth functions such that \(0<a_{\Phi_{1}}\leq b_{\Phi_{1}}<\infty\) and \(0<a_{\Phi_{2}}\leq b_{\Phi_{2}}<\infty\). Then \(\Phi_{1}\circ\Phi_{2}\in\mathscr{C}^{1}(\mathbb{R}_{+})\) growth function and_
\[a_{\Phi_{1}a_{\Phi_{2}}}\leq a_{\Phi_{1}\circ\Phi_{2}}\leq b_{\Phi_{1}\circ \Phi_{2}}\leq b_{\Phi_{1}}b_{\Phi_{2}}.\]
Proof.: For \(t>0\), we have
\[\left(\Phi_{1}\circ\Phi_{2}\right)^{\prime}(t)=\Phi_{1}^{\prime}\left(\Phi_{2 }(t)\right)\Phi_{2}^{\prime}(t)\Rightarrow\frac{t\left(\Phi_{1}\circ\Phi_{2} \right)^{\prime}(t)}{\Phi_{1}\circ\Phi_{2}(t)}=\frac{\Phi_{2}(t)\Phi_{1}^{ \prime}\left(\Phi_{2}(t)\right)}{\Phi_{1}\left(\Phi_{2}(t)\right)}\times\frac{ t\Phi_{2}^{\prime}(t)}{\Phi_{2}(t)}.\]
It follows that
\[a_{\Phi_{1}}a_{\Phi_{2}}\leq\frac{t\left(\Phi_{1}\circ\Phi_{2}\right)^{\prime} (t)}{\Phi_{1}\circ\Phi_{2}(t)}\leq b_{\Phi_{1}}b_{\Phi_{2}},\ \forall\ t>0.\]
**Proposition 3.5**.: _Let \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) a growth function. The following assertions are equivalent._
* \(0<a_{\Phi}\leq b_{\Phi}<\infty\)_._
* \(0<a_{\Phi^{-1}}\leq b_{\Phi^{-1}}<\infty\)_._
_Moreover, \(a_{\Phi^{-1}}=1/b_{\Phi}\) and \(b_{\Phi^{-1}}=1/a_{\Phi}\)._
Proof.: Show that \(i)\) implies \(ii)\). We have
\[\left(\Phi^{-1}\right)^{\prime}(t)=\frac{1}{\Phi^{\prime}\left(\Phi^{-1}(t) \right)},\ \forall\ t>0.\]
It follows that
\[0<a_{\Phi}\leq b_{\Phi}<\infty \Rightarrow 0<a_{\Phi}\leq\frac{t\Phi^{\prime}(t)}{\Phi(t)}\leq b_{\Phi}< \infty,\ \forall\ t>0\] \[\Rightarrow 0<a_{\Phi}\leq\frac{\Phi^{-1}(t)\Phi^{\prime}\left(\Phi^{-1}(t) \right)}{\Phi\left(\Phi^{-1}(t)\right)}\leq b_{\Phi}<\infty,\ \forall\ t>0\] \[\Rightarrow \frac{1}{b_{\Phi}}\leq\frac{t\left(\Phi^{-1}\right)^{\prime}(t)}{ \Phi^{-1}(t)}\leq\frac{1}{a_{\Phi}},\ \forall\ t>0.\]
We deduce on the one hand that
\[\frac{1}{b_{\Phi}}\leq a_{\Phi^{-1}}\leq b_{\Phi^{-1}}\leq\frac{1}{a_{\Phi}}. \tag{3.4}\]
Reasoning as above, we obtain that (ii) implies (i) and we deduce on the other hand that
\[\frac{1}{b_{\Phi^{-1}}}\leq a_{\Phi}\leq b_{\Phi}\leq\frac{1}{a_{\Phi^{-1}}}. \tag{3.5}\]
From the Relations (3.4) and (3.5) we conclude that \(a_{\Phi^{-1}}=1/b_{\Phi}\) and \(b_{\Phi^{-1}}=1/a_{\Phi}\).
**Proposition 3.6**.: _Let \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\). The following assertions are equivalent._
* _The function_ \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) _is non-decreasing on_ \(\mathbb{R}_{+}^{*}\)_._
* _The function_ \(t\mapsto\frac{\Phi_{2}\circ\Phi_{1}^{-1}(t)}{t}\) _is non-decreasing on_ \(\mathbb{R}_{+}^{*}\)_._
* _The function_ \(\Phi_{2}\circ\Phi_{1}^{-1}\) _belongs_ \(\mathscr{U}^{b_{\Phi_{2}}/a_{\Phi_{1}}}\)_._
Proof.: The equivalence between (i) and (ii) is obvious. That (iii) implies (ii) is also immediate.
Let us now show that (ii) implies (iii).
Since the functions \(t\mapsto\frac{\Phi_{1}^{-1}(t)}{t^{1/a_{\Phi_{1}}}}\) and \(t\mapsto\frac{\Phi_{2}(t)}{t^{a_{\Phi_{2}}}}\) are non-increasing on \(\mathbb{R}_{+}^{*}\), we deduce that for all \(s>0\) and \(t\geq 1\)
\[\Phi_{1}^{-1}(st)\leq t^{1/a_{\Phi_{1}}}\Phi_{1}^{-1}(s)\]
and
\[\Phi_{2}\left(t^{1/a_{\Phi_{1}}}\Phi_{1}^{-1}(s)\right)\leq t^{b_{\Phi_{2}}/a_ {\Phi_{1}}}\Phi_{2}\left(\Phi_{1}^{-1}(s)\right).\]
It follows that
\[\Phi_{2}\left(\Phi_{1}^{-1}(st)\right)\leq t^{b_{\Phi_{2}}/a_{\Phi_{1}}}\Phi_{ 2}\left(\Phi_{1}^{-1}(s)\right).\]
**Proposition 3.7**.: _Let \(\Phi\) be a growth function such that \(\Phi(t)>0\) for all \(t>0\). Consider \(\widetilde{\Omega}\) the function defined by_
\[\widetilde{\Omega}(t)=\frac{1}{\Phi\left(\frac{1}{t}\right)},\ \forall\ t>0\quad\text{ and } \quad\widetilde{\Omega}(0)=0.\]
_The following assertions are satisfied._
1. \(\Phi\in\mathscr{U}^{q}\) _(resp._ \(\mathscr{L}_{p}\)_) if and only if_ \(\widetilde{\Omega}\in\mathscr{U}^{q}\) _(resp._ \(\mathscr{L}_{p}\)_)._
2. \(\Phi\in\mathscr{U}\cap\nabla_{2}\) _if and only if_ \(\widetilde{\Omega}\in\mathscr{U}\cap\nabla_{2}\)_._
Proof.: _i_) Suppose that \(\Phi\in\mathscr{U}^{q}\). For \(0<t_{1}\leq t_{2}\), we have
\[\frac{\Phi(t_{1})}{t_{1}}\leq\frac{\Phi(t_{2})}{t_{2}}\Leftrightarrow\frac{ \Phi\left(1/t_{2}\right)}{1/t_{2}}\leq\frac{\Phi\left(1/t_{1}\right)}{1/t_{1}} \Leftrightarrow\frac{1}{t_{1}}\frac{1}{\Phi\left(1/t_{1}\right)}\leq\frac{1} {t_{2}}\frac{1}{\Phi\left(1/t_{2}\right)}\Leftrightarrow\frac{\widetilde{ \Omega}(t_{1})}{t_{1}}\leq\frac{\widetilde{\Omega}(t_{2})}{t_{2}}.\]
Since \(\Phi\) is of upper type \(q\) then so is the function \(\widetilde{\Omega}\). Indeed, for all \(s>0\) and \(t\geq 1\)
\[\Phi\left(\frac{1}{s}\right)=\Phi\left(t\times\frac{1}{st}\right)\leq C_{q}t^ {q}\Phi\left(\frac{1}{st}\right)\Rightarrow\frac{1}{C_{q}t^{q}\Phi\left(\frac {1}{st}\right)}\leq\frac{1}{\Phi\left(\frac{1}{s}\right)}\Rightarrow \widetilde{\Omega}(st)\leq C_{q}t^{q}\widetilde{\Omega}(s).\]
The converse is obtained similarly. We conclude that \(\Phi\in\mathscr{U}^{q}\) if and only if \(\widetilde{\Omega}\in\mathscr{U}^{q}\). Reasoning in the same way, we also show that \(\Phi\in\mathscr{L}_{p}\) if and only if \(\widetilde{\Omega}\in\mathscr{L}_{p}\).
(ii) We suppose that \(\Phi\in\mathscr{U}\cap\nabla_{2}\). For \(t>0\), we have
\[\Phi\left(\frac{1}{t}\right)\leq\frac{1}{2C}\Phi\left(\frac{C}{t}\right) \Rightarrow\frac{2C}{\Phi\left(\frac{C}{t}\right)}\leq\frac{1}{\Phi\left( \frac{1}{t}\right)}\Rightarrow 2C\widetilde{\Omega}\left(\frac{t}{C}\right)\leq \widetilde{\Omega}(t),\]
according to the Lemma 3.1. We deduce that \(\widetilde{\Omega}\in\mathscr{U}\cap\nabla_{2}\).
The converse is obtained similarly.
**Lemma 3.8**.: _Let \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\) and put_
\[\widetilde{\Omega}_{3}(t)=\frac{1}{\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{t} \right)},\ \forall\ t>0\quad\text{ and }\quad\widetilde{\Omega}_{3}(0)=0.\]
_If the function \(t\mapsto\frac{\Phi_{2}(t)}{\Phi_{1}(t)}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\) then \(\widetilde{\Omega}_{3}\in\mathscr{U}\)._
Proof.: The proof follows from Proposition 3.6 and Proposition 3.7.
**Lemma 3.9**.: _Let \(\Phi\in\widetilde{\mathscr{L}}\cup\widetilde{\mathscr{U}}\). There exists a constant \(C>0\) such that_
\[\Phi\left(\frac{s}{t}\right)\leq C\frac{\Phi(s)}{\Phi(t)},\ \forall\ s,t>0. \tag{3.6}\]
Proof.: The inequality (3.6) is true for \(\Phi\in\widetilde{\mathscr{U}}\) (see. [13, Lemma 4.3]).
For \(0<p\leq 1\) suppose that \(\Phi\in\widetilde{\mathscr{L}}_{p}\). For \(s,t>0\), we have
\[\Phi\left(\frac{s}{t}\right)\leq C_{1}\Phi(s)\Phi\left(\frac{1}{t}\right),\]
since the inequality (2.13) is satisfied.
If \(0<t<1\) then we have
\[\Phi(t)=\Phi\left(\frac{1}{\frac{1}{t}}\right)\leq C_{2}\frac{1^{p}}{\Phi\left( \frac{1}{t}\right)},\]
thanks to Relation (2.15). It follows that
\[\Phi\left(\frac{1}{t}\right)\leq C_{2}\frac{1}{\Phi(t)}. \tag{3.7}\]
If \(t\geq 1\) then we have
\[\Phi\left(\frac{1}{t}\right)=\Phi\left(\frac{1}{t}\times 1\right)\leq C_{p} \left(\frac{1}{t}\right)^{p}\Phi(1),\]
since \(\Phi\) is of lower type \(p\). It follows that
\[\Phi\left(\frac{1}{t}\right)\leq\frac{C_{2}}{\Phi(1)}\frac{1}{\Phi(t)}, \tag{3.8}\]
since from Relation (2.15), we have also
\[\Phi(t)=\Phi\left(\frac{t}{1}\right)\leq C_{2}\frac{t^{p}}{\Phi(1)}.\]
From Relations (3.7) and (3.8), we deduce that
\[\Phi\left(\frac{1}{t}\right)\lesssim\frac{1}{\Phi(t)}.\]
Therefore,
\[\Phi\left(\frac{s}{t}\right)\lesssim\frac{\Phi(s)}{\Phi(t)}.\]
### Some properties of Orlicz spaces
Let \((X,\sum,\mu)\) be a measure space and \(\Phi\) a growth function. The Orlicz space on \(X\), \(L^{\Phi}(X,d\mu)\) is the set of all equivalent classes (in the usual sense) of measurable functions \(f:X\longrightarrow\mathbb{C}\) which satisfy
\[\|f\|_{L^{\Phi}_{\mu}}^{lux}:=\inf\left\{\lambda>0:\int\limits_{X}\Phi\left( \frac{|f(x)|}{\lambda}\right)d\mu(x)\leq 1\right\}<\infty.\]
If \(\Phi\) is convex then \((L^{\Phi}(X,d\mu),\|.\|_{L^{\Phi}_{\mu}}^{lux})\) is a Banach space (see.[7, 18, 27]). The space \(L^{\Phi}\) generalizes the Lebesgue space \(L^{p}\) for \(0<p<\infty\).
Let \(\Phi\) be a growth function. Let \(f\in L^{\Phi}(X,d\mu)\) and put
\[\|f\|_{L^{\Phi}_{\mu}}:=\int\limits_{X}\Phi\left(|f(x)|\right)d\mu(x).\]
If \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) is a growth function such that \(0<a_{\Phi}\leq b_{\Phi}<\infty\), then we have the following inequalities
\[\|f\|_{L^{\Phi}_{\mu}}\lesssim\max\left\{\left(\|f\|_{L^{\Phi}_{\mu}}^{lux} \right)^{a_{\Phi}};\left(\|f\|_{L^{\Phi}_{\mu}}^{lux}\right)^{b_{\Phi}}\right\}\]
and
\[\|f\|_{L^{\Phi}_{\mu}}^{lux}\lesssim\max\left\{\left(\|f\|_{L^{\Phi}_{\mu}}^{ L^{1/a_{\Phi}}};\left(\|f\|_{L^{\Phi}_{\mu}}\right)^{1/b_{\Phi}}\right\}.\]
We will simply denote \(L^{\Phi}(\mathbb{R})=L^{\Phi}(\mathbb{R},dx)\), where \(dx\) is the Lebesgue measure on \(\mathbb{R}\).
Let \(\Phi\) be a convex growth function. We have the following inclusion
\[L^{\Phi}(\mathbb{R})\subset L^{1}\left(\mathbb{R},\frac{dt}{1+t^{2}}\right)\]
Let \(\alpha>-1\) and \(E\) be a measurable set of \(\mathbb{C}_{+}\). We denote
\[|E|_{\alpha}:=\int\limits_{E}dV_{\alpha}(x+iy).\]
Let \(I\) be an interval and \(Q_{I}\) its associated Carleson square. It is easy to see that
\[|Q_{I}|_{\alpha}=\frac{1}{1+\alpha}|I|^{2+\alpha}. \tag{3.9}\]
Fix \(\beta\in\{0;1/3\}\). An interval \(\beta-\)dyadic is any interval \(I\) of \(\mathbb{R}\) of the form
\[2^{-j}([0,1)+k+(-1)^{j}\beta),\]
where \(k,j\in\mathbb{Z}\). We denote by \(\mathcal{D}_{j}^{\beta}\) the set of \(\beta-\)dyadic intervals \(I\) such that \(|I|=2^{-j}\). Put \(\mathcal{D}^{\beta}:=\bigcup_{j}\mathcal{D}_{j}^{\beta}\). We have the following properties (see for example [9, 32]):
* for all \(I,J\in\mathcal{D}^{\beta}\), we have \(I\cap J\in\{\emptyset;I;J\}\),
* for each fixed \(j\in\mathbb{Z}\), if \(I\in\mathcal{D}_{j}^{\beta}\) then there exists a unique \(J\in\mathcal{D}_{j-1}^{\beta}\) such that \(I\subset J\),
* for each fixed \(j\in\mathbb{Z}\), if \(I\in\mathcal{D}_{j}^{\beta}\) then there exists \(I_{1},I_{2}\in\mathcal{D}_{j+1}^{\beta}\) such that \(I=I_{1}\cup I_{2}\) and \(I_{1}\cap I_{2}=\emptyset\).
We refer to [17, 26] for the following.
**Lemma 3.10**.: _Let \(I\) be an interval. There exists \(\beta\in\{0,1/3\}\) and \(J\in\mathcal{D}^{\beta}\) such that \(I\subset J\) and \(|J|\leq 6|I|\)._
Let \(\alpha>-1\) and \(f\) a measurable function on \(\mathbb{R}\) (resp. \(\mathbb{C}_{+}\)). The Hardy-Littlewood maximal functions on the line and on the upper-half plane for a function of \(f\) are respectively defined by
\[\mathcal{M}_{HL}(f)(x):=\sup_{I\subset\mathbb{R}}\frac{\chi_{I}(x)}{|I|}\int \limits_{I}|f(t)|dt,\ \forall\ x\in\mathbb{R},\]
and
\[\mathcal{M}_{V_{\alpha}}(f)(z):=\sup_{I\subset\mathbb{R}}\frac{\chi_{Q_{I}}(z )}{|Q_{I}|_{\alpha}}\int\limits_{Q_{I}}|f(\omega)|dV_{\alpha}(\omega),\ \forall\ z\in\mathbb{C}_{+},\]
where the supremum is taken over all intervals of \(\mathbb{R}\). Similarly, for \(\beta\in\{0;1/3\}\), we define their dyadic versions \(\mathcal{M}_{HL}^{\mathcal{D}^{\beta}}(f)\) and \(\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}(f)\) as above but with the supremum taken this time on the intervals in the dyadic grid \(\mathcal{D}^{\beta}\). We have
\[\mathcal{M}_{HL}(f)\leq 6\sum_{\beta\in\{0;1/3\}}\mathcal{M}_{HL}^{\mathcal{D}^{ \beta}}(f) \tag{3.10}\]
and
\[\mathcal{M}_{V_{\alpha}}(f)\leq 6^{2+\alpha}\sum_{\beta\in\{0;1/3\}}\mathcal{M}_{V_ {\alpha}}^{\mathcal{D}^{\beta}}(f). \tag{3.11}\]
**Proposition 3.11**.: _Let \(\beta\in\{0;1/3\}\), \(\alpha>-1\), \(0<\gamma<\infty\) and \(\Phi\) a growth function. Put_
\[\Phi_{\gamma}(t):=\Phi(t^{\gamma}),\ \forall\ t\geq 0.\]
_If \(\Phi_{\gamma}\) is convex then the following assertions are satisfied_
* _for all_ \(0\not\equiv f\in L^{\Phi}(\mathbb{R})\) _and for_ \(\lambda>0\)_,_
* _for all_ \(0\not\equiv f\in L^{\Phi}(\mathbb{C}_{+},dV_{\alpha})\) _and for_ \(\lambda>0\)__ \[\left|\left\{z\in\mathbb{C}_{+}:\left(\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{ \beta}}\left(\left(\frac{|f|}{\|f\|_{L^{\Phi}_{\alpha}}^{\text{law}}}\right)^{ 1/\gamma}\right)(z)\right)^{\gamma}>\lambda\right\}\right|_{\alpha}\leq\frac{1 }{\Phi(\lambda)}.\]
Proof.: _i_) Let \(0\not\equiv f\in L^{\Phi}(\mathbb{R})\) and put
\[g:=\frac{|f|^{1/\gamma}}{\left(\|f\|_{L^{\Phi}}^{\text{law}}\right)^{1/\gamma}}.\]
We have
\[\int\limits_{\mathbb{R}}\Phi_{\gamma}(|g(x)|)dx=\int\limits_{\mathbb{R}}\Phi_{ \gamma}\left(\left(\frac{|f(x)|}{\|f\|_{L^{\Phi}}^{lux}}\right)^{1/\gamma} \right)dx=\int\limits_{\mathbb{R}}\Phi\left(\frac{|f(x)|}{\|f\|_{L^{\Phi}}^{lux }}\right)dx\leq 1.\]
We deduce that \(g\in L^{\Phi_{\gamma}}(\mathbb{R})\) and \(\|g\|_{L^{\Phi_{\gamma}}}^{lux}\leq 1\).
For \(\lambda>0\), we can therefore find \(\{I_{j}\}_{j\in\mathbb{N}}\) a family of pairwise disjoint \(\beta-\)dyadic intervals such that
\[\left\{x\in\mathbb{R}:\mathcal{M}_{HL}^{\mathcal{D}^{\beta}}(g)(x)>\lambda^{1 /\gamma}\right\}=\bigcup\limits_{j\in\mathbb{N}}I_{j},\]
and
\[\lambda^{1/\gamma}<\frac{1}{|I_{j}|}\int\limits_{I_{j}}|g(y)|dy,\ \forall\ j\in\mathbb{N}.\]
For \(j\in\mathbb{N}\), we have
\[\Phi(\lambda)=\Phi_{\gamma}\left(\lambda^{1/\gamma}\right)\leq\Phi_{\gamma} \left(\frac{1}{|I_{j}|}\int\limits_{I_{j}}|g(y)|dy\right)\leq\frac{1}{|I_{j}| }\int\limits_{I_{j}}\Phi_{\gamma}(|g(y)|)dy,\]
thanks to Jensen's inequality. We deduce that
\[|I_{j}|\leq\frac{1}{\Phi(\lambda)}\int\limits_{I_{j}}\Phi_{\gamma}(|g(y)|)dy, \ \forall\ j\in\mathbb{N}.\]
It follows that
\[\left|\left\{x\in\mathbb{R}:\mathcal{M}_{HL}^{\mathcal{D}^{\beta} }(g)(x)>\lambda^{1/\gamma}\right\}\right| =\sum\limits_{j}|I_{j}|\] \[\leq\sum\limits_{j}\frac{1}{\Phi(\lambda)}\int\limits_{I_{j}} \Phi_{\gamma}(|g(y)|)dy\] \[=\frac{1}{\Phi(\lambda)}\int\limits_{\bigcup_{j}I_{j}}\Phi_{ \gamma}(|g(y)|)dy\leq\frac{1}{\Phi(\lambda)}.\]
In the same way, we prove the inequality of the point (ii).
**Theorem 3.12**.: _Let \(\alpha>-1\) and \(\Phi_{1},\Phi_{2}\in\mathscr{U}\). The following assertions are equivalent._
* _There exists a constant_ \(C_{1}>0\) _such that for all_ \(t>0\)_,_ (3.12) \[\int\limits_{0}^{t}\frac{\Phi_{2}(s)}{s^{2}}ds\leq C_{1}\frac{\Phi_{1}(t)}{t}.\]
* _There exists a constant_ \(C_{2}>0\) _such that for all_ \(f\in L^{\Phi_{1}}(\mathbb{R})\)_,_ (3.13) \[\|\mathcal{M}_{HL}(f)\|_{L^{\Phi_{2}}}^{lux}\leq C_{2}\|f\|_{L^{\Phi_{1}}}^{lux}.\]
* _There exists a constant_ \(C_{3}>0\) _such that for all_ \(f\in L^{\Phi_{1}}(\mathbb{C}_{+},dV_{\alpha})\)_,_ (3.14) \[\|\mathcal{M}_{V_{\alpha}}(f)\|_{L^{\Phi_{2}}_{V_{\alpha}}}^{lux}\leq C_{3}\|f \|_{L^{\Phi_{1}}_{V_{\alpha}}}^{lux}.\]
Proof.: \(i)\Leftrightarrow ii)\) This equivalence follows from the [10, Lemma 3.15].
\((i)\Rightarrow(iii)\) The proof of this implication is identical to that of the [12, Proposition 3.12].
Let us show that (iii) implies (i). Assume that inequality (3.12) is not satisfied. We can find a sequence of positive reals \((t_{k})_{k\geq 1}\) such that
\[\int\limits_{0}^{t_{k}}\frac{\Phi_{2}(s)}{s^{2}}ds\geq\frac{2^{k}\Phi_{1}(2^{ k}t_{k})}{t_{k}},\ \forall\ k\geq 1. \tag{3.15}\]
For \(k\geq 1\), put
\[f_{k}:=2^{k}t_{k}\chi_{Q_{I_{k}}},\]
where \(Q_{I_{k}}\) is the Carleson square associated with the interval \(I_{k}\) given as follows:
\[I_{k}:=\left\{x\in\mathbb{R}:\sum_{j=0}^{k-1}\left(\frac{\alpha+1}{2^{j}\Phi_{1 }(2^{j}t_{j})}\right)^{\frac{1}{\alpha+2}}\leq x<\sum_{j=0}^{k}\left(\frac{ \alpha+1}{2^{j}\Phi_{1}(2^{j}t_{j})}\right)^{\frac{1}{\alpha+2}}\right\}\]
From the relation (3.9), we have
\[|Q_{I_{k}}|_{\alpha}=\frac{1}{1+\alpha}|I_{k}|^{2+\alpha}=\frac{1}{2^{k}\Phi_{ 1}(2^{k}t_{k})}.\]
It follows that \(f_{k}\in L^{\Phi_{1}}(\mathbb{C}_{+},dV_{\alpha})\). In indeed
\[\int\limits_{\mathbb{C}_{+}}\Phi_{1}(|f_{k}(z)|)dV_{\alpha}(z)=\int\limits_{Q_ {I_{k}}}\Phi_{1}(2^{k}t_{k})dV_{\alpha}(z)=\Phi_{1}(2^{k}t_{k})|Q_{I_{k}}|_{ \alpha}=\frac{1}{2^{k}}<\infty.\]
According the Lemma 3.10, there exists a dyadic interval \(J_{k}\in\mathcal{D}^{\beta}\) such that \(I_{k}\subset J_{k}\) and \(|J_{k}|\leq 6|I_{k}|\). Let \(z\in Q_{I_{k}}\). We have
\[|f_{k}(z)|=\frac{1}{|Q_{I_{k}}|_{\alpha}}\int\limits_{Q_{I_{k}}}2^{k}t_{k} \chi_{Q_{I_{k}}}(\omega)dV_{\alpha}(\omega)\leq 6^{2+\alpha}\frac{\chi_{Q_{J_{k} }}(z)}{|Q_{J_{k}}|_{\alpha}}\int\limits_{Q_{J_{k}}}|f_{k}(\omega)|dV_{\alpha}( \omega),\]
where \(Q_{J_{k}}\) is the Carleson square associated with \(J_{k}\). We deduce that
\[|f_{k}(z)|\leq 6^{2+\alpha}\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}_{ \alpha}}(f_{k})(z),\ \forall\ z\in\mathbb{C}_{+}.\]
It follows that for \(\lambda>0\),
\[\frac{1}{\lambda}\int\limits_{\{z\in\mathbb{C}_{+}\ :\ |f_{k}(z)|>\lambda\}}|f_{k}(z) |dV_{\alpha}(z)\leq 2^{2+\alpha}\left|\left\{z\in\mathbb{C}_{+}:\mathcal{M}_{V_{ \alpha}}^{\mathcal{D}^{\beta}}(6^{2+\alpha}f_{k})(z)>\lambda\right\}\right|_{ \alpha}. \tag{3.16}\]
Put
\[f(z)=\sum_{k=1}^{\infty}6^{2+\alpha}f_{k}(z),\ \forall\ z\in\cup_{k\geq 1}Q_{I_{k}} \quad\text{ and }\quad f(z)=0,\ \forall\ z\in\mathbb{C}_{+}\backslash\cup_{k\geq 1}Q_{I_{k}}.\]
Since the \(I_{k}\) are pairwise disjoint, the same is true for the \(Q_{I_{k}}\). So we have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{1}(|f(z)|)dV_{\alpha}(z)\lesssim\sum_{k=1}^{ \infty}\int\limits_{\mathbb{C}_{+}}\Phi_{1}(|f_{k}(z)|)dV_{\alpha}(z)=\sum_{k=1 }^{\infty}\int\limits_{Q_{I_{k}}}\Phi_{1}(2^{k}t_{k})\chi_{Q_{I_{k}}}(z)dV_{ \alpha}(z)=\sum_{k=1}^{\infty}\frac{1}{2^{k}}<\infty.\]
We deduce that \(f\in L^{\Phi_{1}}(\mathbb{C}_{+},dV_{\alpha})\).
Since the inequalities (3.15) and (3.16) are satisfied, we have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\mathcal{M}_{V_{\alpha} }(f)(z)\right)dV_{\alpha}(z) \gtrsim\int\limits_{0}^{\infty}\Phi_{2}^{\prime}(\lambda)\left| \left\{z\in\mathbb{C}_{+}:\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}(6^{2+ \alpha}f_{k})(z)>\lambda\right\}\right|_{\alpha}d\lambda\] \[\gtrsim\int\limits_{0}^{\infty}\Phi_{2}^{\prime}(\lambda)\left( \frac{1}{\lambda}\int\limits_{\{\omega\in\mathbb{C}_{+}\ :\ |f_{k}(\omega)|>\lambda\}}|f_{k}(z)|dV_{\alpha}(z)\right)d\lambda\] \[\gtrsim\int\limits_{\mathbb{C}_{+}}|f_{k}(z)|\left(\int\limits_{0 }^{|f_{k}(z)|}\frac{\Phi_{2}(\lambda)}{\lambda^{2}}d\lambda\right)dV_{\alpha}(z)\] \[\gtrsim 2^{k}t_{k}|Q_{I_{k}}|_{\alpha}\left(\int\limits_{0}^{2^{k}t_{ k}}\frac{\Phi_{2}(\lambda)}{\lambda^{2}}d\lambda\right)\] \[\gtrsim 2^{k}.\]
We deduce that \(\mathcal{M}_{V_{\alpha}}(f)\not\in L^{\Phi_{2}}(\mathbb{C}_{+},dV_{\alpha})\).
**Corollary 3.13**.: _Let \(\alpha>-1\) and \(\Phi\in\mathscr{U}\). The following assertions are equivalent._
1. \(\Phi\in\nabla_{2}\)_._
2. \(\mathcal{M}_{HL}:L^{\Phi}(\mathbb{R})\longrightarrow L^{\Phi}(\mathbb{R})\) _is bounded._
3. \(\mathcal{M}_{V_{\alpha}}:L^{\Phi}(\mathbb{C}_{+},dV_{\alpha})\longrightarrow L ^{\Phi}(\mathbb{C}_{+},dV_{\alpha})\) _is bounded._
### Some properties of Hardy-Orlicz and Bergman-Orlicz spaces on \(\mathbb{C}_{+}\)
Let \(\Phi\) be a growth function and \(F\in H^{\Phi}(\mathbb{C}_{+})\). Put
\[\|F\|_{H^{\Phi}}:=\sup_{y>0}\int\limits_{\mathbb{R}}\Phi\left(|F(x+iy)|\right)dx.\]
Let \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) a growth function such that \(0<a_{\Phi}\leq b_{\Phi}<\infty\). We have the following inequalities
\[\|F\|_{H^{\Phi}}\lesssim\max\left\{\left(\|F\|_{H^{\Phi}}^{ lux}\right)^{a_{\Phi}};\left(\|F\|_{H^{\Phi}}^{lux}\right)^{b_{\Phi}}\right\}\]
and
\[\|F\|_{H^{\Phi}}^{lux}\lesssim\max\left\{\left(\|F\|_{H^{\Phi}}\right)^{1/a_{ \Phi}};\left(\|F\|_{H^{\Phi}}\right)^{1/b_{\Phi}}\right\}.\]
Let \(\Omega\) be an open set of \(\mathbb{C}\) and \(F:\Omega\longrightarrow]-\infty,+\infty]\) a function. We say that \(F\) is subharmonic if the following assertions are satisfied:
1. \(F\) is upper semicontinuous on \(\Omega\) \[F(z_{0})\geq\lim_{z\to z_{0}}F(z),\ \forall\ z_{0}\in\Omega,\]
2. for all \(z_{0}\in\Omega\), there exists \(r(z_{0})>0\) such that \(\mathcal{D}(z_{0},r(z_{0}))=\{z\in\Omega:|z-z_{0}|<r(z_{0})\}\) is contained in \(\Omega\) and such that for all \(r<r(z_{0})\) (3.17) \[F(z_{0})\leq\frac{1}{\pi r^{2}}\int\int\limits_{|x+iy-z_{0}|<r}F(x+iy)dxdy.\]
**Proposition 3.14**.: _Let \(\Phi\) be a growth function such that \(\Phi(t)>0\) for all \(t>0\). If \(\Phi\) is convex or belongs to \(\mathscr{L}\) then for \(F\in H^{\Phi}(\mathbb{C}_{+})\), we have_
\[|F(x+iy)|\leq\Phi^{-1}\left(\frac{2}{\pi y}\right)\|F\|_{H^{\Phi}}^{lux},\ \forall\ x+iy\in\mathbb{C}_{+}. \tag{3.18}\]
Proof.: For \(t\geq 0\), put
\[\Phi_{\rho}(t)=\Phi\left(t^{1/\rho}\right),\]
where \(\rho=1\) if \(\Phi\) is convex and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\). By construction, \(\Phi_{\rho}\) is a convex growth function. Let \(0\not\equiv F\in H^{\Phi}(\mathbb{C}_{+})\), and \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and \(r=\frac{y_{0}}{2}\). Since \(|F|^{\rho}\) is subharmonic on \(\mathbb{C}_{+}\), we have
\[|F(z_{0})|^{\rho}\leq\frac{1}{\pi r^{2}}\int\int\limits_{\overline{\mathcal{D} (z_{0},r)}}|F(u+iv)|^{\rho}dudv,\]
where \(\mathcal{D}(z_{0},r)\) is the disk centered at \(z_{0}\) and of radius \(r\). By Jensen's inequality, it follows that
\[\Phi\left(\frac{|F(z_{0})|}{\|F\|_{H^{\Phi}}^{lux}}\right) \leq\Phi_{\rho}\left(\frac{1}{\pi r^{2}}\int\limits_{\overline{ \mathcal{D}(z_{0},r)}}\left(\frac{|F(u+iv)|}{\|F\|_{H^{\Phi}}^{lux}}\right)^{ \rho}dudv\right)\] \[\leq\frac{1}{\pi r^{2}}\int\limits_{\overline{\mathcal{D}(z_{0}, r)}}\Phi\left(\frac{|F(u+iv)|}{\|F\|_{H^{\Phi}}^{lux}}\right)dudv\] \[\leq\frac{1}{\pi r^{2}}\int\limits_{0}^{2r}\int\limits_{\mathbb{R }}\Phi\left(\frac{|F(u+iv)|}{\|F\|_{H^{\Phi}}^{lux}}\right)dudv\leq\frac{1}{ \pi r^{2}}\int\limits_{0}^{2r}dv.\]
We deduce that
\[\Phi\left(\frac{|F(z_{0})|}{\|F\|_{H^{\Phi}}^{lux}}\right)\leq\frac{2}{\pi r}, \ \forall\ r<y_{0}.\]
**Lemma 3.15**.: _Let \(\Phi\) be a growth function such that \(\Phi(t)>0\) for all \(t>0\). If \(\Phi\) is convex or belongs to \(\mathscr{L}\), then for \(F\in H^{\Phi}(\mathbb{C}_{+})\) and for \(\beta>0\), we have_
\[\Phi(|F(z+i\beta)|)\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2} +y^{2}}\Phi(|F(t+i\beta)|)dt,\ \forall\ z=x+iy\in\mathbb{C}_{+}. \tag{3.19}\]
Proof.: For \(t\geq 0\), put
\[\Phi_{\rho}(t)=\Phi\left(t^{1/\rho}\right),\]
where \(\rho=1\) if \(\Phi\) is convex and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\).
Let \(0\not\equiv F\in H^{\Phi}(\mathbb{C}_{+})\) and \(\beta>0\). For \(z\in\mathbb{C}_{+}\), put
\[U_{\beta}(z)=|F(z+i\beta)|^{\rho}.\]
By construction, \(U_{\beta}\) is continuous on \(\overline{\mathbb{C}_{+}}:=\mathbb{C}_{+}\cup\mathbb{R}\) and subharmonic on \(\mathbb{C}_{+}\). For \(z=x+iy\in\overline{\mathbb{C}_{+}}\), we have
\[|U_{\beta}(z)|=|F(x+i(y+\beta))|^{\rho}\leq\left(\Phi^{-1}\left(\frac{2}{\pi( y+\beta)}\right)\|F\|_{H^{\Phi}}^{lux}\right)^{\rho}\leq\left(\Phi^{-1}\left( \frac{2}{\pi\beta}\right)\|F\|_{H^{\Phi}}^{lux}\right)^{\rho},\]
according to Proposition 3.14. We deduce that \(U_{\beta}\) is bounded on \(\overline{\mathbb{C}_{+}}\). It follows that
\[|F(z+i\beta)|^{\rho}\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{ 2}+y^{2}}|F(t+i\beta)|^{\rho}dt,\ \forall\ z=x+iy\in\mathbb{C}_{+},\]
thanks to [23, Corollary 10.15]. Since \(\Phi_{\rho}\) is convex, by Jensen's inequality we deduce that
\[\Phi(|F(z+i\beta)|)\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2} +y^{2}}\Phi(|F(t+i\beta)|)dt,\ \forall\ z=x+iy\in\mathbb{C}_{+}.\]
**Proposition 3.16**.: _Let \(\Phi\) be a growth function such that \(\Phi(t)>0\) for all \(t>0\) and \(F\) an analytic function on \(\mathbb{C}_{+}\). If \(\Phi\) is convex or belongs to \(\mathscr{L}\), then the following assertions are equivalent._
* \(F\in H^{\Phi}(\mathbb{C}_{+})\)_._
* _The function_ \(y\mapsto\|F(.+iy)\|_{L^{\Phi}}^{lux}\) _is non-increasing on_ \(\mathbb{R}_{+}^{*}\) _and_ \(\lim_{y\to 0}\|F(.+iy)\|_{L^{\Phi}}^{lux}<\infty\)_._
_Moreover,_
\[\|F\|_{H^{\Phi}}^{lux}=\lim_{y\to 0}\|F(.+\,iy)\|_{L^{\Phi}}^{lux}.\]
Proof.: The implication \((ii)\Rightarrow(i)\) is immediate.
Let us now show that (i) implies (ii). Suppose that \(F\not\equiv 0\) is non-identically zero because there is nothing to show when \(F\equiv 0\). Let \(0<y_{1}<y_{2}\). According to Lemma 3.15 and Fubbini's theorem, we have
\[\int\limits_{\mathbb{R}}\Phi\left(\frac{|F(x+iy_{2})|}{\|F(.+\, iy_{1})\|_{L^{\Phi}}^{lux}}\right)dx =\int\limits_{\mathbb{R}}\Phi\left(\frac{|F(x+i(y_{2}-y_{1})+iy_{1 })|}{\|F(.+\,iy_{1})\|_{L^{\Phi}}^{lux}}\right)dx\] \[=\int\limits_{\mathbb{R}}\Phi\left(\frac{|F(t+iy_{1})|}{\|F(.+iy_ {1})\|_{L^{\Phi}}^{lux}}\right)\left(\frac{1}{\pi}\int\limits_{\mathbb{R}} \frac{(y_{2}-y_{1})}{(x-t)^{2}+(y_{2}-y_{1})^{2}}dx\right)dt\] \[=\int\limits_{\mathbb{R}}\Phi\left(\frac{|F(t+iy_{1})|}{\|F(.+ iy_{1})\|_{L^{\Phi}}^{lux}}\right)dt\leq 1.\]
We deduce that \(\|F(.+\,iy_{2})\|_{L^{\Phi}}^{lux}\leq\|F(.+\,iy_{1})\|_{L^{\Phi}}^{lux}.\) Therefore,
\[\sup\limits_{y>0}\|F(.+\,iy)\|_{L^{\Phi}}^{lux}=\lim_{y\to 0}\|F(.+\,iy)\|_{L^{ \Phi}}^{lux}.\]
Let \(\Phi\) be a growth function. The Hardy space on \(\mathbb{D}\), \(H^{\Phi}(\mathbb{D})\) is the set of analytic function \(G\) on \(\mathbb{D}\) which satisfy
\[\|G\|_{H^{\Phi}(\mathbb{D})}^{lux}:=\sup_{0\leq r<1}\inf\left\{\lambda>0:\frac{ 1}{2\pi}\int\limits_{0}^{2\pi}\Phi\left(\frac{|G(re^{i\theta})|}{\lambda} \right)d\theta\leq 1\right\}<\infty.\]
Let \(\Phi\) be a growth function. If \(\Phi\) is convex or belongs to \(\mathscr{L}\) then for some \(\rho\in\{1;a_{\Phi}\}\),
\[H^{\Phi}(\mathbb{D})\subseteq H^{\rho}(\mathbb{D}). \tag{3.20}\]
The proof of the following result is identical to that of [10, Theorem 3.11]. Therefore, the proof will be omitted.
**Theorem 3.17**.: _Let \(\Phi\) be a growth function such that \(\Phi(t)>0\) for all \(t>0\). If \(\Phi\) is convex or belongs to \(\mathscr{L}\), then for \(F\in H^{\Phi}(\mathbb{C}_{+})\), the function \(G\) defined by_
\[G(\omega)=F\left(i\frac{1-\omega}{1+\omega}\right),\ \forall\ \omega\in \mathbb{D},\]
_is in \(H^{\Phi}(\mathbb{D})\). Moreover,_
\[\|G\|_{H^{\Phi}(\mathbb{D})}\leq\|F\|_{H^{\Phi}(\mathbb{C}_{+})}^{lux}.\]
Denote by \(B\) the function Beta defined by
\[B(m,n)=\int\limits_{0}^{\infty}\frac{u^{m-1}}{(1+u)^{m+n}}du,\ \forall\ m,n>0.\]
The following results can be found for example in [1].
**Lemma 3.18**.: _Let \(y>0\) and \(\alpha\in\mathbb{R}\). The integral_
\[J_{\alpha}(y)=\int\limits_{\mathbb{R}}\frac{dx}{|x+iy|^{\alpha}},\]
_converges if and only if \(\alpha>1\). In this case,_
\[J_{\alpha}(y)=B\left(\frac{1}{2},\frac{\alpha-1}{2}\right)y^{1-\alpha}.\]
**Lemma 3.19**.: _Let \(\alpha,\beta\in\mathbb{R}\) and \(t>0\). The integral_
\[\text{I}(t)=\int\limits_{0}^{\infty}\frac{y^{\alpha}}{(t+y)^{\beta}}dy, \tag{3.21}\]
_converges if and only if \(\alpha>-1\) and \(\beta>\alpha+1\). In this case,_
\[\text{I}(t)=B(1+\alpha,\beta-\alpha-1)t^{-\beta+\alpha+1}. \tag{3.22}\]
Nevanlinna's class on \(\mathbb{C}_{+}\), \(\mathscr{N}(\mathbb{C}_{+})\) is the set of holomorphic functions \(F\) on \(\mathbb{C}_{+}\) such that
\[\sup_{y>0}\int\limits_{\mathbb{R}}\log\left(1+|F(x+iy)|\right)dx<\infty.\]
For \(0\not\equiv F\in\mathscr{N}(\mathbb{C}_{+})\), there exists a unique function \(f\) measurable on \(\mathbb{R}\) such that \(\log|f|\in L^{1}\left(\mathbb{R},\frac{dt}{1+t^{2}}\right)\) and
\[\lim_{y\to 0}F(x+iy)=f(x),\]
for almost all \(x\in\mathbb{R}\), (see [24]).
**Proposition 3.20**.: _Let \(\Phi\in\mathscr{C}^{1}(\mathbb{R}_{+})\) be a growth function such that \(0<a_{\Phi}\leq b_{\Phi}<\infty\). The following assertions are satisfied._
1. _If_ \(0<a_{\Phi}\leq b_{\Phi}\leq 1\)_, then_ \(H^{\Phi}(\mathbb{C}_{+})\subset\mathscr{N}(\mathbb{C}_{+})\)_._
2. _If_ \(1<a_{\Phi}\leq b_{\Phi}<\infty\)_, then_ \(H^{\Phi}(\mathbb{C}_{+})\not\subset\mathscr{N}(\mathbb{C}_{+})\)_._
Proof.: \((i)\) For \(0\not\equiv F\in H^{\Phi}(\mathbb{C}_{+})\), put
\[F_{1}=F\chi_{0<\{|F|\leq 1\}}\ \ \text{and}\ \ F_{2}=F\chi_{\{|F|\geq 1\}}.\]
For \(z\in\mathbb{C}_{+}\), we have
\[\log(1+|F_{1}(z)|)\leq|F_{1}(z)|\leq|F_{1}(z)|^{b_{\Phi}}\leq\frac{1}{\Phi(1)} \times\Phi(|F_{1}(z)|)\]
and
\[\log(1+|F_{2}(z)|)=\frac{1}{a_{\Phi}}\log(1+|F_{2}(z)|)^{a_{\Phi}}\leq\frac{2 ^{a_{\Phi}}}{a_{\Phi}}|F_{2}(z)|^{a_{\Phi}}\leq\frac{2^{a_{\Phi}}}{a_{\Phi}} \frac{1}{\Phi(1)}\times\Phi(|F_{2}(z)|),\]
since the function \(t\mapsto\frac{\Phi(t)}{t^{a_{\Phi}}}\) (resp. \(t\mapsto\frac{\Phi(t)}{t^{b_{\Phi}}}\)) is non-decreasing (resp. non-increasing) on \(\mathbb{R}_{+}^{*}\). Using the sub-additivity of the logarithmic function on \((1,\infty)\), we deduce that
\[\log(1+|F(z)|)\lesssim\log(1+|F_{1}(z)|+|F_{2}(z)|)\lesssim\left(\Phi(|F_{1}(z )|)+\Phi(|F_{2}(z)|)\right).\]
It follows that \(F\in\mathscr{N}(\mathbb{C}_{+})\). Indeed, for \(y>0\), we have
\[\int\limits_{\mathbb{R}}\log(1+|F(x+iy)|)dx \lesssim \int\limits_{\mathbb{R}}\Phi(|F_{1}(x+iy)|)dx+\int\limits_{ \mathbb{R}}\Phi(|F_{2}(x+iy)|)dx\] \[\lesssim \sup\limits_{y>0}\int\limits_{\mathbb{R}}\Phi(|F(x+iy)|)dx<\infty.\]
\((ii)\) Let \(\alpha\in\mathbb{R}\) such that \(1/a_{\Phi}<\alpha<1\). For \(z\in\mathbb{C}_{+}\), put
\[F_{\alpha}(z)=\frac{1}{(z+i)^{\alpha}}.\]
By construction, \(F_{\alpha}\) is an analytic function on \(\mathbb{C}_{+}\) and
\[|F_{\alpha}(z)|=\frac{1}{|x+i(1+y)|^{\alpha}}<1,\ \forall\ z=x+iy\in\mathbb{C}_{+}.\]
We deduce that
\[\log\left(1+|F_{\alpha}(z)|\right)\geq\frac{1}{2}\frac{1}{|x+i(1+y)|^{\alpha}},\ \forall\ z=x+iy\in\mathbb{C}_{+}\]
and
\[\Phi\left(|F_{\alpha}(z)|\right)\leq\Phi(1)\frac{1}{|x+i(1+y)|^{\alpha a_{ \Phi}}},\ \forall\ z=x+iy\in\mathbb{C}_{+},\]
since \(|F_{\alpha}|<1\) and the function \(t\mapsto\frac{\Phi(t)}{t^{a_{\Phi}}}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\). It follows that \(F_{\alpha}\in H^{\Phi}(\mathbb{C}_{+})\) and \(F_{\alpha}\not\in\mathscr{N}(\mathbb{C}_{+})\). Indeed, for \(y>0\), we have
\[\int\limits_{\mathbb{R}}\Phi\left(|F_{\alpha}(x+iy)|\right)dx\lesssim B\left( \frac{1}{2},\frac{\alpha a_{\Phi}-1}{2}\right)(1+y)^{1-\alpha a_{\Phi}}\leq B \left(\frac{1}{2},\frac{\alpha a_{\Phi}-1}{2}\right)<+\infty\]
and
\[\int\limits_{\mathbb{R}}\log\left(1+|F_{\alpha}(x+iy)|\right)dx\geq\frac{1}{ 2}\int\limits_{\mathbb{R}}\frac{dx}{|x+i(1+y)|^{\alpha}}=+\infty,\]
according to Lemma 3.18.
Let \(f\) be a measurable function on \(\mathbb{R}\). The Poisson integral \(U_{f}\) of \(f\) is the function defined by
\[U_{f}(x+iy):=\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2}}f(t )dt,\ \forall\ x+iy\in\mathbb{C}_{+},\]
when it makes sense.
If \(f\in L^{1}\left(\mathbb{R},\frac{dt}{1+t^{2}}\right)\) then \(U_{f}\) is a harmonic function on \(\mathbb{C}_{+}\) and
\[\lim\limits_{y\to 0}U_{f}(x+iy)=f(x),\]
for almost all \(x\in\mathbb{R}\) (see [23]).
**Lemma 3.21** (Lemma 4.1, [10]).: _Let \(\Phi\) be a convex growth function such that \(\Phi(t)>0\) for all \(t>0\) and \(0\not\equiv F\) an analytic function on \(\mathbb{C}_{+}\). The following assertions are equivalent._
1. \(F\in H^{\Phi}(\mathbb{C}_{+})\)_._
2. _There exists a unique function_ \(f\in L^{\Phi}\left(\mathbb{R}\right)\) _such that_ \(\log|f|\in L^{1}\left(\mathbb{R},\frac{dt}{1+t^{2}}\right)\) _and_ \[F(x+iy)=U_{f}(x+iy),\ \forall\ x+iy\in\mathbb{C}_{+}.\]
_Moreover,_
\[\|F\|_{H^{\Phi}}^{lux}=\lim_{y\to 0}\|F(.+iy)\|_{L^{\Phi}}^{lux}=\|f\|_{L^{ \Phi}}^{lux}.\]
**Theorem 3.22**.: _Let \(\Phi\) be a growth function such that \(\Phi(t)>0\) for all \(t>0\). If \(\Phi\) is convex or belongs to \(\mathscr{L}\), then for \(0\not\equiv F\in H^{\Phi}(\mathbb{C}_{+})\), there exists a unique function \(f\in L^{\Phi}\left(\mathbb{R}\right)\) such that \(\log|f|\in L^{1}\left(\frac{dt}{1+t^{2}}\right)\),_
\[f(x)=\lim_{y\to 0}F(x+iy),\]
_for almost all \(x\in\mathbb{R}\), \(f(t)\neq 0\) for almost all \(t\in\mathbb{R}\),_
\[\log|F(x+iy)|\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2 }}\log|f(t)|dt,\ \forall\ x+iy\in\mathbb{C}_{+}\]
_and_
\[\|F\|_{H^{\Phi}}^{lux}=\lim_{y\to 0}\|F(.+iy)\|_{L^{\Phi}}^{lux}=\|f\|_{L^{ \Phi}}^{lux}. \tag{3.23}\]
Proof.: Let \(0\not\equiv F\in H^{\Phi}(\mathbb{C}_{+})\). There exists a unique measurable function \(f\) on \(\mathbb{R}\) such that \(\log|f|\in L^{1}\left(\frac{dt}{1+t^{2}}\right)\) and
\[\lim_{y\to 0}F(x+iy)=f(x),\]
for almost all \(x\in\mathbb{R}\), according to point \((i)\) of Proposition 3.20 and Lemma 3.21. Suppose that there exists \(A\) a measurable subset of \(\mathbb{R}\) with Lebesgue measure \(|A|>0\), and
\[f(x)=0,\ \forall\ x\in A.\]
We have
\[+\infty=\int\limits_{A}|\log|f(t)||\frac{dt}{1+t^{2}}\leq\int\limits_{\mathbb{ R}}|\log|f(t)||\frac{dt}{1+t^{2}}.\]
We deduce that \(\log|f|\not\in L^{1}\left(\frac{dt}{1+t^{2}}\right)\). Which is absurd. Hence, \(f(t)\neq 0\), for almost all \(t\in\mathbb{R}\). For \(\omega\in\mathbb{D}\), put
\[G(\omega)=F\left(i\frac{1-\omega}{1+\omega}\right).\]
Since \(G\in H^{\Phi}(\mathbb{D})\subset H^{p}(\mathbb{D})\), with \(p>0\), there exists a unique function \(g\in L^{\Phi}(\mathbb{T})\) such that \(\log|g|\in L^{1}(\mathbb{T})\) and
\[\lim_{r\to 1}G(re^{i\theta})=g(e^{i\theta}),\]
for almost all \(\theta\in\mathbb{R}\) and
\[\log|G(re^{i\theta})|\leq\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}\frac{1-r^{2}} {1-2r\cos(u-\theta)+r^{2}}\log|g(e^{iu})|du,\ \forall\ re^{i\theta}\in\mathbb{D}.\]
Moreover,
\[\log|g(e^{i\theta})|=\lim_{r\to 1}\left(\frac{1}{2\pi}\int\limits_{-\pi}^{\pi} \frac{1-r^{2}}{1-2r\cos(u-\theta)+r^{2}}\log|g(e^{iu})|du\right), \tag{3.24}\]
for almost all \(\theta\in\mathbb{R}\).
Consider \(\varphi\), the map defined by
\[\varphi(\omega)=i\frac{1-\omega}{1+\omega},\ \forall\ \omega\in\mathbb{D}\cup \mathbb{T}\backslash\{-1\},\]
where \(\mathbb{T}\) is the complex unit circle. Note that the restriction of \(\varphi\) to \(\mathbb{D}\) (resp. \(\mathbb{T}\backslash\{-1\}\)) is an analytic function on \(\mathbb{D}\) with values in \(\mathbb{C}_{+}\) (resp. a homeomorphism from \(\mathbb{T}\backslash\{-1\}\) onto \(\mathbb{R}\)).
For \(z=x+iy\in\mathbb{C}_{+}\) and \(\omega=re^{iu}\in\mathbb{D}\) such that \(z=i\frac{1-\omega}{1+\omega}\), using
\[y=\frac{1-r^{2}}{1+r^{2}+2r\cos u}\]
and the Relation (3.24), we deduce that
\[|f(x)|=|g\circ\varphi^{-1}(x)|,\]
for almost all \(x\in\mathbb{R}\). Therefore,
\[\log|F(x+iy)|\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2 }}\log|f(t)|dt,\ \forall\ x+iy\in\mathbb{C}_{+}. \tag{3.25}\]
Indeed
\[\log|F(x+iy)| =\log|G(re^{iu})|\] \[\leq\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}\frac{1-r^{2}}{1-2r \cos(u-\theta)+r^{2}}\log|g(e^{i\theta})|d\theta\] \[=\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2}} \log|g\circ\varphi^{-1}(t)|dt=\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{( x-t)^{2}+y^{2}}\log|f(t)|dt.\]
Let us prove Relation (3.23). By Fatou's lemma, we have
\[\int\limits_{\mathbb{R}}\Phi\left(\frac{|f(x)|}{\|F\|_{H^{\Phi}}^{lux}} \right)dx\leq\liminf_{y\to 0}\int\limits_{\mathbb{R}}\Phi\left(\frac{|F(x+ iy)|}{\|F\|_{H^{\Phi}}^{lux}}\right)dx\leq\sup_{y>0}\int\limits_{\mathbb{R}} \Phi\left(\frac{|F(x+iy)|}{\|F\|_{H^{\Phi}}^{lux}}\right)dx\leq 1.\]
We deduce that \(f\in L^{\Phi}(\mathbb{R})\) and
\[\|f\|_{L^{\Phi}}^{lux}\leq\|F\|_{H^{\Phi}}^{lux}. \tag{3.26}\]
Put
\[\Phi_{\rho}(t)=\Phi\left(t^{1/\rho}\right),\ \forall\ t\geq 0,\]
where \(\rho=1\) if \(\Phi\) is convex and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\).
From Jensen's inequality and also from the Relation (3.25), we deduce that
\[|F(x+iy)|^{\rho}\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y ^{2}}|f(t)|^{\rho}dt,\ \forall\ x+iy\in\mathbb{C}_{+}.\]
Fix \(y>0\). We have
\[\int\limits_{\mathbb{R}}\Phi\left(\frac{|F(x+iy)|}{\|f\|_{L^{\Phi }}^{lux}}\right)dx \leq\int\limits_{\mathbb{R}}\Phi_{\rho}\left(\frac{1}{\pi}\int \limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2}}\left(\frac{|f(t)|}{\|f\|_{L^{ \Phi}}^{lux}}\right)^{\rho}dt\right)dx\] \[\leq\int\limits_{\mathbb{R}}\frac{1}{\pi}\int\limits_{\mathbb{R} }\frac{y}{(x-t)^{2}+y^{2}}\Phi_{\rho}\left(\left(\frac{|f(t)|}{\|f\|_{L^{\Phi }}^{lux}}\right)^{\rho}\right)dtdx\] \[=\int\limits_{\mathbb{R}}\Phi\left(\frac{|f(t)|}{\|f\|_{L^{ \Phi}}^{lux}}\right)\left(\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^ {2}+y^{2}}dx\right)dt\] \[=\int\limits_{\mathbb{R}}\Phi\left(\frac{|f(t)|}{\|f\|_{L^{ \Phi}}^{lux}}\right)dt\leq 1.\]
We deduce that
\[\|F\|_{H^{\Phi}}^{lux}\leq\|f\|_{L^{\Phi}}^{lux}. \tag{3.27}\]
From Relations (3.26) and (3.27) and also from Proposition 3.16, it follows that
\[\|F\|_{H^{\Phi}}^{lux}=\lim_{y\to 0}\|F(.+iy)\|_{L^{\Phi}}^{lux}=\|f\|_{L^{ \Phi}}^{lux}.\]
**Lemma 3.23**.: _Let \(\alpha>-1\) and \(\Phi\) a one-to-one growth function. If \(\Phi\) is convex or belongs to \(\mathscr{L}\), then there exists a constant \(C:=C_{\alpha,\Phi}>1\) such that for \(F\in A_{\alpha}^{\Phi}(\mathbb{C}_{+})\),_
\[|F(x+iy)|\leq C\Phi^{-1}\left(\frac{1}{y^{2+\alpha}}\right)\|F\|_{A_{\alpha}^{ \Phi}}^{lux},\ \forall\ x+iy\in\mathbb{C}_{+}. \tag{3.28}\]
Proof.: For \(t\geq 0\), put
\[\Phi_{\rho}(t)=\Phi\left(t^{1/\rho}\right),\]
where \(\rho=1\) if \(\Phi\) is convex and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\).
Let \(0\not\equiv F\in A_{\alpha}^{\Phi}(\mathbb{C}_{+})\). Fix \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and put \(r=\frac{y_{0}}{2}\). Since \(|F|^{\rho}\) is subharmonic on \(\mathbb{C}_{+}\), we have
\[|F(z_{0})|^{\rho}\leq\frac{1}{\pi r^{2}}\int\int\limits_{\overline{\mathcal{D} (z_{0},r)}}|F(u+iv)|^{\rho}dudv.\]
For \(u+iv\in\overline{\mathcal{D}(z_{0},r)}\), we have
\[r\leq v\leq 3r\Rightarrow 0<\frac{1}{v^{\alpha}}\leq 2^{\alpha}\times\frac{1}{y_ {0}^{\alpha}},\ \text{if}\ \alpha\geq 0\quad\text{ and }\quad 0<\frac{1}{v^{\alpha}}\leq\left(\frac{2}{3}\right)^{\alpha}\times \frac{1}{y_{0}^{\alpha}},\ \text{if}\ -1<\alpha<0.\]
We deduce that
\[0<\frac{1}{v^{\alpha}}\leq C_{\alpha}\frac{1}{y_{0}^{\alpha}},\ \forall\ u+iv\in \overline{\mathcal{D}(z_{0},r)}, \tag{3.29}\]
where \(C_{\alpha}:=\max\left\{2^{\alpha};(2/3)^{\alpha}\right\}\). By Jensen's inequality, we have
\[\Phi\left(\left(\frac{\pi}{4C_{\alpha}}\right)^{1/\rho}\times \frac{|F(z_{0})|}{\|F\|_{A_{\alpha}^{\Phi}}^{lux}}\right) \leq\frac{\pi}{4C_{\alpha}}\Phi_{\rho}\left(\frac{1}{\pi r^{2}} \int\int\limits_{\overline{\mathcal{D}(z_{0},r)}}\left(\frac{|F(u+iv)|}{\|F\| _{A_{\alpha}^{\Phi}}^{lux}}\right)^{\rho}dudv\right)\] \[\leq\frac{\pi}{4C_{\alpha}}\times\frac{4}{\pi y_{0}^{2}}\times \frac{C_{\alpha}}{y_{0}^{\alpha}}\int\limits_{\overline{\mathcal{D}(z_{0},r)}} \Phi\left(\frac{|F(u+iv)|}{\|F\|_{A_{\alpha}^{\Phi}}^{lux}}\right)v^{\alpha} dudv\] \[\leq\frac{1}{y_{0}^{2+\alpha}}\int\limits_{\mathbb{C}_{+}}\Phi \left(\frac{|F(u+iv)|}{\|F\|_{A_{\alpha}^{\Phi}}^{lux}}\right)dV_{\alpha}(u+iv) \leq\frac{1}{y_{0}^{2+\alpha}}.\]
We deduce that
\[|F(z_{0})|\leq\left(\frac{4C_{\alpha}}{\pi}\right)^{1/\rho}\Phi^{-1}\left( \frac{1}{y_{0}^{2+\alpha}}\right)\|F\|_{A_{\alpha}^{\Phi}}^{lux}.\]
**Proposition 3.24**.: _Let \(\alpha>-1\). There exist \(C:=C_{\alpha}>0\) and \(\beta\in\{0,1/3\}\) such that for any analytic function \(F\) on \(\mathbb{C}_{+}\) and for all \(0<\gamma<\infty\),_
\[|F(z)|^{\gamma}\leq C\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}\left(|F|^{ \gamma}\right)(z),\ \forall\ z\in\mathbb{C}_{+}. \tag{3.30}\]
Proof.: Let \(0<\gamma<\infty\) and \(0\not\equiv F\) an analytic function on \(\mathbb{C}_{+}\). Fix \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and \(r=\frac{y_{0}}{2}\). From Relation (3.29) we have
\[0<\frac{1}{v^{\alpha}}\leq\max\left\{2^{\alpha};(2/3)^{\alpha}\right\}\frac{1} {y_{0}^{\alpha}},\ \forall\ u+iv\in\overline{\mathcal{D}(z_{0},r)}.\]
Let \(I\) be an interval centered at \(x_{0}\) and of length \(|I|=2y_{0}\). Consider \(Q_{I}\) the Carleson square associated with \(I\). According to Lemma 3.10, there exist \(\beta\in\{0,1/3\}\) and \(J\in\mathcal{D}^{\beta}\) such that \(I\subset J\) and \(|J|\leq 6|I|\). From Relation (3.9) we have
\[|Q_{J}|_{\alpha}=\frac{1}{1+\alpha}|J|^{2+\alpha}\leq\frac{6^{2+\alpha}}{1+ \alpha}|I|^{2+\alpha}=\frac{12^{2+\alpha}}{1+\alpha}y_{0}^{2+\alpha}.\]
Since \(|F|^{\gamma}\) is subharmonic on \(\mathbb{C}_{+}\) and \(\overline{\mathcal{D}(z_{0},r)}\) is contained in \(Q_{I}\) we have
\[|F(z_{0})|^{\gamma} \leq\frac{1}{\pi r^{2}}\int\int\limits_{\overline{\mathcal{D}(z_{0},r)}}|F(u+iv)|^{\gamma}dudv\] \[\leq\frac{4}{\pi y_{0}^{2}}\times\frac{\max\left\{2^{\alpha};(2/3 )^{\alpha}\right\}}{y_{0}^{\alpha}}\int\limits_{\overline{\mathcal{D}(z_{0},r )}}|F(u+iv)|^{\gamma}v^{\alpha}dudv\] \[\leq C_{\alpha}\frac{\chi_{Q_{J}}(z_{0})}{|Q_{J}|_{\alpha}}\int \limits_{\int\limits_{Q_{J}}}|F(u+iv)|^{\gamma}v^{\alpha}dudv\leq C_{\alpha} \mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}\left(|F|^{\gamma}\right)(z_{0}),\]
where \(C_{\alpha}:=\frac{4}{\pi}\times\frac{12^{2+\alpha}}{1+\alpha}\times\max\left\{ 2^{\alpha};(2/3)^{\alpha}\right\}\).
**Proposition 3.25**.: _Let \(\alpha>-1\) and \(\Phi\) a one-to-one growth function. If \(\Phi\) is convex or belongs to \(\mathscr{L}\) then there exists some constants \(\rho\in\{1;a_{\Phi}\}\) and_
\[C_{\alpha}:=B\left(1+\alpha,2+\alpha\right)B\left(\frac{1}{2},\frac{3+2\alpha }{2}\right), \tag{3.31}\]
_such that for all \(z=x+iy\in\mathbb{C}_{+}\) the functions \(F_{z}\) and \(G_{z}\) defined respectively by_
\[F_{z}(\omega)=\Phi^{-1}\left(\frac{1}{\pi y}\right)\frac{y^{2/\rho}}{(\omega- \overline{z})^{2/\rho}},\ \forall\ \omega\in\mathbb{C}_{+} \tag{3.32}\]
_and_
\[G_{z}(\omega)=\Phi^{-1}\left(\frac{1}{C_{\alpha}y^{2+\alpha}}\right)\frac{y^{ (4+2\alpha)/\rho}}{(\omega-\overline{z})^{(4+2\alpha)/\rho}},\ \forall\ \omega\in\mathbb{C}_{+}, \tag{3.33}\]
_are analytic functions belong respectively to \(H^{\Phi}(\mathbb{C}_{+})\) and \(A_{\alpha}^{\Phi}(\mathbb{C}_{+})\). Moreover, \(\|F_{z}\|_{H^{\Phi}}^{lux}\leq 1\) and \(\|G_{z}\|_{A_{\Phi}^{\Phi}}^{lux}\leq 1\)._
Proof.: Fix \(z=x+iy\in\mathbb{C}_{+}\). By construction \(F_{z}\) ad \(G_{z}\) are analytic functions which does not vanish on \(\mathbb{C}_{+}\). For \(\omega=u+iv\in\mathbb{C}_{+}\), we have
\[\frac{y^{2}}{|(u-x)+i(y+v)|^{2}}\leq 1.\]
Put \(\rho=1\) if \(\Phi\) is convex and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\), and
\[C_{\alpha}:=B\left(1+\alpha,2+\alpha\right)B\left(\frac{1}{2},\frac{3+2\alpha }{2}\right).\]
Since the function \(t\mapsto\frac{\Phi(t)}{t\rho}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\), we deduce that
\[\int\limits_{\mathbb{R}}\Phi\left(|F_{z}(u+iv)|\right)du\lesssim\frac{y}{\pi} \int\limits_{\mathbb{R}}\frac{1}{|(u-x)+i(y+v)|^{2}}du\]
and
\[\int\limits_{\mathbb{C}_{+}}\Phi(|G_{z}(\omega)|)dV_{\alpha}(\omega)\lesssim \frac{y^{2+\alpha}}{C_{\alpha}}\int\limits_{0}^{\infty}\left(\int\limits_{ \mathbb{R}}\frac{du}{|(u-x)+i(v+y)|^{4+2\alpha}}\right)v^{\alpha}dv.\]
According to Lemma 3.18, we have
\[\int\limits_{\mathbb{R}}\frac{1}{|(u-x)+i(y+v)|^{2}}du=B\left(\frac{1}{2}, \frac{1}{2}\right)\frac{1}{y+v}\]
and
\[\int\limits_{\mathbb{R}}\frac{du}{|(u-x)+i(v+y)|^{4+2\alpha}}=B\left(\frac{1}{ 2},\frac{3+2\alpha}{2}\right)\frac{1}{(v+y)^{3+2\alpha}}.\]
We deduce that
\[\int\limits_{\mathbb{R}}\Phi\left(|F_{z}(u+iv)|\right)du\lesssim 1,\ \forall\ v>0\]
\[\int\limits_{\mathbb{C}_{+}}\Phi(|G_{z}(\omega)|)dV_{\alpha}(\omega)\lesssim 1,\]
since
\[\int\limits_{0}^{\infty}\frac{v^{\alpha}}{(y+v)^{3+2\alpha}}dv=B(1+\alpha,2+ \alpha)\frac{1}{y^{2+\alpha}},\]
thanks to Lemma 3.19. Therefore, \(F_{z}\in H^{\Phi}(\mathbb{C}_{+})\) with \(\|F_{z}\|_{H^{\Phi}}^{lux}\leq 1\) and \(G_{z}\in A_{\alpha}^{\Phi}(\mathbb{C}_{+})\) with \(\|G_{z}\|_{A_{\alpha}^{\Phi}}^{lux}\leq 1\).
## 4. Some characterizations of Carleson measures.
In this section, we give among others, a general characterization of an \((s,\Phi)\)-Carleson measure.
**Proposition 4.1**.: _Let \(s>0\), \(\alpha>-1\) and \(\Phi_{1},\Phi_{2}\) be two one-to-one growth functions. The following assertions are equivalent._
* \(V_{\alpha}\) _is a_ \((s,\Phi_{2}\circ\Phi_{1}^{-1})-\)_Carleson measure._
* _There exists a constant_ \(C>0\) _such that for all_ \(t>0\)__ (4.1) \[\Phi_{1}^{-1}(t^{s})\leq\Phi_{2}^{-1}(Ct^{2+\alpha}).\]
Proof.: Show that (i) implies (ii).
Fix \(t>0\) and let \(I\) an interval such that \(|I|=\frac{1}{t}\). Consider \(Q_{I}\) the Carleson square associated with \(I\). Since \(V_{\alpha}\) is a \((s,\Phi_{2}\circ\Phi_{1}^{-1})-\)Carleson measure, we have
\[V_{\alpha}(Q_{I})\leq\frac{C}{\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{s }}\right)}\Rightarrow\frac{1}{1+\alpha}\frac{1}{t^{2+\alpha}}\leq\frac{C}{ \Phi_{2}\circ\Phi_{1}^{-1}(t^{s})}\Rightarrow\Phi_{2}\circ\Phi_{1}^{-1}(t^{s}) \leq(1+\alpha)Ct^{\alpha+2}.\]
For the converse, we suppose that (ii) is true and prove (i).
Let \(I\) be an interval of nonzero length and \(Q_{I}\) the Carleson square associated with \(I\). Since the inequality (4.1) is satisfied, we have
\[\Phi_{1}^{-1}\left(\frac{1}{|I|^{s}}\right)\leq\Phi_{2}^{-1}\left( \frac{C}{|I|^{\alpha+2}}\right) \Rightarrow\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{s}}\right) \leq\frac{C}{|I|^{\alpha+2}}\] \[\Rightarrow\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{s}} \right)\leq\frac{C}{(1+\alpha)V_{\alpha}(Q_{I})}\] \[\Rightarrow V_{\alpha}(Q_{I})\leq\frac{C^{\prime}}{\Phi_{2}\circ \Phi_{1}^{-1}\left(\frac{1}{|I|^{s}}\right)}.\]
**Proposition 4.2**.: _Let \(s\geq 1\) and \(\Phi\in\mathscr{U}\). Put_
\[d\mu(x+iy)=\frac{dxdy}{y^{2}\Phi\left(\frac{1}{y^{\prime}}\right)},\ \forall\ x+iy\in\mathbb{C}_{+}.\]
_If \(\Phi\in\nabla_{2}\) then \(\mu\) is a measure \((s,\Phi)-\)Carleson. In particular, the converse is true for \(s=1\)._
Proof.: Put
\[\widetilde{\Omega}(t)=\frac{1}{\Phi\left(\frac{1}{t}\right)},\ \forall\ t>0\quad\text{ and } \quad\widetilde{\Omega}(0)=0.\]
According to Proposition 3.7, \(\Phi\in\mathscr{U}\cap\nabla_{2}\).
Let \(I\) be an interval of nonzero length and \(Q_{I}\) the Carleson square associated with \(I\). We have
\[\mu(Q_{I}) =\int\limits_{0}^{|I|}\int\limits_{I}\frac{\widetilde{\Omega}(y^{s })}{y^{2}}dxdy=|I|\int\limits_{0}^{|I|}\frac{\widetilde{\Omega}(y^{s})}{y^{2s} }y^{s-1}y^{s-1}dy\] \[\leq s^{-1}|I|^{s}\int\limits_{0}^{|I|^{s}}\frac{\widetilde{ \Omega}(y)}{y^{2}}dy\leq s^{-1}|I|^{s}C\frac{\widetilde{\Omega}(|I|^{s})}{|I|^{s }}=\frac{C/s}{\Phi\left(\frac{1}{|I|^{s}}\right)},\]
thanks to Lemma 3.1. In particular, for \(s=1\), we have
\[\mu(Q_{I})\lesssim\widetilde{\Omega}(|I|)\Leftrightarrow\int\limits_{0}^{|I|} \frac{\widetilde{\Omega}(y)}{y^{2}}dy\lesssim\frac{\widetilde{\Omega}(|I|)}{|I|}.\]
**Lemma 4.3**.: _Let \(\alpha>-1\), \(\Phi\in\mathscr{U}\) and \(\mu\) be a positive Borel measure on \(\mathbb{C}_{+}\). Put_
\[\widetilde{\Omega}(t)=\frac{1}{\Phi\left(\frac{1}{t}\right)},\ \forall\ t>0\quad\text{ and } \quad\widetilde{\Omega}(0)=0.\]
_The following assertions are satisfied_
* \(\mu\) _is a measure_ \(\Phi-\)_Carleson if and only if there exists a constant_ \(C_{1}>0\) _such that for all_ \(f\in L^{1}\left(\mathbb{R},\frac{dt}{1+t^{2}}\right)\) _and any_ \(\lambda>0\)_,_ (4.2) \[\mu\left(\left\{z\in\mathbb{C}_{+}:|U_{f}(z)|>\lambda\right\}\right)\leq C_{ 1}\widetilde{\Omega}\left(|\{x\in\mathbb{R}:\mathcal{M}_{HL}(f)(x)>\lambda\} |\right),\]
_where_ \(U_{f}\) _is the Poisson integral of_ \(f\)_._
* \(\mu\) _is a measure_ \((\alpha,\Phi)-\)_Carleson if and only if there exists a constant_ \(C_{2}>0\) _such that for_ \(f\in L^{\Phi}\left(\mathbb{C}_{+},dV_{\alpha}\right)\) _and_ \(\lambda>0\)_,_ (4.3)
Proof.: (i) That \(\mu\) is a measure \(\Phi-\)Carleson implies that (4.2) holds, has already been proved in [12, Lemma 4.2].
Suppose the inequality (4.2) is satisfied and show that \(\mu\) is a measure \(\Phi-\)Carleson.
Let \(I\) be an interval of \(\mathbb{R}\) of non-zero length and \(Q_{I}\) the Carleson square associated with \(I\). Put
\[\lambda=\frac{1}{2}\Phi^{-1}\left(\frac{1}{|I|}\right)\]
and
\[f=2\lambda\chi_{I}.\]
By construction \(f\in L^{\Phi}(\mathbb{R})\) and \(\|f\|_{L^{\Phi}}^{lux}\leq 1\). Indeed
\[\int\limits_{\mathbb{R}}\Phi(|f(x)|)dx=\int\limits_{I}\Phi\left(\Phi^{-1} \left(\frac{1}{|I|}\right)\right)dx=1.\]
Let \(x_{0}+iy_{0}\in Q_{I}\). We have
\[\lambda<f(x_{0})=\liminf_{y\to 0}U_{f}(x_{0}+iy)\leq U_{f}(x_{0}+iy_{0}),\]
where \(U_{f}\) is the Poisson integral of \(f\). We deduce that
\[Q_{I}\subset\left\{z\in\mathbb{C}_{+}:|U_{f}(z)|>\lambda\right\}.\]
Since inequality (4.2) is satisfied, we have
\[\mu(Q_{I}) \lesssim\mu\left(\left\{z\in\mathbb{C}_{+}:|U_{f}(z)|>\lambda\right\}\right)\] \[\lesssim\widetilde{\Omega}\left(|\{x\in\mathbb{R}:\mathcal{M}_{ HL}(f)(x)>\lambda\}|\right)\] \[\lesssim\widetilde{\Omega}\left(\frac{1}{\Phi\left(\lambda\right) }\right)\lesssim\widetilde{\Omega}\left(|I|\right).\]
(ii) Again, that \(\mu\) is a measure \((\alpha,\Phi)-\)Carleson implies that (4.3) holds was proved in [12, Lemma 4.3]. Let us prove the converse. Let \(I\) be an interval of nonzero length and \(Q_{I}\) the Carleson square associated with \(I\). Put
\[\lambda=\frac{1}{2}\Phi^{-1}\left(\frac{1+\alpha}{|I|^{2+\alpha}}\right)\]
and
\[f=2\lambda\chi_{Q_{I}}.\]
By construction \(f\in L^{\Phi}(\mathbb{C}_{+},dV_{\alpha})\) and \(\|f\|_{L^{\Phi}_{\alpha}}^{lux}\leq 1\). Indeed
\[\int\limits_{\mathbb{C}_{+}}\Phi(|f(z)|)dV_{\alpha}(z)\leq\int\limits_{Q_{I}} \Phi\left(\Phi^{-1}\left(\frac{1+\alpha}{|I|^{2+\alpha}}\right)\right)dV_{ \alpha}(z)=1.\]
By Lemma 3.10, there are \(\beta\in\{0,1/3\}\) and \(J\in\mathcal{D}^{\beta}\) such that \(I\subset J\) and \(|J|\leq 6|I|\). Consider \(Q_{J}\) the Carleson square associated with \(J\). Let \(z\in Q_{I}\). We have
\[\lambda<\frac{\chi_{Q_{I}}(z)}{|Q_{I}|_{\alpha}}\int\limits_{Q_{I}}f(\omega) dV_{\alpha}(\omega)\lesssim\frac{\chi_{Q_{J}}(z)}{|Q_{J}|_{\alpha}}\int\limits_{Q_ {J}}f(\omega)dV_{\alpha}(\omega)\lesssim\mathcal{M}_{V_{\alpha}}^{\mathcal{D} ^{\beta}}f(z).\]
We deduce that
\[Q_{I}\subset\left\{z\in\mathbb{C}_{+}:\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{ \beta}}f(z)>\lambda\right\}.\]
Since the inequality (4.3) is satisfied and by Chebychev's inequality, we have
\[\mu(Q_{I}) \lesssim\mu\left(\left\{z\in\mathbb{C}_{+}:\mathcal{M}_{V_{\alpha }}^{\mathcal{D}^{\beta}}f(z)>\lambda\right\}\right)\] \[\lesssim\widetilde{\Omega}\left(\left|\left\{z\in\mathbb{C}_{+}: \mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}f(z)>\lambda\right\}\right|_{ \alpha}\right)\] \[\lesssim\widetilde{\Omega}\left(\frac{1}{\Phi\left(\Phi^{-1} \left(\frac{1}{|I|^{2+\alpha}}\right)\right)}\right)\lesssim\widetilde{\Omega }\left(|I|^{2+\alpha}\right).\]
The following is a generalization of [12, Theorem 4.1]
**Theorem 4.4**.: _Let \(s>0\) be a real, \(\Phi_{1},\Phi_{2}\) two one-to-one growth functions and \(\mu\) a positive Borel measure on \(\mathbb{C}_{+}\). If \(\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\) and \(\Phi_{1}\) is convex or belongs \(\mathscr{L}\) then the following assertions are equivalent._
* \(\mu\) _is a_ \((s,\Phi_{2}\circ\Phi_{1}^{-1})-\)_Carleson measure._
* _There exist some constants_ \(\rho\in\{1;a_{\Phi_{1}}\}\) _and_ \(C:=C_{s,\Phi_{1},\Phi_{2}}>0\) _such that for all_ \(z=x+iy\in\mathbb{C}_{+}\)__ (4.4) \[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y^{\prime }}\right)\frac{y^{2s/\rho}}{|\omega-\overline{z}|^{2s/\rho}}\right)d\mu( \omega)\leq C.\]
Proof.: Show that (ii) implies (i). We assume that the inequality (4.4) holds.
Let \(I\) be an interval of nonzero length and \(Q_{I}\) its Carleson square.
Fix \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and we assume that \(x_{0}\) is the center of \(I\) and \(|I|=2y_{0}\).
Let \(\omega=u+iv\in Q_{I}\). We have
\[|\omega-\overline{z_{0}}|^{2}=|(u-x_{0})+i(v+y_{0})|^{2}\leq y_{0}^{2}+(3y_{0} )^{2}=10y_{0}^{2}.\]
It follows that
\[1\leq 10^{s/\rho}\frac{y_{0}^{2s/\rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}}.\]
Since \(\Phi_{1}^{-1}\) is increasing and \(t\mapsto\frac{\Phi_{2}(t)}{t^{\Phi_{2}}}\) is non-increasing on \(\mathbb{R}_{+}^{*}\), we have
\[\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{s}}\right) \leq\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right) \frac{10^{s/\rho}y_{0}^{2s/\rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}}\right)\] \[\leq 10^{sb_{\Phi_{2}}/\rho}\Phi_{2}\left(\Phi_{1}^{-1}\left( \frac{1}{y_{0}^{s}}\right)\frac{y_{0}^{2s/\rho}}{|\omega-\overline{z_{0}}|^{2s /\rho}}\right).\]
We deduce that
\[\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{s}}\right)\leq 10^{sb_{\Phi_{2}}/ \rho}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right)\frac{y_{0}^{2s /\rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}}\right),\ \forall\ \omega\in Q_{I}.\]
Since the inequality (4.4) is satisfied, we have
\[\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{s}}\right)\mu(Q_{I}) =\int\limits_{Q_{I}}\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{ s}}\right)d\mu(\omega)\] \[\leq 10^{bb_{2}/\rho}\int\limits_{\mathcal{C}_{+}}\Phi_{2}\left( \Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right)\frac{y_{0}^{2s/\rho}}{|\omega- \overline{z}_{0}|^{2s/\rho}}\right)d\mu(\omega)\leq 10^{sb_{2}/\rho}C_{2}.\]
We deduce that
\[\mu(Q_{I})\leq\frac{10^{sb_{2}/\rho}C_{2}}{\Phi_{2}\circ\Phi_{1}^{-1}\left( \frac{1}{|I|^{s}}\right)}.\]
For the converse, we assume that the inequality (2.4) holds.
Put
\[\rho=\left\{\begin{array}{ll}1&\text{if $\Phi_{1}$ is convex}\\ a_{\Phi_{1}}&\text{if $\Phi_{1}\in\mathscr{L}$}\end{array}\right.\]
Fix \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and let \(j\in\mathbb{N}\). Consider \(I_{j}\) the centered interval \(x_{0}\) with \(|I_{j}|=2^{j+1}y_{0}\) and \(Q_{I_{j}}\) its Carleson square. Put
\[E_{j}:=Q_{I_{j}}\backslash Q_{I_{j-1}},\ \forall\ j\geq 1\ \text{and}\ E_{0}=Q_{I_{0}}.\]
Fix \(j\in\mathbb{N}\) and let \(\omega=u+iv\in\mathbb{C}_{+}\).
If \(\omega\in E_{0}\) then we have
\[|\omega-\overline{z_{0}}|^{2}=|(u-x_{0})+i(v+y_{0})|^{2}\geq(v+y_{0})^{2}\geq y _{0}^{2}\geq 2^{-2}y_{0}^{2}.\]
If \(\omega\in E_{j}\) with \(j\geq 1\) then we have
\[|\omega-\overline{z_{0}}|^{2}=|(u-x_{0})+i(v+y_{0})|^{2}\geq(u-x_{0})^{2}\geq 2 ^{2(j-1)}y_{0}^{2}.\]
We deduce that
\[\frac{y_{0}^{2s/\rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}}\leq\frac{1}{2^{2( j-1)s/\rho}},\ \forall\ \omega\in E_{j},\ \forall\ j\geq 0.\]
Fix \(j\in\mathbb{N}\) and let \(\omega\in E_{j}\). Since the functions \(t\mapsto\frac{\Phi_{1}^{-1}(t)}{t^{1/\rho}}\) and \(t\mapsto\frac{\Phi_{2}(t)}{t^{2\Phi_{2}}}\) are non-increasing on \(\mathbb{R}_{+}^{*}\) and \(t\mapsto\frac{\Phi_{2}(t)}{t^{1\Phi_{2}}}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\), we have
\[\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right)\frac {y_{0}^{2s/\rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}}\right) \leq\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right) \frac{1}{2^{2(j-1)s/\rho}}\right)\] \[=\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right) \frac{1}{2^{(j+1)s/\rho}}\times\frac{1}{2^{js/\rho}}\times\frac{1}{2^{-3s/ \rho}}\right)\] \[\leq\frac{1}{2^{-3sb_{2}/\rho}}\times\frac{1}{2^{jsa_{2}/\rho}} \times\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I_{j}|^{s}}\right).\]
We deduce that
\[\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right)\frac{y_{0}^{2s/ \rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}}\right)\leq\frac{1}{2^{-3sb_{2}/ \rho}}\times\frac{1}{2^{jsa_{2}/\rho}}\times\Phi_{2}\circ\Phi_{1}^{-1}\left( \frac{1}{|I_{j}|^{s}}\right),\ \forall\ \omega\in E_{j}.\]
Since the inequality (2.4) holds, it follows that
\[\int\limits_{E_{j}}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0 }^{s}}\right)\frac{y_{0}^{2s/\rho}}{|\omega-\overline{z_{0}}|^{2s/\rho}} \right)d\mu(\omega) \leq\int\limits_{E_{j}}\frac{1}{2^{-3sb_{2}/\rho}}\times\frac{1}{2^{ jsa_{2}/\rho}}\times\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I_{j}|^{s}} \right)d\mu(\omega)\] \[\leq\frac{1}{2^{-3sb_{2}/\rho}}\times\frac{1}{2^{jsa_{2}/\rho}} \times\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I_{j}|^{s}}\right)\mu(Q_{I_{j}})\] \[\leq\frac{1}{2^{-3sb_{2}/\rho}}\times\frac{1}{2^{jsa_{2}/\rho}} \times C_{1}.\]
We deduce that
\[\int\limits_{E_{j}}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y_{0}^{s}}\right) \frac{y_{0}^{2s/\rho}}{|\omega-\overline{z}_{0}|^{2s/\rho}}\right)d\mu(\omega) \leq\frac{C_{1}}{2^{-3sb_{2}/\rho}}\times\frac{1}{2^{jsa_{2}/\rho}},\ \forall\ j\geq 0.\]
By construction, the \(E_{j}\) are pairwise disjoint and form a partition of \(\mathbb{C}_{+}\). So we have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\Phi_{1}^{-1}\left(\frac {1}{y_{0}^{s}}\right)\frac{y_{0}^{2s/\rho}}{|\omega-\overline{z}_{0}|^{2s/\rho }}\right)d\mu(\omega) =\sum\limits_{j=0}^{\infty}\int\limits_{E_{j}}\Phi_{2}\left(\Phi_ {1}^{-1}\left(\frac{1}{y_{0}^{s}}\right)\frac{y_{0}^{2s/\rho}}{|\omega- \overline{z}_{0}|^{2s/\rho}}\right)d\mu(\omega)\] \[\leq\frac{C_{1}}{2^{-3sb_{2}/\rho}}\times\sum\limits_{j=0}^{ \infty}\frac{1}{2^{jsa_{2}/\rho}}<\infty.\]
\(\Box\)
## 5. Proofs of main results.
Proof of Theorem 2.2.: The equivalence \((i)\Leftrightarrow(ii)\) is given by Theorem 4.4. The implication \((iii)\Rightarrow(iv)\) is obvious. Let us prove that \((i)\Rightarrow(iii)\) and \((iv)\Rightarrow(i)\) which is enough to conclude.
\((i)\Rightarrow(iii)\): Let \(0\not\equiv F\in H^{\Phi_{1}}(\mathbb{C}_{+})\). According to Theorem 3.22, there exists a unique function \(f\in L^{\Phi}\left(\mathbb{R}\right)\) such that \(\log|f|\in L^{1}\left(\frac{dt}{1+t^{2}}\right)\) and
\[\log|F(x+iy)|\leq\frac{1}{\pi}\int\limits_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2 }}\log|f(t)|dt,\ \forall\ x+iy\in\mathbb{C}_{+} \tag{5.1}\]
and \(\|F\|_{H^{\Phi}}^{lux}=\|f\|_{L^{\Phi}}^{lux}\). Using Jensen's inequality in Relation (5.1), we deduce that
\[|F(x+iy)|\lesssim\left(\mathcal{M}_{HL}(|f|^{a_{\Phi_{1}}/2})(x)\right)^{2/a_ {\Phi_{1}}},\ \forall\ x+iy\in\mathbb{C}_{+}.\]
Fix \(\lambda>0\) and put
\[E_{\lambda}:=\left\{x\in\mathbb{R}:\left(\mathcal{M}_{HL}\left(\frac{|f|}{\| f\|_{L^{\Phi}}^{lux}}\right)^{a_{\Phi_{1}}/2}(x)\right)^{2/a_{\Phi_{1}}}> \lambda\right\}.\]
From the Relation(3.10), we deduce that
Put
\[\Phi_{a}(t)=\Phi_{1}\left(t^{2/a_{\Phi_{1}}}\right),\ \forall\ t\geq 0.\]
From Proposition 3.3, we deduce that \(\Phi_{a}\in\mathscr{U}\cap\nabla_{2}\). According to Proposition 3.11, it follows that
\[\left|\left\{x\in\mathbb{R}:\left(\mathcal{M}_{HL}^{\mathcal{D}^{\mathcal{D}} }\left(\frac{|f|}{\|f\|_{L^{\Phi}}^{lux}}\right)^{a_{\Phi_{1}}/2}(x)\right)^{2 /a_{\Phi_{1}}}>\frac{\lambda}{12}\right\}\right|\lesssim\frac{1}{\Phi_{1}( \lambda)},\ \forall\ \beta\in\{0;1/3\}.\]
We deduce that
\[|E_{\lambda}|\lesssim\frac{1}{\Phi_{1}(\lambda)}.\]
Put
\[\widetilde{\Omega}_{3}(t)=\frac{1}{\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{t }\right)},\ \forall\ t>0\quad\text{ and }\quad\widetilde{\Omega}_{3}(0)=0.\]
From Lemma 3.8, we deduce that \(\widetilde{\Omega}_{3}\in\mathscr{U}\). Since \(\mu\) is an \(\Phi_{2}\circ\Phi_{1}^{-1}-\)Carleson measure and \(t\mapsto\frac{\widetilde{\Omega}_{3}(t)}{t}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\), by Lemma 4.3, we have
\[\mu\left(\left\{z\in\mathbb{C}_{+}:|F(z)|>\lambda\|f\|_{L^{\Phi_{1} }}^{lux}\right\}\right) \lesssim\mu\left(\left\{z\in\mathbb{C}_{+}:|U_{f}(z)|>\lambda\|f\| _{L^{\Phi_{1}}}^{lux}\right\}\right)\] \[\lesssim\widetilde{\Omega}_{3}\left(|E_{\lambda}|\right)\] \[\lesssim\Phi_{1}(\lambda)\widetilde{\Omega}_{3}\left(\frac{1}{ \Phi_{1}(\lambda)}\right)|E_{\lambda}|\,.\]
As
\[\Phi_{1}(\lambda)\widetilde{\Omega}_{3}\left(\frac{1}{\Phi_{1}(\lambda)} \right)=\Phi_{1}(\lambda)\frac{1}{\Phi_{2}(\lambda)}=\frac{\Phi_{1}(\lambda)} {\lambda}\times\frac{\lambda}{\Phi_{2}(\lambda)}\approx\frac{\Phi_{1}^{\prime }(\lambda)}{\Phi_{2}^{\prime}(\lambda)}.\]
We deduce that
\[\mu\left(\left\{z\in\mathbb{C}_{+}:|F(z)|>\lambda\|f\|_{L^{\Phi_{1}}}^{lux} \right\}\right)\lesssim\frac{\Phi_{1}^{\prime}(\lambda)}{\Phi_{2}^{\prime}( \lambda)}\left|E_{\lambda}\right|,\ \forall\ \lambda>0.\]
We have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|F(z)|}{\|F\|_{H^ {\Phi_{1}}}^{lux}}\right)d\mu(z) =\int\limits_{0}^{\infty}\Phi_{2}^{\prime}(\lambda)\mu\left( \left\{z\in\mathbb{C}_{+}:|F(z)|>\lambda\|f\|_{L^{\Phi_{1}}}^{lux}\right\} \right)d\lambda\] \[\lesssim\int\limits_{0}^{\infty}\Phi_{2}^{\prime}(\lambda)\left( \frac{\Phi_{1}^{\prime}(\lambda)}{\Phi_{2}^{\prime}(\lambda)}\times|E_{\lambda }|\right)d\lambda\] \[=\int\limits_{0}^{\infty}\Phi_{1}^{\prime}(\lambda)\times|E_{ \lambda}|d\lambda=\int\limits_{\mathbb{R}}\Phi_{a}\left(\mathcal{M}_{HL}^{ \mathcal{D}^{\beta}}\left(\frac{|f|}{\|f\|_{L^{\Phi}}^{lux}}\right)^{a_{\Phi_{ 1}}/2}(x)\right)dx\] \[\lesssim\int\limits_{\mathbb{R}}\Phi_{1}\left(\frac{|f(x)|}{\|f \|_{L^{\Phi_{1}}}^{lux}}\right)dx\lesssim 1.\]
\((iv)\Rightarrow(i)\): Let \(I\) be an interval of nonzero length and \(Q_{I}\) its Carleson square.
Fix \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and we assume that \(x_{0}\) is the center of \(I\) and \(|I|=2y_{0}\). Put
\[F_{z_{0}}(\omega)=\Phi_{1}^{-1}\left(\frac{1}{\pi y_{0}}\right)\frac{y_{0}^{2 /\rho}}{(\omega-\overline{z_{0}})^{2/\rho}},\ \forall\ \omega\in\mathbb{C}_{+},\]
where \(\rho=1\) if \(\Phi\in\mathscr{U}\) and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\). By Proposition 3.25, we deduce that \(F_{z_{0}}\in H^{\Phi_{1}}(\mathbb{C}_{+})\) and \(\|F_{z_{0}}\|_{H^{\Phi_{1}}}^{lux}\leq 1\).
Let \(\omega=u+iv\in Q_{I}\). We have
\[|\omega-\overline{z_{0}}|^{2}=|(u-x_{0})+i(v+y_{0})|^{2}\leq y_{0}^{2}+(2y_{0} +y_{0})^{2}=10y_{0}^{2}\Rightarrow\frac{1}{10}\leq\frac{y_{0}^{2}}{|\omega- \overline{z_{0}}|^{2}}.\]
Since the function \(t\mapsto\frac{\Phi_{1}^{-1}(t)}{t^{1/\rho}}\) is non-increasing on \(\mathbb{R}_{+}^{*}\), we have
\[\Phi_{1}^{-1}\left(\frac{1}{|I|}\right)<\Phi_{1}^{-1}\left(\frac{1}{y_{0}} \right)\leq\pi^{1/\rho}\Phi_{1}^{-1}\left(\frac{1}{\pi y_{0}}\right).\]
We deduce that
\[\Phi_{1}^{-1}\left(\frac{1}{|I|}\right)<\left(\frac{\pi}{10}\right)^{1/\rho} \Phi_{1}^{-1}\left(\frac{1}{\pi y_{0}}\right)\frac{y_{0}^{2/\rho}}{|\omega- \overline{z_{0}}|^{2/\rho}}\leq\left(\frac{\pi}{10}\right)^{1/\rho}\frac{|F_{ z_{0}}(\omega)|}{\|F_{z_{0}}\|_{H^{\Phi_{1}}}^{lux}}.\]
Taking
\[\lambda:=\left(\frac{10}{\pi}\right)^{1/\rho}\Phi_{1}^{-1}\left(\frac{1}{|I|} \right),\]
it follows that
\[|F_{z_{0}}(\omega)|>\lambda\|F_{z_{0}}\|_{H^{\Phi_{1}}}^{lux},\ \forall\ \omega\in Q_{I}.\]
Therefore
\[Q_{I}\subset\left\{z\in\mathbb{C}_{+}:|F_{z_{0}}(z)|>\lambda\|F_{z_{0}}\|_{H^{ \Phi_{1}}}^{lux}\right\}.\]
Since inequality (2.7) is satisfied, we have
\[\mu(Q_{I})\leq\mu\left(\left\{z\in\mathbb{C}_{+}:|F_{z_{0}}(z)|>\lambda\|F_{z_{0}} \|_{H^{\Phi_{1}}}^{lux}\right\}\right)\leq\frac{C_{1}}{\Phi_{2}(\lambda)}.\]
As
\[\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|}\right)=\Phi_{2}\left(\left(\frac{ \pi}{10}\right)^{1/\rho}\lambda\right)\leq C_{2}\Phi_{2}(\lambda).\]
We deduce that
\[\mu(Q_{I})\leq\frac{C_{3}}{\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|}\right)}.\]
\(\Box\)
_Proof of Corollary 2.3._ The proof of Corollary 2.3 follows from Theorem 2.2 and Proposition 4.1 for \((s=1)\). \(\Box\)
_Proof of Theorem 2.4._ The equivalence \((i)\Leftrightarrow(ii)\) is given by Theorem 4.4. The implication \((iii)\Rightarrow(iv)\) is obvious. To conclude, it is enough to prove that \((i)\Rightarrow(iii)\) and \((iv)\Rightarrow(i)\).
\((i)\Rightarrow(iii)\): Let \(0\not\equiv F\in A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+})\). By Proposition 3.24, there exists \(\beta\in\{0,1/3\}\) such that
\[|G(z)|\lesssim\left(\mathcal{M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}\left(|G|^{ \alpha\Phi_{1}/2}\right)(z)\right)^{2/a_{\Phi_{1}}},\ \forall\ z\in\mathbb{C}_{+},\]
where \(G:=\frac{|F(z)|}{\|F\|_{A_{\alpha}^{\Phi_{1}}}^{\alpha\Phi_{1}}}\). Put
\[\widetilde{\Omega}_{3}(t)=\frac{1}{\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{t} \right)},\ \forall\ t>0\quad\text{ and }\quad\widetilde{\Omega}_{3}(0)=0.\]
From Lemma 3.8, we deduce that \(\widetilde{\Omega}_{3}\in\mathscr{U}\). Since \(t\mapsto\frac{\widetilde{\Omega}_{3}(t)}{t}\) is non-decreasing on \(\mathbb{R}_{+}^{*}\), according to Proposition 3.11, for \(\lambda>0\), we have
\[\left|E_{\lambda}\right|_{\alpha}\leq\frac{1}{\Phi_{1}(\lambda)}\Rightarrow \widetilde{\Omega}_{3}\left(|E_{\lambda}|_{\alpha}\right)\leq\Phi_{1}( \lambda)\widetilde{\Omega}_{3}\left(\frac{1}{\Phi_{1}(\lambda)}\right)\left|E_ {\lambda}\right|_{\alpha}\lesssim\frac{\Phi_{1}^{\prime}(\lambda)}{\Phi_{2}^{ \prime}(\lambda)}\left|E_{\lambda}\right|_{\alpha},\]
where
\[E_{\lambda}:=\left\{z\in\mathbb{C}_{+}:\left(\mathcal{M}_{V_{\alpha}}^{ \mathcal{D}^{\beta}}\left(|G|^{a_{\Phi_{1}}/2}\right)(z)\right)^{2/a_{\Phi_{1 }}}>\lambda\right\}.\]
Since \(\mu\) is an \((\alpha,\Phi_{2}\circ\Phi_{1}^{-1})-\)Carleson measure, by Lemma 4.3, we deduce that
\[\mu(E_{\lambda})\lesssim\widetilde{\Omega}_{3}\left(|E_{\lambda}|_{\alpha} \right)\lesssim\frac{\Phi_{1}^{\prime}(\lambda)}{\Phi_{2}^{\prime}(\lambda)} \left|E_{\lambda}\right|_{\alpha},\ \forall\ \lambda>0.\]
Put
\[\Phi_{a}(t)=\Phi_{1}\left(t^{2/a_{\Phi_{1}}}\right),\ \forall\ t\geq 0.\]
From Proposition 3.3, we deduce that \(\Phi_{a}\in\mathscr{U}\cap\nabla_{2}\). We have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|F(z)|}{\|F\|_{A_{ \alpha}^{\Phi_{1}}}^{lux}}\right)d\mu(z) \lesssim\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\left(\mathcal{ M}_{V_{\alpha}}^{\mathcal{D}^{\beta}}\left(|G|^{a_{\Phi_{1}}/2}\right)(z) \right)^{2/a_{\Phi_{1}}}\right)d\mu(z)\] \[=\int\limits_{0}^{\infty}\Phi_{2}^{\prime}(\lambda)\mu(E_{ \lambda})d\lambda\] \[\lesssim\int\limits_{0}^{\infty}\Phi_{2}^{\prime}(\lambda)\left( \frac{\Phi_{1}^{\prime}(\lambda)}{\Phi_{2}^{\prime}(\lambda)}\left|E_{\lambda }\right|_{\alpha}\right)d\lambda\] \[=\int\limits_{\mathbb{C}_{+}}\Phi_{a}\left(\mathcal{M}_{V_{\alpha }}^{\mathcal{D}^{\beta}}\left(|G|^{a_{\Phi_{1}}/2}\right)(z)\right)dV_{\alpha} (z)\] \[\lesssim\int\limits_{\mathbb{C}_{+}}\Phi_{a}\left(|G|^{a_{\Phi_{1 }}/2}\right)dV_{\alpha}(z)\lesssim 1.\]
\((iv)\Rightarrow(i)\): Let \(I\) be an interval of nonzero length and \(Q_{I}\) its Carleson square.
Fix \(z_{0}=x_{0}+iy_{0}\in\mathbb{C}_{+}\) and we assume that \(x_{0}\) is the center of \(I\) and \(|I|=2y_{0}\). Put
\[G_{z_{0}}(\omega)=\Phi_{1}^{-1}\left(\frac{1}{C_{\alpha}y_{0}^{2+\alpha}} \right)\frac{y_{0}^{(4+2\alpha)/\rho}}{(\omega-\overline{z_{0}})^{(4+2\alpha) /\rho}},\ \forall\ \omega\in\mathbb{C}_{+},\]
where \(\rho=1\) if \(\Phi\in\mathscr{U}\) and \(\rho=a_{\Phi}\) if \(\Phi\in\mathscr{L}\), and \(C_{\alpha}\) is the constant in the Relation (3.31). From the Proposition 3.25, we deduce that \(G_{z_{0}}\in A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+})\) and \(\|G_{z_{0}}\|_{A_{\alpha}^{\Phi_{1}}}^{lux}\leq 1\).
For \(\omega=u+iv\in Q_{I}\), we have
\[|\omega-\overline{z_{0}}|^{2}=|(u-x_{0})+i(v+y_{0})|^{2}\leq y_{0}^{2}+(2y_{0} +y_{0})^{2}=10y_{0}^{2}\Rightarrow\frac{1}{10}\leq\frac{y_{0}^{2}}{|\omega- \overline{z_{0}}|^{2}}.\]
Since the function \(t\mapsto\frac{\Phi_{1}^{-1}(t)}{t^{1/\rho}}\) is non-increasing on \(\mathbb{R}_{+}^{*}\), we have
\[\Phi_{1}^{-1}\left(\frac{1}{|I|^{2+\alpha}}\right)<\Phi_{1}^{-1}\left(\frac{1} {y_{0}^{2+\alpha}}\right)\leq(C_{\alpha})^{1/\rho}\Phi_{1}^{-1}\left(\frac{1} {C_{\alpha}y_{0}^{2+\alpha}}\right).\]
We deduce that
\[\Phi_{1}^{-1}\left(\frac{1}{|I|^{2+\alpha}}\right)<\left(\frac{C_{\alpha}}{10 }\right)^{1/\rho}\Phi_{1}^{-1}\left(\frac{1}{C_{\alpha}y_{0}^{2+\alpha}} \right)\frac{y_{0}^{(4+2\alpha)/\rho}}{|\omega-\overline{z_{0}}|^{(4+2\alpha) /\rho}}\leq\left(\frac{C_{\alpha}}{10}\right)^{1/\rho}\frac{|G_{z_{0}}(\omega )|}{\|G_{z_{0}}\|_{A_{\alpha}^{\Phi_{1}}}^{lux}}.\]
Taking
\[\lambda:=\left(\frac{10}{C_{\alpha}}\right)^{1/\rho}\Phi_{1}^{-1}\left(\frac{ 1}{|I|^{2+\alpha}}\right),\]
it follows that
\[|G_{z_{0}}(\omega)|>\lambda\|G_{z_{0}}\|_{A_{\alpha}^{\Phi_{1}}}^{lux},\ \forall\ \omega\in Q_{I}.\]
Therefore
\[Q_{I}\subset\left\{z\in\mathbb{C}_{+}:|G_{z_{0}}(z)|>\lambda\|G_{z_{0}}\|_{A_ {\alpha}^{\Phi_{1}}}^{lux}\right\}.\]
Since inequality (2.11) is satisfied, we have
\[\mu(Q_{I})\leq\mu\left(\left\{z\in\mathbb{C}_{+}:|G_{z_{0}}(z)|>\lambda\|G_{z_ {0}}\|_{A_{\alpha}^{\Phi_{1}}}^{lux}\right\}\right)\leq\frac{C_{1}}{\Phi_{2} (\lambda)}.\]
As
\[\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{2+\alpha}}\right)=\Phi_{2}\left( \left(\frac{C_{\alpha}}{10}\right)^{1/\rho}\lambda\right)\leq C_{2}\Phi_{2}( \lambda).\]
We deduce that
\[\mu(Q_{I})\leq\frac{C_{3}}{\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{|I|^{2+ \alpha}}\right)}.\]
Proof of Corollary 2.5.: The proof of Corollary 2.5 follows from Theorem 2.4 and Proposition 4.1 for \((s=2+\alpha)\).
The following result follows from the Lemma 3.23 and the Proposition 3.25. Therefore, the proof will not be written.
**Lemma 5.1**.: _Let \(\alpha,\beta>-1\), \(\Phi_{1},\Phi_{2}\in\mathscr{L}\cup\mathscr{U}\). There are constants \(C_{1}:=C_{\alpha,\Phi_{1},\Phi_{2}}>0\) and \(C:=C_{\alpha,\beta,\Phi_{1},\Phi_{2}}>0\) such that for all \(F\in\mathcal{M}\left(H^{\Phi_{1}}(\mathbb{C}_{+}),A_{\alpha}^{\Phi_{2}}( \mathbb{C}_{+})\right)\) and \(G\in\mathcal{M}\left(A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+}),A_{\beta}^{\Phi_{2} }(\mathbb{C}_{+})\right)\),_
\[|F(x+iy)|\leq C_{1}\frac{\Phi_{2}^{-1}\left(\frac{1}{y^{2+\alpha}}\right)}{\Phi _{1}^{-1}\left(\frac{1}{y}\right)},\ \forall\ x+iy\in\mathbb{C}_{+} \tag{5.2}\]
_and_
\[|G(x+iy)|\leq C_{2}\frac{\Phi_{2}^{-1}\left(\frac{1}{y^{2+\beta}}\right)}{\Phi _{1}^{-1}\left(\frac{1}{y^{2+\alpha}}\right)},\ \forall\ x+iy\in\mathbb{C}_{+}. \tag{5.3}\]
Proof of Theorem 2.6.: The inclusion \(\mathcal{M}(H^{\Phi_{1}}(\mathbb{C}_{+}),A_{\alpha}^{\Phi_{2}}(\mathbb{C}_{+}))\) in \(H_{\omega}^{\infty}(\mathbb{C}_{+})\) follows from Lemma 5.1. Conversely,
Fix \(0\not\equiv G\in H_{\omega}^{\infty}(\mathbb{C}_{+})\) and let \(z=x+iy\in\mathbb{C}_{+}\). Since \(\Phi_{2}\in\widetilde{\mathscr{L}}\cup\widetilde{\mathscr{U}}\), by Lemma 3.9, we have
\[\Phi_{2}(\omega(y))=\Phi_{2}\left(\frac{\Phi_{2}^{-1}\left(\frac{1}{y^{2+ \alpha}}\right)}{\Phi_{1}^{-1}\left(\frac{1}{y}\right)}\right)\lesssim\frac{ \Phi_{2}\left(\Phi_{2}^{-1}\left(\frac{1}{y^{2+\alpha}}\right)\right)}{\Phi_{ 2}\left(\Phi_{1}^{-1}\left(\frac{1}{y}\right)\right)}=\frac{1}{y^{2+\alpha} \Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{y}\right)}.\]
We deduce that
\[\Phi_{2}\left(\frac{|G(x+iy)|}{\|G\|_{H_{\omega}^{\infty}}}\right)\lesssim\Phi _{2}(\omega(y))\lesssim\frac{1}{y^{2+\alpha}\Phi_{2}\circ\Phi_{1}^{-1}\left( \frac{1}{y}\right)},\ \forall\ x+iy\in\mathbb{C}_{+}.\]
Put
\[d\mu(x+iy)=\frac{dxdy}{y^{2}\Phi_{2}\circ\Phi_{1}^{-1}(\frac{1}{y})},\ \forall\ x+iy\in\mathbb{C}_{+}.\]
Since \(\Phi_{2}\circ\Phi_{1}^{-1}\in\nabla_{2}\), from Proposition 4.2, we deduce that \(\mu\) is a measure \(\Phi_{2}\circ\Phi_{1}^{-1}-\)Carleson. Let \(0\not\equiv F\in H^{\Phi_{1}}(\mathbb{C}_{+})\). By the Theorem 2.2, we have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|G(x+iy)F(x+iy)|}{ \|G\|_{H_{\omega}^{\infty}}\|F\|_{H^{\Phi_{1}}}^{lux}}\right)dV_{\alpha}(x+iy) \lesssim\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|G(x+iy)|}{ \|G\|_{H_{\omega}^{\infty}}}\right)\Phi_{2}\left(\frac{|F(x+iy)|}{\|F\|_{H^{ \Phi_{1}}}^{lux}}\right)y^{\alpha}dxdy\] \[\lesssim\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|F(x+iy) |}{\|F\|_{H^{\Phi_{1}}}^{lux}}\right)d\mu(x+iy)\] \[\lesssim 1.\]
We deduce that \(G\in\mathcal{M}(H^{\Phi_{1}}(\mathbb{C}_{+}),A_{\alpha}^{\Phi_{2}}(\mathbb{C}_ {+}))\).
Proof of Theorem 2.7.: The inclusion \(\mathcal{M}(A_{\alpha}^{\Phi_{1}}(\mathbb{C}_{+}),A_{\beta}^{\Phi_{2}}(\mathbb{C}_ {+}))\) in \(H_{\omega}^{\infty}(\mathbb{C}_{+})\) follows from Lemma 5.1. Conversely,
Fix \(0\not\equiv G\in H_{\omega}^{\infty}(\mathbb{C}_{+})\) and let \(z=x+iy\in\mathbb{C}_{+}\). Since \(\Phi_{2}\in\widetilde{\mathscr{L}}\cup\widetilde{\mathscr{U}}\), by Lemma 3.9, we have
\[\Phi_{2}(\omega(y))=\Phi_{2}\left(\frac{\Phi_{2}^{-1}\left(\frac{1}{y^{2+\beta}} \right)}{\Phi_{1}^{-1}\left(\frac{1}{y^{2+\beta}}\right)}\right)\lesssim \frac{\Phi_{2}\left(\Phi_{2}^{-1}\left(\frac{1}{y^{2+\alpha}}\right)\right)}{ \Phi_{2}\left(\Phi_{1}^{-1}\left(\frac{1}{y^{2+\alpha}}\right)\right)}=\frac{1} {y^{2+\beta}\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1}{y^{2+\alpha}}\right)}.\]
We deduce that
\[\Phi_{2}\left(\frac{|G(x+iy)|}{\|G\|_{H_{\omega}^{\infty}}}\right)\lesssim\Phi_{ 2}(\omega(y))\lesssim\frac{1}{y^{2+\beta}\Phi_{2}\circ\Phi_{1}^{-1}\left(\frac{1} {y^{2+\alpha}}\right)},\ \forall\ x+iy\in\mathbb{C}_{+}.\]
\[d\mu(x+iy)=\frac{dxdy}{y^{2}\Phi_{2}\circ\Phi_{1}^{-1}(\frac{1}{y^{2+\alpha}})},\; \forall\;x+iy\in\mathbb{C}_{+}.\]
By Proposition 4.2, \(\mu\) is a \((\alpha,\Phi_{2}\circ\Phi_{1}^{-1})-\)Carleson measure. By the Theorem 2.4, we have
\[\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|G(x+iy)F(x+iy)|}{ \|G\|_{H^{\infty}_{\alpha}}\|F\|_{A^{\Phi_{1}}_{\alpha}}^{lux}}\right)dV_{ \beta}(x+iy) \lesssim\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|G(x+iy)|}{ \|G\|_{H^{\infty}_{\omega}}}\right)\Phi_{2}\left(\frac{|F(x+iy)|}{\|F\|_{A^{ \Phi_{1}}_{\alpha}}^{lux}}\right)y^{\beta}dxdy\] \[\lesssim\int\limits_{\mathbb{C}_{+}}\Phi_{2}\left(\frac{|F(x+iy) |}{\|F\|_{A^{\Phi_{1}}_{\alpha}}^{lux}}\right)d\mu(x+iy)\] \[\lesssim 1.\]
We deduce that \(G\in\mathcal{M}(A^{\Phi_{1}}_{\alpha}(\mathbb{C}_{+}),A^{\Phi_{2}}_{\beta}( \mathbb{C}_{+}))\).
|
2309.11461 | Digital twins of nonlinear dynamical systems: A perspective | Digital twins have attracted a great deal of recent attention from a wide
range of fields. A basic requirement for digital twins of nonlinear dynamical
systems is the ability to generate the system evolution and predict potentially
catastrophic emergent behaviors so as to providing early warnings. The digital
twin can then be used for system "health" monitoring in real time and for
predictive problem solving. In particular, if the digital twin forecasts a
possible system collapse in the future due to parameter drifting as caused by
environmental changes or perturbations, an optimal control strategy can be
devised and executed as early intervention to prevent the collapse. Two
approaches exist for constructing digital twins of nonlinear dynamical systems:
sparse optimization and machine learning. The basics of these two approaches
are described and their advantages and caveats are discussed. | Ying-Cheng Lai | 2023-09-20T16:57:11Z | http://arxiv.org/abs/2309.11461v1 | # Digital twins of nonlinear dynamical systems: A perspective
###### Abstract
Digital twins have attracted a great deal of recent attention from a wide range of fields. A basic requirement for digital twins of nonlinear dynamical systems is the ability to generate the system evolution and predict potentially catastrophic emergent behaviors so as to providing early warnings. The digital twin can then be used for system "health" monitoring in real time and for predictive problem solving. In particular, if the digital twin forecasts a possible system collapse in the future due to parameter drifting as caused by environmental changes or perturbations, an optimal control strategy can be devised and executed as early intervention to prevent the collapse. Two approaches exist for constructing digital twins of nonlinear dynamical systems: sparse optimization and machine learning. The basics of these two approaches are described and their advantages and caveats are discussed.
## 1 Introduction
In applications it is often the case that an accurate mathematical model of the underlying dynamical system is not available but time series measurements or observations of some key variables can be made. If the existing empirical data indicate that the underlying system has been functioning as designed or "healthy," how to anticipate any future potential collapse of the system, e.g., caused by slow drifting of a system parameter? Digital twins provide a viable solution. In particular, if a digital "copy" of the system can be faithfully constructed, then a computational bifurcation analysis with respect to variations in the parameter of interest can be performed to assess the possible future collapse of the system.
Recent years have witnessed a fast growing interest in building digital twins not only in many fields of science and engineering but also in industry, health care, and defense [1]. Historically, digital twins were first used for predicting the structural life of aircraft [2]. In dynamical systems, digital twins can be exploited for predicting the future states and anticipating emergent, potentially catastrophic behaviors [3]. In medicine and health care, for a certain type of disease, mechanistic knowledge, observational or diagnostic data, medical histories, and detailed physiological modeling can be combined to construct patient-specific digital twins [4; 5; 6]. Development of digital twins of the Earth for green transition is currently underway in Europe [7; 8].
The aim of this Perspective is to present an overview of the current approaches to digital twins for nonlinear dynamical systems. The need for digital twins can be appreciated through an illustrative example. As shown in Fig. 1, a dynamical system of interest generates two time series at two slightly different parameter values: one before a critical transition and another after. Before the transition, the system functions "normally" in the sense that the dynamical variable plotted has a finite mean value, in spite of the statistical fluctuations, as shown in the top panel. The variable can be, e.g., the population of a protected species in an ecosystem. After the transition, for an initial period of time, the variable exhibits statistically indistinguishable behaviors from that before the transition. However, in the long run the variable becomes zero, signifying, e.g., population extinction. If observations were made at any time before the variable begins to decrease systematically, any observation would suggest that the system is completely healthy and functional. Assume that a model of the system is not available and all information that can be obtained from the system are time series measurements. The question is, if at a time when all measurements or observations of the system give no indication of any "abnormal" behavior of the system, how can one tell that in one case the system will continue to be functional (the top panel in Fig. 1), but in another case, a catastrophic collapse will occur (the bottom panel in Fig. 1), based on measured time series only? This model-free prediction of system's future behavior is an extremely challenging problem in applied nonlinear dynamics. Digital twins provide a solution.
At the present, there are two main approaches to digital twins in nonlinear dynamical systems. One is based on reconstructing the system model by finding the
Figure 1: A challenging prediction problem that was previously deemed unsolvable in nonlinear dynamics. Shown are two time series from a chaotic system at two different parameter values, respectively. The system exhibits a crisis, a global bifurcation that destroys the chaotic attractor, at a critical parameter value \(p_{c}\). The parameter values corresponding to the time series in the top and bottom panels are before and after \(p_{c}\), respectively. In the observation time interval \([0,3000]\) (corresponding approximately to about 80 oscillation cycles of the dynamical variable), the two time series are statistically indistinguishable with approximately identical nonzero mean values (not extinction). Even when the observation time interval is twice as long (\([0,6000]\)), the two time series still cannot be distinguished. Only when the observation time extends to over 8000 (corresponding to about 250 cycles of oscillation - the red dashed vertical line) will the time series exhibit completely different behavior: one sustained (top) and another collapsed toward zero (bottom). Suppose the observation time is \(t=3000\) - the present time, so the only information available about the system is the two time series. How can the future behaviors of the two time series, i.e., one corresponding to sustained or healthy behavior while another to extinction, be predicted based on the time series that cannot be distinguished?
accurate equations governing the dynamical evolution from measurements. Crutchfield and McNamara [9] pioneered the problem of determining the system equations from measurements based on estimating the information contained in a sequence of observations to deduce an approximate set of equations of motion representing the deterministic portion of the system dynamics. Bollt proposed the idea of constructing a dynamical system "near" the original system with a desired invariant density by exploiting the Frobenius-Perron theorem [10]. Later, Yao and Bollt developed a least-squares approximation strategy to estimate the system model and parameters [11]. In the past decade or so, a leading approach to finding system equations [12; 13; 14; 15; 16; 17; 18; 19; 20] is based on sparse optimization such as compressive sensing [21; 22; 23; 24; 25; 26] in situations where these equations have a "sparse" structure 1. The basic idea is as follows. If the vector fields are smooth, they can be approximated by some series expansions such as power or Fourier series. The task then becomes that of estimating the various coefficients in the series expansion. If most of these coefficients are non-zero, the problem is not simplified as the total number of coefficients to be determined will be large. However, if the series expansion is sparse in the sense that the vast majority of the coefficients are zero, then well-developed sparse-optimization methods such as compressive sensing can be used to uniquely solve the few non-trivial coefficients even with a small amount of data [12; 13]. With those coefficients, the system equations described by the series expansions represent a "digital copy" of the original system.
Footnote 1: The idea of exploiting sparse optimization for discovering system equations was first published by the ASU group in 2011 [12; 13]. Five years later (in 2016), the same idea was republished and named as “SINDy” [S. L. Brunton, J. L. Proctor, and J. Nathan Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proc. Nat. Acad. Sci. **113**, 3932-3937 (2016)]. Approximately five months before this 2016 paper was published, at a Program Review meeting, Prof. Kutz was made aware of the ASU work earlier and was provided the references.
The second approach to digital twin is machine learning [27]. The basic idea is that a dynamical system functions to evolve the state vector forward in time according to a set of mathematical rules, so a digital twin must also be able to evolve the state vector forward in time even without any input. Reservoir computing [28; 29; 30] is a suitable choice because its intrinsic recurrent neural network can be trained to execute closed-loop, self dynamical evolution with memory. In recent years, there is a great deal of interest in reservoir computing for predicting chaotic systems [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. The advantage of the machine-learning approach to digital twins is its applicability to any systems, regardless of the underlying mathematical structure of the governing equations (e.g., sparse or dense in terms of some series expansion). The disadvantage is that the amount of data required for training can be quite demanding.
The sparse-optimization approach to digital twin through discovering system equations has been previously reviewed [53; 54]. The focus of this Perspective article is on the general principle of the more recent machine-learning approach.
## 2 Digital twins of nonlinear dynamical systems: adaptable machine learning
Dynamical systems in the real world are not only nonlinear but also complex. Even if an approximate model of the system can be found, the underlying nonlinearity is likely to cause sensitive dependence on initial conditions, parameter variations, stochastic fluctuations, and perturbations, rendering ineffective any model-based prediction method. To predict characteristic changes in the system in advance of their
occurrence thus must rely on data collected during its normal functioning phase, for which machine learning is viable and potentially powerful.
Most previous studies on reservoir computing focused on the behavior of the target dynamical system at a fixed parameter setting, i.e., once the machine has been trained through learning for certain parameter values, it is utilized to predict the state evolution of the system but at the same set of parameter values. A digital twin of the system, by its nature, must be able to faithfully generate the change in the system behavior as some parameter varies. A basic requirement of digital twin is that it must be able to generate the correct bifurcation behaviors of the original system. That is, the digital twin must not only capture the "dynamical climate" of the original system, but also accurately reflect how the climate changes with the bifurcation or control parameter. Adaptable machine learning [45; 49] was developed to meet this challenge, where the term "adaptable" was introduced to mean that a machine trained with time series data in one parameter regime is capable of generating the dynamical behaviors of the target system in another, distinct parameter regime. The former is referred to as the parameter regime of normal system functioning from which the training data are collected, while the latter is the prediction regime in which system collapse can occur.
The adaptable machine learning framework is schematically shown in Fig. 2. Its working principle can be explained, as follows. Let \(p\) be the bifurcation parameter of the target nonlinear system. As \(p\) varies, a critical point arises: \(p_{c}\), where the system functions normally for \(p<p_{c}\) and it exhibits a transient towards collapse for \(p>p_{c}\). Training of the digital twin is done based on the time series taken from a small number of parameter values in the normal regime, e.g., \(p_{1}<p_{2}<p_{3}<p_{c}\). For each parameter value, adequate training is required in the sense that the twin is able to predict correctly and accurately the oscillatory behavior at the same parameter value for a reasonable amount of time. Suppose that, currently, the system functioning is normal and it operates at the parameter value \(p_{0}<p_{c}\). In the prediction phase, suppose a parameter change \(\Delta p>0\) has occurred. The new parameter value \(p_{0}+\Delta p\) is then fed into the digital twin through the parameter channel. The prediction is deemed successful if the twin generates normal oscillations for \(p_{0}+\Delta p<p_{c}\) but exhibits a transient towards collapse for \(p_{0}+\Delta p>p_{c}\).
A recent work demonstrated that the machine-learning architecture of reservoir computing is effective as digital twins for a variety of nonlinear dynamical systems [27]. A reservoir computing machine consists of three main components: an input layer, a hidden layer with a high-dimensional and complex neural network (the reservoir network), and an output layer. The input layer maps the typically low-dimensional time series data into the high-dimensional state space of the reservoir network, and the output layer projects the high-dimensional dynamical evolution of the neural network state back into low-dimensional time series (readout). Training is administered to adjust the parameters associated with the projection matrix of the output layer to minimize the difference between the output and the true input time series. Because of the nature of the recurrent neural network, the input matrix and the reservoir network structure and link weights are chosen _a priori_ according to the values of a few hyperparameters (e.g., the network spectral radius) and are fixed during the training and prediction phases. As a result, highly efficient learning can be achieved. In terms of hardware realization, reservoir computing can be implemented using electronic, time-delay autonomous Boolean systems [31] or high-speed photonic devices [32].
There are two major types of reservoir computing systems: echo state networks (ESNs) [28] and liquid state machines [29]. The architecture of an ESN is one that is associated with supervised learning underlying RNNs. The basic principle of ESNs is to drive a large neural network of a random or complex topology--the reservoir
network--with the input signal. Each neuron in the network generates a nonlinear response signal. Linearly combining all the response signals with a set of trainable parameters yields the output signal. A schematic illustration of the proposed adaptable reservoir computing scheme is shown in Fig. 3, where the training and testing configurations are illustrated in Figs. 3(a) and 3(b), respectively. The machine consists of three components: (i) an input layer that maps the low-dimensional (\(M\)) input signal into a (high) \(N\)-dimensional signal through the weighted \(N\times M\) matrix \(\mathcal{W}_{in}\), (ii) the reservoir network of \(N\) neurons characterized by \(\mathcal{W}_{r}\), a weighted network matrix of dimension \(N\times N\), and (iii) an output layer that converts the \(N\)-dimensional signal from the reservoir network into an \(L\)-dimensional signal through the output weighted matrix \(\mathcal{W}_{out}\), where \(L\sim M\ll N\). The matrix \(\mathcal{W}_{r}\) defines the structure of the reservoir neural network in the hidden layer, where the dynamics of each node are described by an internal state and a nonlinear (e.g., hyperbolic tangent) activation function. For constructing a digital twin, it is necessary to set \(M=L\). As mentioned, the matrices \(\mathcal{W}_{in}\) and \(\mathcal{W}_{r}\) are generated randomly prior to training, whereas all elements of \(\mathcal{W}_{out}\) are to be determined through training.
Figure 2: Training scheme of adaptable machine learning. The target system of interest has two characteristically distinct operational regimes: normal/oscillatory and collapse regimes which are separated by a critical transition point \(p_{c}\), where \(p\) is a bifurcation parameter. As \(p\) increases through \(p_{c}\), the system transitions from the normal to the collapse regime. Suppose the parameter drifts slowly with time, and let \(p_{0}\) be its value at the present time. The parameter values \(p_{1}\), \(p_{2}\), and \(p_{3}\), as indicated by the three vertical blue dashed lines, thus occur in the past, from which observational data or time series have been obtained. Training of the neural machine is done using these time series in the normal or pre-transition regime. The future behavior of the system can be predicted by adding a parameter variation \(\Delta p\) (corresponding to a specific time in the future) to \(p_{0}\) and observing the dynamical state of the machine under the parameter value \(p_{0}+\Delta p\). For \(p_{0}+\Delta p<p_{c}\), a well trained machine shall predict that the system will still be in the normal functional regime. For \(p_{0}+\Delta p>p_{c}\), the machine would generate dynamical evolution that is indicative of system collapse.
Consider the setting where the system and environmental variations are characterized by the changes in a single parameter - the "bifurcation parameter." The idea is to designate an additional input channel to feed the parameter value into each and every artificial neuron in the hidden-layer network, as shown in Fig. 3, which makes the reservoir computing machine "cognizant" of the parameter variations. The basic considerations are as follows. To predict critical transitions and system collapse, a requirement is that the time series data must be obtained while the system is still in normal operation, and it is necessary to collect data from multiple values of the bifurcation parameter in the normal phase. Because the training data come from several distinct bifurcation parameter values, it is necessary that the machine "know" the parameter values at which the data are taken, which can be accomplished by "injecting" the parameter value to all nodes of the recurrent dynamical neural network in the hidden layer.
Figure 3: Basic structure of adaptable reservoir computing. (a) Training phase. Time series data provide the input to the machine. The input matrix \(\mathcal{W}_{in}\) maps the \(M\)-dimensional input data to a vector of much higher dimension \(N\), where \(N\gg M\), and the matrix \(\mathcal{W}_{p}\) feeds the bifurcation parameter value into each and every neuron in the hidden layer as denoted by the dashed circle. The complex neural network of \(N\) interconnected neurons in the hidden layer is characterized by the \(N\times N\) weighted matrix \(\mathcal{W}_{r}\). The dynamical state of the \(i^{th}\) neuron in the reservoir is \(r_{i}\), for \(i=1,\ldots,N\), constituting the state vector \(\mathbf{r}(t)\). The output matrix \(\mathcal{W}_{our}\) converts the \(N\)-dimensional state vector of the reservoir network into an \(L\)-dimensional output vector, where \(N\gg L\). For constructing a digital twin, it is necessary to set \(M=L\). During the training phase, the vector \(\mathbf{u}(t)\) is the input data, so the system is in open-loop operation. (b) In the prediction phase, the external input is cut off and the output vector \(\mathbf{v}(t)\) is directly fed back as the input to the reservoir, generating a closed-loop, self-evolving dynamical system.
## 3 Examples of digital twins of nonlinear dynamical systems
### Systems for which sparse optimization methods fail
Recall that the basic requirement of any sparse optimization technique for finding the system equations is _sparsity_: when the system equations are expanded into a power series or a Fourier series, it must be that only a few terms are present so that the coefficient vectors to be determined from data are sparse [12; 53]. However, there are physical and biological systems that violate this sparsity requirement. An example is the two-dimensional Ikeda map describing the dynamics of a laser pulse propagating in a nonlinear cavity [55; 56; 57]:
\[z_{n+1}=\mu+\gamma z_{n}\exp{\left(i\kappa-\frac{i\nu}{1+|z_{n}|^{2}}\right)}, \tag{1}\]
where the dynamical variables \(x\) and \(y\) are the real and imaginary parts of the complex variable \(z\), \(\mu\) is the dimensionless laser input amplitude (a convenient bifurcation parameter), \(\gamma\) is the reflection coefficient of the partially reflecting mirrors of the cavity, \(\kappa\) is the cavity detuning parameter, and \(\nu\) characterizes the detuning contributed by the nonlinear medium in the cavity. If the map functions are expanded into a power series or a Fourier series, an infinite number of terms will be present. In fact, for the Ikeda map it remains infeasible to find a suitable mathematical base to expand the map functions into a sparse series, rendering inapplicable the sparse optimization method for constructing a digital twin. It was demonstrated [49] that adaptable reservoir computing provides an effective approach to creating a digital twin of the Ikeda map, which can be used to predict bifurcation behaviors and critical transitions of the optical-cavity system.
Another example is a three-species ecosystem described by [58]
\[\frac{dR}{dt} = R(1-\frac{R}{K})-\frac{x_{c}y_{c}CR}{R+R_{0}},\] \[\frac{dC}{dt} = x_{c}C[\frac{y_{c}R}{R+R_{0}}-1]-\frac{x_{p}y_{p}PC}{C+C_{0}}, \tag{2}\] \[\frac{dP}{dt} = x_{p}P(\frac{y_{p}C}{C+C_{0}}-1),\]
where the dynamical variables \(R\), \(C\), \(P\) are the population densities of the three species: resource, consumer, and predator, respectively, and the system parameters are \(K\) (the carrying capacity), \(x_{c}\), \(y_{c}\), \(x_{p}\), \(y_{p}\), \(R_{0}\), and \(C_{0}\). For a wide range of the parameter values, the system exhibits a critical transition to species extinction. A power-series expansion of the vector field on the right side of Eq. (2) contains an infinite number terms, rendering inapplicable any sparse optimization method. It was demonstrated [45] that adaptable reservoir computing can be used to construct a digital twin of the ecosystem to predict the critical transition and the dynamical behaviors about the transition.
### Predicting amplitude death
In nonlinear dynamical systems, it can happen that, when a bifurcation parameter of the system changes through a critical point, the oscillatory behaviors of the state variables halt suddenly and completely - a phenomenon called amplitude death [59; 60]. From the point of view of bifurcation, amplitude death is caused by a sudden transition of the system from an oscillatory state to a steady state. If the normal function
of the system relies on oscillations, then this phenomenon will be undesired and it is important to be able to predict amplitude death before its actual occurrence. For example, in biological systems, normal conditions are often associated with oscillations, and amplitude death marks the onset of pathological conditions. To anticipate amplitude death in advance of its occurrence based on oscillatory time series collected during normal functioning is important. It was demonstrated that adaptable reservoir computing as a digital twin of the system of interest can be effective for this prediction task [61].
### Predicting onset of synchronization
In complex dynamical systems consisting of a number of coupling elements, synchronization is coherent motion among the elements. Depending on the specific form of the coherent motion, different types of synchronization can emerge, including complete chaotic synchronization [62], phase synchronization [63], and generalized synchronization [64]. The occurrence of synchronization has significant consequences for the system behavior and functions. An example is the occurrence of epileptic seizures in the brain neural system, where a widely adopted assumption is that hypersynchrony is closely associated with the occurrence of epileptic seizures [65], during which the number of independent degrees of freedom of the underlying brain dynamical system is reduced. In the extensive literature in this field, there was demonstration that partial and transient phase synchrony can be exploited to detect and characterize (but not to predict) seizure from multichannel brain data [66; 67; 68]. Reliable seizure prediction remains a challenge. In general, it is of interest to predict or anticipate synchronization before its actual occurrence based on time series data obtained before the system evolves into some kind of synchronous dynamical state. In particular, given that the system operates in a parameter regime where there is no synchronization, would it be possible to predict, without relying on any model, the onset of synchronization based solely on the dynamically incoherent time series measurements taken from the parameter regime of desynchronization? A digital twin of the original system represents a viable solution.
It was demonstrated [48] that adaptable reservoir computing can be used to construct a digital twin for predicting synchronization. In particular, the digital twin can predict, with a given amount of parameter change, whether the system would remain asynchronous or exhibit synchronous dynamics. Systems tested include representative chaotic and network systems that exhibit continuous (second-order) or abrupt (first-order) transitions. Of special interest are network dynamical systems exhibiting an explosive (first-order) transition and a hysteresis loop, and it was shown [48] that the digital twin possesses the power to accurately predict these features including the precise locations of the transition points associated with the forward and backward transition paths.
## 4 Discussion and outlook
There exist two approaches to digital twins in nonlinear dynamical systems: sparse optimization and machine learning, where the former relies on finding the exact governing equations of the system and its applicability is thus limited. This Perspective explains the difficulty with the sparse-optimization approach and focuses on the machine-learning approach. An issue concerns the type of machine-learning scheme that can be exploited for constructing digital twins for nonlinear dynamical systems.
Since a dynamical system evolves its state forward in time according to a set of mathematical rules, its digital twin must be able to evolve forward in time by itself. In this regard, reservoir computing is capable of closed-loop, self dynamical evolution with memory, so it provides a base for developing digital twins of nonlinear dynamical systems.
An important contribution to explainable machine learning as applied to nonlinear dynamical system is the mathematical understanding of the inner workings of reservoir computing by Bollt [50], leading to the development of "next-generation reservoir computing" [51]. A foundational problem underlying the development of a physical understanding of the workings of reservoir-computing based digital twin is searching for scaling laws between the complexities of a chaotic system and its digital twin. In particular, in order for the digital twin to predict the state evolution of the target system, the complexity of the former must "overpower" that of the latter. What is the meaning of "overpowering" and how can it be characterized? Are there scaling laws quantifying the relationship? Answers to these questions will provide a deeper understanding of the inner workings of reservoir-computing based digital twin.
For a chaotic system, its state evolution is determined by the trajectory movement on a dynamically invariant set, e.g., a chaotic attractor. The complexity of the chaotic system can be faithfully characterized by the information dimension of the chaotic invariant set [69; 70]. Likewise, the complexity of the digital twin is determined by its "inner" dynamical system, which is typically a complex dynamical network in the hidden layer of the reservoir computer. For a complex network, in general its complexity increases with its size. As the information dimension of the target chaotic system increases, the size of the reservoir network must increase accordingly to warrant its predictive power over the former. A universal scaling law between the network size required for accurate prediction and the information dimension of the chaotic system, if it indeed exists, would represent a meaningful way to characterize the digital twin's overpowering the target chaotic system.
## Data Availability Statement
No Data associated in the manuscript.
## Acknowledgment
I thank L.-W. Kong for discussions and for assisting with Fig. 1. This work was supported by the Army Research Office through Grant No. W911NF-21-2-0055.
|
2309.07203 | On the Impossibility of Precise Verification of Models of Quantum
Gravity | We argue that no theoretical model of quantum gravity in a causal diamond
whose boundary has finite maximal area, can be verified with arbitrary
precision by experiments done in that diamond. This shows in particular that if
our own universe remains in an asymptotically future de Sitter state for a time
long enough for our local group of galaxies to collapse into a black hole, then
no information processing system with which we can communicate could ever
distinguish between many competing models of the AsdS universe. This article is
written in an attempt to be accessible to a wide audience, so certain
elementary facts about quantum mechanics are reviewed, briefly. | T. Banks | 2023-09-13T17:29:03Z | http://arxiv.org/abs/2309.07203v1 | # On the Impossibility of Precise Verification of Models of Quantum Gravity
###### Abstract
We argue that no theoretical model of quantum gravity in a causal diamond whose boundary has finite maximal area, can be verified with arbitrary precision by experiments done in that diamond. This shows in particular that if our own universe remains in an asymptotically future de Sitter state for a time long enough for our local group of galaxies to collapse into a black hole, then no information processing system with which we can communicate could ever distinguish between many competing models of the AsdS universe. This article is written in an attempt to be accessible to a wide audience, so certain elementary facts about quantum mechanics are reviewed, briefly.
RUNHETC-2023-39
## 1 Classical Dreams and Quantum Measurements
From the time that Newton and Leibniz invented calculus, the implicit goal of theoretical physics has been to construct a model that could, in principle, make infinitely precise predictions about the future state of the universe, given infinitely precise knowledge of its present state. This was stated most succinctly in a famous sentence of Laplace
_Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective positions of the beings which compose it, if moreover this intelligence were vast enough to submit these data to analysis, it would embrace in the same formula both the movements of the largest bodies in the universe and those of the lightest atom; to it nothing would be uncertain, and the future as the past would be present to its eye._
Laplace was of course also one of the creators of the theory of probability, for he recognized the impossibility of actually knowing everything about the initial state of the universe with sufficient precision to make accurate prediction possible. The utility of the theory of probability rests on an assumption, whose mathematical statement is that the probability of a system going from state A to state B in time \(t\) is the sum of the probabilities of all possible histories by which the system could have gotten between A and B in time \(t\). It's this rule that allows the weatherperson to make more accurate predictions about the future track of a hurricane, after they know whether it has hit New Orleans or Galveston on a particular day. Their equations predicted similar probabilities for both events.
Quantum mechanics throws a wrench into this scheme for precision prediction limited only by the precision of one's knowledge of the present. QM does not obey the sum over histories rule for probabilities. This rule is so embedded in our ordinary experience that we consider it part of "logic" and all of the confusion about the foundations of quantum mechanics has to do with the fact that it violates the sum over histories rule.
It was understood intuitively by Bohr and Heisenberg, and on a much more technical level by at least _some_ quantum physicists since the 1970s, that the essence of "quantum measurement theory" was the fact that certain quantum systems have a large variety of _collective variables_\(C_{i}\). These variables have two interesting properties. They're defined as averages of local variables over volumes that are "large in microscopic units". The quantum statistical uncertainties in these variables are of order the inverse square root of the large volume. Even more important, the violation of the sum over histories rule for the probabilities of these variables is _exponentially small as a function of the large volume_. To get a feeling for what we mean by large volume, if we talk about a cube that's one tenth of a centimeter on each side, then the quantum uncertainties are of order.0000000001 and the violations of the sum over histories rule are of order \(10^{-100000000000000000000000}\). We've had to use exponential notation for the last number because if we wrote it out in decimal form on \(8\times 10\) sheets of paper in a normal font it would take a stack of pages from here to the planet Saturn to fit it in.
Such collective variables appear very naturally in quantum systems that are composed of lots of individual variables at independent points of space. It's convenient to think of space as a very fine grid of points with independent variables at each one. We'll return to the question of whether this is a good model of what space is really like, but since the middle of the 19th century, physics has been based on models like this, which are called _field theories_. Field theories naturally have lots of collective variables defined as averages of fields over many points. Quantum mechanical field theories are the basis of the standard model of particle physics, which accounts for all known experimental data within the accuracy of theoretical computation and experimental precision1
Footnote 1: We’re assuming that we’ve added terms to the standard model to account for neutrino masses and possibly other terms to account for recent discrepancies between theoretical and experimental values of the magnetic moment of the muon. These terms fall within the well understood formalism of quantum field theory.
In the 19th century, there were three field theory models know to theoretical physicists: Maxwell's theory of electromagnetism, Newton's theory of gravitation, and the theory of hydrodynamics. Hydrodynamics had many fathers and should really be seen as encompassing the motion not only of liquids, but the theory of elasticity in solids and the propagation of sound. It turned out that Maxwell's theory was a fundamental quantum theory, while hydrodynamics was a very universal phenomenological theory describing the propagation of long wavelength
disturbances in any kind of matter. In most circumstances, the quantum behavior of the matter was not properly described by applying the rules of quantum mechanics to the equations of hydrodynamics2.
Footnote 2: This _is_ a proper way to treat the very low energy excitations of the ground states of many quantum systems.
The complete field theory model of gravitation, General Relativity (GR), was discovered in 1916 by Albert Einstein and it introduced an entirely new feature into the story. In all previous theories, the geometry of space was that of Euclid. Even Einstein's revolutionary Special Theory of Relativity (1905) did not change that. It only proposed that the description of spatial geometry used by systems in relative motion differed by a scale factor. GR says that the spatial geometry is generally non-Euclidean (_i.e._ curved), responds to the matter embedded in it, and changes with time!
One way of describing the geometry of space in GR, which uses Einstein's principle that nothing can travel faster than the speed of light, \(c\), is to imagine some sort of information gathering system traveling through it, in such a way that at each time its velocity is less than that of light. The system has a clock on it, which measures what we call its _proper time_. In any given interval of proper time light can only have traveled out as far as some maximally distant surface, and we record the area of that surface for each interval of time. Do this for all possible intervals of time and all possible information gathering systems, and you've completely determined the geometry of space for all time.
The strange dynamical geometry of space shows up in the simplest possible non-trivial solution of Einstein's gravitational field equations: the analog of the Newtonian gravitational field of a point mass. This solution was first found by Schwarzschild a few years after Einstein published his field equations, but was not properly understood until the 1960s. Newton's gravitational constant \(G_{N}\) and the mass \(M\) of an object define a length scale, \(G_{N}M\). \(2G_{N}M=R_{S}\) is now called the Schwarzschild radius of an object of mass \(M\). Schwarzschild found that outside the Schwarzschild radius one could choose coordinates for space and time such that the spatial geometry is static and gives rise to the Newtonian potential at large distances from the center. Inside the Schwarzschild radius the spatial geometry is rapidly time dependent. An invariant way to characterize what is going on is again to look at an arbitrary information gathering system that falls through that radius. It cannot send a light signal out to a different system that remains outside the Schwarzschild radius. However one defines the "space inside" it is expanding away from the Schwarzschild radius "faster than the speed of light". Secondly, two different information gathering devices thrown in to the Schwarzschild radius at the same time but at different angles, can meet only if they do so in a time less than \(R_{S}/c\). Another way to say this is that as the clock on any of those interior systems ticks away, the area of the surface that it can explore by sending out light rays and getting back their reflection, shrinks to zero in a time about \(R_{S}/c\).
If we have a star of mass \(M\) and radius \(R\gg R_{S}\) then the interior Schwarzschild region is buried inside the matter of the star and the simple Schwarzschild solution does not apply. However, work beginning with that of Tolman and Oppenheimer and Volkoff and culminating in a tour de force paper by Chandrasekhar, showed that any sufficiently massive star would have a similar "black hole region", which would swallow up the whole star. In general the star has angular momentum and it could have non-zero charge, so one needs a more general solution of Einstein's equations than Schwarzschild's, but those solutions have similar properties.
Quantum Theories of Gravitation
Einstein's GR taught us that, in contrast to Maxwell's theory of electromagnetism, which was viewed as a model of waves moving a fixed space-time, the theory of gravitation made space-time dynamical. This disparity was removed by Kaluza and Klein, who showed that electromagnetism could be the consequence of dynamical geometry in \(4+1\) space-time dimensions, if the fourth spatial dimension was a circle whose radius always remained very small in normal units. Modern string theory models have shown that in principle the entire structure of the standard model of particle physics plus Einstein's GR, could be a consequence of dynamical geometry in \(10+1\) dimensions. No model that precisely fits the standard model has yet been found in string theory, but the list of possible models is far from complete and the existing list is vast and contains many examples that come very close to reality. This makes the question of how to "quantize" GR the central question of high energy theoretical physics.
As a first step in thinking about this question we should talk about units in physics. If you've ever taken an elementary physics class you've been bewildered by all the names of units for different physical quantities, energy,mass,temperature,electric field, magnetic field, length, time, and so on. This confusion is historical and reflects our initial ignorance about how different things were connected together. With the advent of Einstein's theories of relativity, everything was reduced to a single unknown unit, a unit of length. When Max Planck introduced his famous formula for the spectrum of radiation from a hot oven, which signalled the discovery of quantum mechanics, he considered that one of the most important aspects of it was the introduction of his fundamental constant \(\hbar\), which gives a minimal energy for a given frequency of light, because together with Newton's gravitational constant and the speed of light it defines a fundamental unit of length. Newton's constant, in "natural units",where \(\hbar=c=1\), has the dimensions of an area \(G_{N}=L_{P}^{2}=10^{-66}\) cm\({}^{2}\).
Now we can ask the fundamental questions: _What are space and time?_. As emphasized by J.L. Borges in a paradoxically entitled essay[1] one cannot say a sentence in any human (or computer) language without implicitly referring to the passage of (proper) time. Einstein taught us that time is relative. Different information gathering systems may have different measurements of how time passes when viewing the same set of information. But for any given system, proper time is a primitive concept without which we (or the system) can't express a thought.
Space, on the other hand, might be replaced by a more primitive concept, namely information. J.A. Wheeler invented a clever motto for this idea "It from Bit", which has been updated to "It from q-bit". A _bit_ is the smallest amount of information one can think about, the answer to a Yes/No question, and a _q-bit_ is the quantum version of a bit. Alan Turing, the genius who broke the German Enigma code, realized that any finite set of information could be encoded in some number of bits. For example, 2 bits have 4 possible states, but if we make the rule that we don't allow the state where both answers are Yes, then there are only 3. In a similar way, any finite set of possible answers can be thought of as answers to a bunch of independent Yes/No questions, with _a priori_ constraints that certain combinations of Yes answers are not allowed.
To get an idea of what a q-bit is draw a picture where an arrow of length 1 pointing up represents Yes, and an arrow pointing to the right represents No.
You can think of the arrow as being the lever on a valve in a water pipe, with the up direction being the direction that lets the water flow and the sideways direction the one that
blocks it. Now draw a picture with the two arrows rotated by an angle \(a\). If you remember your high school trigonometry, the projection of the new Yes direction on the old one is \(\cos a\) and on the old No direction is \(\sin a\) and
\[\sin^{2}a+\cos^{2}a=1. \tag{1}\]
In QM this is interpreted as the existence of a new state of the system, in which the probability of the answer to the original question being Yes is \(\cos^{2}a\). A q-bit is just the statement that we allow all of these new states with arbitrary angles, so that not every question has a definite answer. It's actually a little more complicated than that: we have to introduce complex numbers, but it would take us too far afield to explain that. Since you're reading this archive, I'm going to assume that you know enough about QM to go on and that further explanation would just bore you. Basically a q-bit is just a bit that can be looked at in many ways that are mutually incompatible. When one version of the q-bit's question is answered with absolute certainty, then the system is in a state where one can only make probabilistic predictions about what the result of a "measurement" of any of the different versions of the question will be. In order to make those measurements and verify the probabilistic predictions we have to make repeated correlations of the q-bit system with a collective variable of some much larger quantum system and record the frequency with which we get Yes and No answers.
In the real world, q-bits or quantum information are carried by physical systems. For example, the spin of an electron can be a q-bit. So it makes sense to ask whether there's a
Figure 1: Mnemonic for a q-bit. Different orientations of the axes represent different measurements that can be done on the q-bit, with the absolute squares of the projections of one set of axes on the others giving the probabilities that one set of measurements will have particular answers when the answers to the other set are definite.
maximum number of q-bits that fits into a certain region of space. This is a modern version of the medieval question of how many angels can dance on the head of a pin. We can actually turn the question around though and _define space_ by the number of q-bits that fit into it. We do this using the description of GR that we mentioned at the end of the previous section. For every information gathering system in a space-time and every proper time interval \(T\) on its clock, we have a pure number \(A(T)/4G_{N}\) where \(A(T)\) is the area of the maximally distant surface that the system could have explored by bouncing light beams off of it. We call the region of space-time explorable by the system the _causal diamond of the experiment_. For more or less flat space like that near us, the area of the causal diamond is about \(4\pi T^{2}\). This pure number should be related to the number of q-bits accessible to the system. More q-bits, more area. Jacob Bekenstein first conjectured a formula like this based on the properties of black holes and the laws of thermodynamics.
Stephen Hawking had shown that the total area of black hole horizons in the universe always grew with time, just like the quantity called "entropy" in thermodynamics. Entropy had been shown by Boltzmann in the 19th century to be the amount of information hidden in complex systems, which led to apparent violations of the law of conservation of energy by processes like friction. The energy doesn't disappear, but goes into complex motions of the microscopic constituents of the systems, which we perceive only as "heat". More modern investigations have revealed that entropy counts the logarithm of number of accessible quantum states of the microscopic constituents in a given set of macroscopic conditions. Hawking, who initially dismissed Bekenstein's conjecture, showed that black holes had a temperature, and computed the coefficient in the entropy formula. He found that the entropy was exactly \(A/4G_{N}\). In 1995, Jacobson showed3 that the _Covariant Entropy Principle_ (CEP)implied all of Einstein's equations except the so-called cosmological constant term. The CEP states that the Bekenstein-Hawking relation between area and entropy holds for _every_ causal diamond in every space-time, not just the causal diamonds of systems that have fallen inside a black hole horizon.
Footnote 3: Because of certain mis-statements in current literature, I feel compelled to insert a fairly lengthy footnote here. The area law for entanglement entropy of a causal diamond was first written down by Sorkin in 1983[2] and rediscovered by Srednicki[3] and Callan and Wilczek[4]. This led Susskind and Uglum[5] and Jacobson[6] to make the conjecture that this was somehow related to the renormalization of Newton’s constant in the Bekenstein-Hawking area law for black holes. No one commented on the revolutionary leap being made, since there were no black holes in sight, but Jacobson surely understood because he soon showed that the hydrodynamics of this law was equivalent to Einstein’s equations doubly projected on arbitrary null vectors[7]. Jacobson’s paper was written in terms of small changes in the size of a causal diamond, so he never made the explicit Covariant Entropy Conjecture. That was first made for cosmological space-times by Fischler and Susskind in 1998 and for general space-times by Bousso in 1999. This led Fischler and myself[10], independently, to postulate that the density matrix of empty dS space was the unit matrix on a finite dimensional Hilbert space, with dimension determined by the Gibbons Hawking entropy formula. When we later extended this hypothesis to the general CEP, Bousso pointed out that he had made the same conjecture in one of his big reviews on the Holographic Principle in 1999. The most important consequence of this observation, that localized states in dS reduce the entropy, giving an explanation of the dS temperature, was something Fischler and I recognized immediately, but which did not get put into print until B. Fiol showed me how to make an explicit quantum mechanical model of the effect in 2006[11].
The CEP is in tension with the mathematics of quantum field theory. The standard model of particle physics, or any other quantum field theory, would tell us that the logarithm of the number of quantum states that could fit into the causal diamond of an experiment done over proper time \(T\), scales like \((T/L)^{3}\), where \(L\) is the shortest wavelength we allow in the fields. It's plausible that \(L\) is about \(L_{P}\). On the other hand, most of those states have very high energy
and high energy creates strong gravitational fields, which means black holes. If we throw away states that would have created black holes with area larger than about \((T/L_{P})^{2}\), then the log of the number of states is cut down to \((T/L_{P})^{3/2}\), _which is much less than the entropy implied by the CEP_.
In 1998, Cohen, Kaplan and Nelson[12], showed that one can omit all of the states that would have created large black holes from quantum field theory, without having any detectible effect on the most precisely known agreement between quantum field theory and experiment. So it's extremely likely that, whatever the theory of quantum gravity in the region accessible to the information gathering device is, only a tiny fraction of its quantum states are described by quantum field theory. The rest are black holes.
A fundamental insight into the nature of black hole quantum mechanics appeared in several publications by Lindesay, Susskind, Hayden, Preskill and Sekino[13]. Susskind and Sekino gave the phenomenon the name of _fast scrambling of quantum information_. It basically has to do with the fact that perturbations of a black hole disappear exponentially rapidly, leaving over only the macroscopic information about the hole's charge, mass and angular momentum. Hydrodynamic flows on the black hole horizon are incompressible, which means that there is no propagation of information. This means that although black holes, like quantum field theories, have many quantum states, they are not good information processing or storage devices. Averages of quantities over part of the black hole horizon will, almost all of the time, just be fractions of the charge, mass, and angular momentum of the black hole. The system does not have a complex set of collective variables that can measure the properties of a microscopic quantum system.
The final piece of our story is the discovery of what is called _the accelerated expansion of the universe_. The simplest way to explain this is to add a positive _cosmological constant_ to Einstein's equations. Recall that the value of the cosmological constant was the one term that couldn't be determined from Jacobson's demonstration that the equations followed from local variations of the BH area law for general causal diamonds. This is because the cosmological constant controls the relation between the limits of large proper time and large area. It is not a local energy density. When it is positive, the area remains finite as proper time goes to infinity, while if it's negative the opposite is true. If it's exactly zero then they go to infinity together, with \(A\sim T^{2}\). Our universe appears to be approaching a so called de Sitter universe with a maximal radius \(R\) about \(10^{61}L_{P}\).
We've now come to the fundamental conundrum of a theory of quantum gravity in a finite de Sitter universe, with radius \(R\). No matter how long an information gathering system exists, the total amount of information accessible to it is finite, but the total number of _useful_ q-bits in which it can store and process that information is smaller by a factor \((R/L_{P})^{-1/2}\sim 10^{-30}\) than that accessible information. The number of semi-classical collective variables which can actually make reliable records of that information is smaller still. Thus, there is a limit, _in principle_ to the accuracy with which the information processing system can check any particular mathematical model of the entire system. _A fortiori_ a model based on infinite dimensional algebras is uncheckable because this requires an infinite number of measurements.
There have been many suggestions in the literature that de Sitter space is unstable and claims that stable de Sitter space poses paradoxes because of the recurrences that occur in finite systems. As long as the instabilities take place on a long enough time scale (and changes in the system sufficient to avoid recurrence paradoxes certainly take place on a long enough time scale) they do not change the conclusions of this article[14]. The aim of theoretical
physics is to make predictions about potential observations. If the universe continues its present evolution for about 100 times its current age, our local group of galaxies will become causally disconnected from the rest of the universe. Some time after that, the local group will collapse into a black hole, and theoretically possible measuring devices in our causal diamond will have ceased to exist. Unless there is a drastic change in the evolution of the universe before that time, quantum gravitational theorists will have to content themselves with imprecise, finite theories. It would be a good idea to concentrate on things that can actually be compared to experiment/observation. In[15], Fischler and I suggested that finite time analogs of scattering amplitudes would be the correct observables for an asymptotically dS universe. We did not appreciate at the time the extent to which these failed to exhaust the available quantum states in a dS universe. These observables will, according to the arguments presented here, be adequately explained by a quantum mechanical model with a finite number of q-bits, whose details can never be precisely verified. Laplacian "dreams of a final theory" were always meant to be a goal that could only be reached asymptotically. The dual constraints of quantum mechanics and black hole formation imply that even that asymptotic goal is out of reach. An information gathering system that exists for a finite proper time cannot, _in principle_ perform a precise experimental check of a quantum theory of all the quantum states with which it is in causal contact. An IGS in a future asymptotically dS universe cannot perform such a check even if the IGS persists forever.
To conclude, for aficionadas of string theory, we should explain how what we have said is consistent with the existence of precise formulae for the quantum gravitational S matrix in perturbative string theory in asymptotically flat space, and non-perturbative formulae in AdS space. If we think of asymptotically flat space as the limit of dS space, then it is clear that the horizon states have to be thought of as converging to states of arbitrarily soft massless particles. This leads one to contemplate a formulation of scattering theory in which states with non-zero momentum are defined in terms of constraints that set zero momentum operators to zero in certain regions on the sphere at null infinity[16]. The Hilbert space of the theory is infinitely larger than what is captured by the S matrix, but one hopes that the infinitely soft sector decouples, at least from inclusive cross sections with a total missing energy cutoff. Above four dimensions this problem may not arise until one attempts to go beyond perturbation theory. Models in AdS space which are derived as decoupling limits of brane configurations in asymptotically flat space can be explained in a similar fashion, though here the decoupling of soft physics is much more transparent.
|
2306.17695 | A New Task and Dataset on Detecting Attacks on Human Rights Defenders | The ability to conduct retrospective analyses of attacks on human rights
defenders over time and by location is important for humanitarian organizations
to better understand historical or ongoing human rights violations and thus
better manage the global impact of such events. We hypothesize that NLP can
support such efforts by quickly processing large collections of news articles
to detect and summarize the characteristics of attacks on human rights
defenders. To that end, we propose a new dataset for detecting Attacks on Human
Rights Defenders (HRDsAttack) consisting of crowdsourced annotations on 500
online news articles. The annotations include fine-grained information about
the type and location of the attacks, as well as information about the
victim(s). We demonstrate the usefulness of the dataset by using it to train
and evaluate baseline models on several sub-tasks to predict the annotated
characteristics. | Shihao Ran, Di Lu, Joel Tetreault, Aoife Cahill, Alejandro Jaimes | 2023-06-30T14:20:06Z | http://arxiv.org/abs/2306.17695v1 | # A New Task and Dataset on Detecting Attacks on Human Rights Defenders
###### Abstract
The ability to conduct retrospective analyses of attacks on human rights defenders over time and by location is important for humanitarian organizations to better understand historical or ongoing human rights violations and thus better manage the global impact of such events. We hypothesize that NLP can support such efforts by quickly processing large collections of news articles to detect and summarize the characteristics of attacks on human rights defenders. To that end, we propose a new dataset for detecting **Attacks** on **H**uman **R**ights **D**efenders (HRDsAttack) consisting of crowdsourced annotations on 500 online news articles. The annotations include fine-grained information about the type and location of the attacks, as well as information about the victim(s). We demonstrate the usefulness of the dataset by using it to train and evaluate baseline models on several sub-tasks to predict the annotated characteristics.
## 1 Introduction
It is essential for human rights organizations to track, analyze and summarize attacks on human rights defenders over time and across locations for better personnel protection and situational analysis. To do so, multiple event attributes denoting different aspects of the attacking event need to be extracted from textual sources. However, this would be a time-consuming process if done manually. Figure 1 gives an example of the kinds of information that such organizations need to extract.
In order to train and evaluate an NLP model to extract this information automatically, a relevant dataset is necessary. The ideal dataset requires accurate annotations for both the breadth (the number of extracted event attributes) and depth (the levels of granularity for each event attribute) of the events. However, all existing Event Extraction (EE) datasets (e.g. ACE05 Doddington et al. (2004), ERE Song et al. (2015), ACE05-E Wadden et al. (2019), ACE05-E+ Lin et al. (2020)) do not contain annotations at a sufficiently fine-grained level. Although some existing ontologies and datasets do include annotations related to attacking events, e.g. the Attack event type in the ACE05 dataset along with the associated Agent attribute, they are incomplete with respect to many of the details of interest to human rights organizations and do not contain annotations relevant to victim characteristics or the time/location of the attacking event. As a result, existing open-source EE models trained on these datasets Honnibal et al. (2020); Wadden et al. (2019); He et al. (2019) are unable to predict the complete set of relevant information.
To mitigate the gap in existing resources, we present HRDsAttack, a new dataset containing
Figure 1: An example of the input/output to an NLP model for extracting event attributes about an attacking event on human rights defenders.
crowdsourced annotations on 500 online news articles (including article title, article body text, and publication time). Each news article is annotated with 13 different event attributes to capture critical information about attacks on human rights defenders, including the type and location of the attacks, as well as information about the victim(s) and the perpetrator. With HRDsAttack, we hope to support more research opportunities for including NLP in applications related to human rights, as well as for broader AI for Social Good (AI4SG) efforts.
To summarize, our contributions are threefold:
1. We present a new dataset (HRDsAttack) that includes annotations for fine-grained event details on attacks on human rights defenders. By focusing on expanding the breadth and depth of the attacking event relative to existing EE ontologies, we aim to address the limited scope of existing NLP resources. The complete ontology for our dataset is shown in Table 1;
2. We propose a new NLP task to extract fine-grained event details on attacks on human rights defenders.
3. We demonstrate the usefulness of HRDsAttack with a strong baseline model based on Question Answering (QA) using the T5 model (Raffel et al., 2020) as the backbone in a multi-task setting.
The HRDsAttack dataset along with the code for model training and evaluation is available at [https://github.com/dataminr-ai/HRDsAttack](https://github.com/dataminr-ai/HRDsAttack).
## 2 Related Work
### Event Extraction
Event Extraction (EE) is an NLP task that aims to extract key information such as _who, what, where, and when_ from a text. The most commonly used dataset for EE is the ACE05 English corpus (Doddington et al., 2004) which consists of 33 event types and 22 event argument roles across 599 documents from newswires, web blogs, and broadcast conversations. While the ACE ontology covers a large range of event types, only two of them are related to attacking events: the Life.Injure event and the Conflict.Attack event. Some of the other datasets that focus on extracting event triggers or event arguments are based on the ACE05 ontology (Wadden et al., 2019; Lin et al., 2020), and only cover limited aspects of the information that HRDsAttack covers, e.g. the Attacker and Target attributes in the Life.Injure and Conflict.Attack events. The Armed Conflict Location and Event Data (ACLED) dataset (Raleigh et al., 2010) covers political violence and protest events with annotations for event type, actors and targets, but it does not cover victim-dependent attributes. In comparison, HRDsAttack focuses on attacking events on human rights defenders and provides more event attributes for the attacks, along with more granular information regarding each event attribute.
In terms of modeling approaches, early work on EE formulated the task as a token-based classification problem which leveraged different types of features (Ahn, 2006; Liao and Grishman, 2010, 2013; Li et al., 2013). More recent approaches focus on applying neural models to EE tasks, such as CNNs (Chen et al., 2015), RNNs (Liu et al., 2019), and other advanced model structures (Nguyen and Nguyen, 2019; Zhang et al., 2019).
### NLP Research for Human Rights
Existing NLP research resources around event detection and extraction related to Human Rights are extremely limited. Previous work has focused on identifying potential human rights abuse incidents from social media posts (Alhelbawy et al., 2020; Pielankar et al., 2022), alongside more general applications such as detecting abusive language (Golbeck et al., 2017; Djuric et al., 2015; Aroyo et al., 2019), or procedure-focused applications (e.g. data modeling processes for human rights data (Miller et al., 2013; Fariss et al., 2015)), or predicting judicial decisions of the European Court of Human Rights using NLP (O'Sullivan and Beel, 2019). To our knowledge, there are no event extraction datasets which target human rights issues, which makes HRDsAttack a first in this research area.
## 3 Dataset
In this section, we describe the construction of the HRDsAttack dataset, which contains 500 annotated news articles, including article title, article body text, and publication time. We select news articles as the data source rather than other data sources (such as social media posts) since online news articles generally have higher accessibility, better trustworthiness of the source, and longer content
length.
In our work, we sample online news articles from the GDELT database1, which we discuss in more detail in Section 3.2.
Footnote 1: [https://www.gdeltproject.org/](https://www.gdeltproject.org/)
### Annotation Labels
To ensure the comprehensiveness of the annotations regarding capturing event details, we first identify the event attributes or labels required for annotation. As shown in Table 1, according to the UN Human Rights SDG 16.10.1 Guidance Note2, we identify the following 5 categories of attributes: Perpetrator, Violation, Victim, Location, and Time. Each category has one or more associated event attributes, all denoting key information about the primary event described in the original article3. If there are multiple events mentioned in the article, only the primary event (i.e. the event that happened closest to the publication time) is annotated. We also specify that the Victim category could have multiple entries per article, while other categories can only have one entry per article (i.e. only one entry for the primary attack event). The ontology
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Category & Event Attribute & \multicolumn{2}{|c|}{Labels} \\ \hline \hline & Perpetrator & Yes & There is one or more explicit mention of the perpetrator in the news article. \\ & Mention & No & There is no explicit mention of the perpetrator in the news article. \\ \cline{3-4} & & State Security Forces & Anyone employed by or representing a state institution. \\ \cline{3-4} & & Other State Actors & Other actors that are a part of the state or other non-military authorities of a state. \\ Perpetrator & Perpetrator & Other non-state & Other actors/Private actors that are not a part of the state and act without the state’s \\ & Type & Other actors with & Armed actors that are not a part of the state but act with the state’s permission, support \\ & & permissions & or acquiescence. \\ & & Other actors without & Other actors that are not a part of the state. \\ & & permissions & or group working for a regional or international organization. \\ & & Regional Organizations & There is insufficient information available to determine one of the categories \\ & & Insufficient Information & described above. \\ & & None & Not applicable, when Perpetrator mention is No. \\ \hline & & Arbitrary Detention & Arrest or detention not in accordance with national laws. \\ \cline{3-4} & & Enforced Disappearance & Unlawful deprivation of liberty enforced or authorized by the state, that is \\ & & not acknowledged by the state of the location of the victim is kept secret. \\ \cline{3-4} Violation & Violation Type & Killing & Unlawful death inflicted upon a person with the intent to cause death or serious injury. \\ \cline{3-4} & & Kidnapping & Deprination of liberty that is not enforced or authorized by the state. \\ & & Torture & The action or practice of indiffecting severe pain or suffering on someone as a \\ & & punishment or in order to force them to do or say something. \\ \cline{3-4} & & Other & Sexual violence or other acts causing or intending to cause harm, such as coercion or \\ & & discrimination. & \\ \cline{3-4} & & Unknown & No harmful acts were conducted or there is insufficient information to \\ & & determine the harmful acts. \\ \hline & Victim Name & - & Name of the victim. \\ \cline{3-4} & & Human Rights Defender & A person exercising their right, to promote and strive for the protection and \\ & Victim Type & Trade Unioist & realization of human rights and fundamental freedoms. \\ & & & \\ \cline{3-4} Victim & & Trade Unioist & A person exercising their right to form and join trade unions to protect \\ & & & their interests. \\ \cline{3-4} Victim & & Journalist & A person observing events, statements, policies, etc. that can affect society, with the purpose \\ & & of systemitizing such information to inform society. \\ \cline{3-4} Victim & & Insufficient Information & There is insufficient information available to make select one of the categories \\ & & & described above. \\ \cline{3-4} & & Victim Population & Individual & A named individual victim. \\ & Type & Multiple & Multiple unmanned individuals. \\ \cline{3-4} & & Adult & Age = 18. \\ \cline{3-4} & & Child & Age \(<\)17. \\ \cline{3-4} & & Other & A mixture of age groups, when Victim Population Type is Multiple. \\ & & Unknown & There is insufficient information available to determine the age group. \\ \cline{3-4} & & Man & Male. \\ \cline{3-4} & & Woman & Female. \\ \cline{3-4} & & Other & Other gender types. \\ \cline{3-4} & & Unknown & There is insufficient information available to determine the sex group. \\ \hline & Country & - & Country in which the attack occurred \\ \cline{2-4} Location & Region & - & Region in which the attack occurred, such as a state or a province \\ \cline{2-4} & City & - & City in which the attack occurred \\ \hline & Year & - & Year the attacking event occurred \\ \cline{2-4} Time & Month & January,..., December & Month the attacking event occurred \\ \cline{2-4} & Day & 1, 2, 3,..., 31 & Day (of the month) the attacking event occurred \\ \hline \end{tabular}
\end{table}
Table 1: Labeling ontology of HRDsAttack.
for the annotation labels is shown in Table 1.
### Data Sampling
To build HRDsAttack, we first scrape 80,112 online news articles in the time range of 2019/09/01 to 2022/05/01 from the GDELT database following the CAMEO codebook (Schrodt, 2012), a standard framework for coding event data (Yuan, 2016). These scraped news articles are identified as relevant to human rights defenders by an existing human rights monitoring workflow.
During our pilot studies, we identified a data imbalance issue from the annotations under random sampling. Specifically, we observed significantly skewed label distributions in event attributes Violation Type and Victim Type, the minority classes being Torture and Kidnapping for Violation Type, and Human Rights Defenders and Trade Unionists for Victim Type. To address this issue, we apply keyword filtering and targeted sampling to ensure HRDsAttack is well-balanced across classes in each event attribute.
To include more samples with a higher probability of containing events associated with these minority attributes, we first reduce the original 80,112 samples into four smaller, targeted sample sets. Each targeted sample set corresponds to the articles that contain the keyword for each of the minority classes. We then randomly sample 25 articles from each targeted sample set to form a batch of 100 samples for each round of full annotation. Table 2 shows the keywords used for minority class targeted sampling.
### Annotation Process
The annotation is done by qualified workers (Turkers) on Amazon Mechanical Turk (AMT). We design and implement a separate qualification task to recruit top-performing Turkers, and we only release the full annotation tasks to the Turkers that surpass a predefined performance bar based on the qualification tasks.
#### 3.3.1 Qualification Tasks
For the qualification task, all US-based Turkers that have a HIT (Human Intelligence Task 4) approval rate greater than 90% and a total number of HITs approved greater than 500 are able to participate. In the qualification task, we sample three different news articles and ask all participant Turkers to annotate every event attribute for each news article through three questionnaires (each HIT contains three questionnaires, one for each news article). We then evaluate their performance on this annotation task. All three news articles are also annotated by domain experts, and we use their annotations as the ground truth answers for calculating the Turker accuracy. We only recruit Turkers who have 75% or higher average accuracy across all three news articles. We launched three rounds of qualification tasks with 50 assignments in total, and ten Turkers passed the qualification tasks.
Footnote 4: A HIT represents a single, self-contained, virtual task that a Turker can work on, submit an answer, and collect a reward for completing.
The instructions and the task interface for the qualification tasks are shown in Figures 4 to 11 in Appendix A.
#### 3.3.2 Full tasks
In the full task, each HIT only contains a single news article. The instructions and the annotation interface are identical to the qualification task. We launched all 500 samples in 5 batches, each batch containing 100 HITs. During our pilot studies, we did not observe a significant quality improvement with replication factor 3 due to relatively high agreement scores between the Turkers (Table 8 in Appendix C). We hypothesize that this is because the annotation task itself is highly objective. Therefore, we did not apply replication factors during the full task.
We compensate each Turker with $7.50 per assignment in the qualification task (three news articles per assignment) and $2.00 per assignment in the full task (one news article per assignment). We also provide an additional bonus to all participant Turkers of $0.5 per assignment. The final pay rate is $15.00 per hour, which is over the US national minimal wage of $7.505.
Footnote 5: [https://www.dol.gov/general/topic/wages/minimummwage](https://www.dol.gov/general/topic/wages/minimummwage)
The annotation instructions and the task interface for the full tasks are shown in Figures 12 to 15 in Appendix A.
### Data Statistics
To create a benchmark dataset from HRDsAttack, we randomly split the 500 annotated samples into train, dev, and test set with a 3:1:1 ratio. Table 3 shows the statistics of the splits. A breakdown of the label-level statistics for each event attribute can be found in Table 7 in Appendix B.
## 4 Our Model
With the construction of HRDsAttack, we now turn to developing a model for the task. We noted earlier that existing state-of-the-art EE models are not suitable as baselines, as they rely on extensive human annotations based on token-level annotations, hence cannot easily be re-trained and evaluated on this dataset. For instance, AMR-IE Zhang and Ji (2021) and GraphIE Qian et al. (2018) are trained on the ACE05 dataset and ERE dataset. Some recent research casts the EE task as QA tasks or Seq2seq tasks, such as RCEE_ER Liu et al. (2020) and Text2Event Lu et al. (2021). In this section, we propose a new model for extracting fine-grained details regarding attacks on human rights defenders.
### Overall Framework
Given the limited amount of training data and the range and variety of event attributes, we propose using a single Seq2Seq Question Answering (QA) model. Training a unified model has the advantageous property that it shares the training data across all the sub-tasks thus potentially leading to better performance for each sub-task. Figure 2 shows the overall framework of our proposed baseline model.
We formulate all of the subtasks as a generation task following T5 Raffel et al. (2020), which proposes reframing all NLP tasks into a unified text-to-text format. The input to the T5 model is a natural language sentence composed of (1) a task prefix (e.g. _'extract victims'_), (2) an attribute-oriented question (e.g. _'Who is the victim of the violation?'_), and (3) a context which is the original article. The output is a text string which explicitly refers to the value of the concerned event attribute (e.g. _'Abdelhakim Setouane'_).
### Input-Output Design
We group the event attributes into three categories: general article-dependent attributes, victim-dependent attributes, and publication time-dependent attributes, and we design input and output formats for them respectively. For all of the three categories, the output is a text string that explicitly refers to the value of the relevant event attribute, e.g. _'Yes'_ for Perpetrator Mention, or _'state security forces'_ for Perpetrator Type. The input formats for the three categories have minor differences 6:
Footnote 6: The complete lists of input and output formats are provided in Table 9 in Appendix D.
* **General Article-dependent Attributes:** Most of the event attributes depend on the general information contained within the article (i.e. do not rely on additional input other than article's body text). These include Perpetrator Mention, Perpetrator Type, and Violation Type. For these attributes, the input is the concatenation of a task prefix, an attribute-oriented question, and the original article (e.g. the top three examples in Figure 2).
* **Victim-dependent Attributes:** Some event attributes, such as Victim Sex Type, depend on the information related to a specific victim. Thus we incorporate the victim name into the input question, as exemplified in the fourth and fifth examples in Figure 2.
* **Publication Time-dependent Attributes:** In some cases, the Year, Month, and Day attributes related to the attack event are not explicitly present in the article, and we need to infer them based on a combination of the article publication time and the relevant time mentioned in the article (e.g. _last month, two weeks ago, yesterday_). The article publication time is available as metadata in the GDELT dataset (e.g. _2021-03-29 00:00:00_). For these attributes, we add publication time information into the input, as shown in the last example of Figure 2.
\begin{table}
\begin{tabular}{l r r r r} \hline & Train & Dev & Test & Total \\ \hline \hline No. of Articles & 300 & 100 & 100 & 500 \\ \hline Total No. of Tokens & 287,911 & 97,038 & 124,658 & 509,607 \\ \hline Avg. No. of Tokens & 959,70 & 970,38 & 1,246,58 & 1,019,21 \\ \hline Total No. of Victims & 687 & 272 & 204 & 1,163 \\ \hline Avg. No. of Victims & 2.29 & 2.72 & 2.04 & 2.33 \\ \hline \end{tabular}
\end{table}
Table 3: Textual statistics of HRDsAttack splits. The average number of tokens and victims is averaged per news article.
**Task Prefix.** Following the multi-task setting in the original T5 work, we add a task prefix at the beginning of the input text. The task prefix is used to instruct the T5 model to perform a particular task. It could be any arbitrary text. In our work, we use a brief task description as the task prefix for each event attribute, e.g. _'detect perpetrator'_ for Perpetrator Mention or _'extract violation type'_ for Violation Type (Figure 2). The complete list of all the task prefixes is shown in Table 10 in Appendix D.
### Long Document Resolution
The maximum input length allowed by the T5 model is 512 tokens, but around 75% of the articles from the GDELT dataset exceed that length limit. We explore two options to deal with articles with more than 512 tokens: **Truncation** and **Knowledge Fusion**. Additional methods for handling long documents are discussed in Appendix E.
**Truncation.** We only use the first 512 tokens of the input text. The articles from GDELT are news articles, and the first several sentences from a news article usually contain the most important information. Thus a simple solution is to truncate the article and ignore the cut content.
**Knowledge Fusion.** To mitigate the information loss in the Truncation method, we adopt a split-fuse approach (Figure 3) by (1) splitting the documents into short paragraphs using the spaCy (Honnibal et al., 2020) tokenizer7; (2) applying the model to each of the paragraphs; and then (3) merging the results from each paragraph to obtain the final results for the original article. For event attributes that allow more than one value (e.g. Victim Names), we keep all of the unique results, and for other attributes, we only keep the one with the highest confidence score (beam search score).
Footnote 7: We use the en_core_web_sm spaCy pipeline.
## 5 Experiments
### Evaluation Metrics
We consider the following metrics for evaluating different event attributes:
* **Precision, Recall, and F1 Score**: we use Precision, Recall, and F1 score to evaluate the
Figure 3: Knowledge Fusion approach.
Figure 2: Overall framework of the proposed Sequence-to-Sequence Question-Answering model.
model performance on Perpetrator Mention and Violation Type.
* **Accuracy**: we use accuracy (i.e. percentage correct) to evaluate the model performance on Perpetrator Type, Victim Type, Victim Sex Type, Victim Age Group, Country, Region, City, Year, Month, and Date.
* **Fuzzy Match Precision, Recall, and F1 Score**: For the Victim Name attribute, we use precision, recall, and F1 score based on exact matching and fuzzy matching, respectively. For exact matching, one predicted victim name is counted as correct only if it exactly matches with a victim name in the ground truth. For fuzzy matching, one predicted victim name is counted as correct if it has overlapping tokens with a victim name in the ground truth. For example, a predicted victim name _Jordan_ is counted as correct when it matches with a ground truth name _Michael Jordan_.
### Baseline Models
We consider the following models in our evaluation:
* **DyGIE++**(Wadden et al., 2019): a joint Information Extraction (IE) model and we use the checkpoint trained on the ACE05 dataset. It requires mapping from the ACE event ontology8 to HRDsAttack. As a result, it only covers two attributes: Perpetrator Mention and Victim Name as there is no available mapping for the other event attributes in HRDsAttack. Footnote 8: The ACE ontology covers event types such as Attack and Injure.
* **T5 w/ Truncation**: our proposed T5-based model with truncation. Footnote 9: [https://huggingface.co/t5-large](https://huggingface.co/t5-large)
* **T5 w/ Knowledge Fusion**: our proposed T5-based model with knowledge fusion.
* **Hybrid (final model)**: a hybrid model based on T5 w/ Truncation and T5 w/ Knowledge Fusion. The model only applies knowledge fusion to Perpetrator Mention, Victim Name, and Victim Age Group attributes. This hybrid strategy is decided based on the evaluation results on the dev set.
We recognize that it would be ideal to have more baseline models for comparison, such as a retrained version of DyGIE++ on HRDsAttack. However, many existing EE models are trained on token-level annotations and are not designed for the additional event attributes that HRDsAttack covers (e.g. Victim Types). Therefore, we had to design a specialized model for this task. We plan to benchmark more Sequence-to-Sequence based models on HRDsAttack in future work.
### Training Implementation
We use the T5-large checkpoint 9 provided by Huggingface (Romero, 2021) to initialize the model and all experiments are run on a single AWS g5.xlarge instance. The AWS g5.xlarge instance is equipped with a single NVIDIA A10G GPU with 24 GB of GPU memory. Table 4 shows the hyperparameters we use to train the model.
Footnote 9: [https://huggingface.co/t5-large](https://huggingface.co/t5-large)
### Overall Performance
Table 5 shows the performance of the four models on the test set: the DyGIE++ baseline, T5 w/ Truncation, T5 w/ Knowledge Fusion, and the Hybrid model. Both T5-based models significantly outperform the DyGIE++ baseline, except for the precision of Perpetrator Mention. In addition, we get further improvement from the Knowledge Fusion method for the Perpetrator Mention, Victim Name, and Year attributes. For other attributes, we get results that are slightly worse than those without Knowledge Fusion. This aligns with our assumption that violation events may be elaborated in the later parts of the news articles with specific victim names and violation types. So by applying the Knowledge Fusion method, we can significantly improve the recall of some event attributes. But for other information such as violation time and location, they usually appear in the first several sentences of the news article. The
\begin{table}
\begin{tabular}{c c} \hline Hyperparameter & Value \\ \hline \hline Learning rate & 1e-4 \\ \hline Learning rate decay & 1e-5 \\ \hline Epoch & 20 \\ \hline Batch size & 4 \\ \hline Gradient accumulation steps & 16 \\ \hline \end{tabular}
\end{table}
Table 4: Hyperparameter settings for model training.
time and location information appearing in the later parts may not be related to the primary attacking event. So based on the evaluation results on the dev set (Table 11 in Appendix F), we propose a hybrid model as our final baseline model. The hybrid model only applies Knowledge Fusion to Perpetrator Mention, Victim Name, and Victim Age Group attributes. We notice that the hybrid model designed based on the dev set does not achieve the best performance for Victim Age Group and Year attributes on the test set. It might be the fact that the hybrid strategy is overfitted on the dev set. And we leave the optimization of the hybrid model as future work.
While the hybrid model outperforms the DryingIE++ baseline in almost all of the event attributes and unlocks the extraction of new attributes, we do see a relatively lower model performance in attributes such as Region and Day. We hypothesize that the ambiguity in Region labels and the large number of classes in Day labels introduce additional challenges to the model, especially with a limited amount of training data. For instance, some annotators mistakenly put _London_ under Region instead of City. We acknowledge that the annotation instructions could be further improved to address this issue.
We also evaluate the end-to-end performance on the victim-dependent attributes with the model-predicted victim names (Table 6). And we use F1 scores as the evaluation metric. One victim-dependent attribute is counted as correct only when both the predicted victim name and the predicted attribute value match with the ground truth.
## 6 Conclusion
In this paper, we present a new dataset that supports extracting detailed information about attacks on human rights defenders under a new task setting. Compared with existing event extraction resources, we focus on the human rights domain and expand to more event attributes for capturing event details more comprehensively. Our new dataset (HRDsAt
\begin{table}
\begin{tabular}{l l l} \hline Event Attribute & Metric & Hybrid \\ \hline \hline Victim Type & F1 & 22.89 \\ \hline Victim Sex Type & F1 & 33.33 \\ \hline Victim Age Group & F1 & 46.01 \\ \hline \end{tabular}
\end{table}
Table 6: End-to-end performance of the Hybrid model on HRDsAttack (%) for victim-dependent attributes with model predicted victim names. All experiments are based on a single run with a preset random seed.
\begin{table}
\begin{tabular}{l l c c c c} \hline Event Attribute & Metric & DyGIE++ & T5 w/ Truncation & T5 w/ Knowledge Fusion & Hybrid \\ \hline \hline \multirow{4}{*}{Perpetrator Mention} & Precision & **100.00** & 93.68 & 93.81 & 93.81 \\ & Recall & 36.54 & 97.80 & **100.00** & 100.00 \\ & F1 & 53.52 & 95.70 & **96.81** & 96.81 \\ \hline Perpetrator Type & Accuracy & - & **62.00** & 60.00 & 62.00 \\ \hline \multirow{4}{*}{Victim Name} & Exact Match Precision & 9.41 & **75.61** & 59.30 & 59.30 \\ & Exact Match Recall & 9.19 & 24.03 & **39.53** & 39.53 \\ & Exact Match F1 & 9.30 & 36.47 & **47.44** & 47.44 \\ & Fuzzy Match Precision & 17.65 & **85.37** & 63.95 & 63.95 \\ & Fuzzy Match Recall & 17.24 & 27.13 & **42.64** & 42.64 \\ & Fuzzy Match F1 & 17.44 & 41.18 & **51.16** & 51.16 \\ \hline Victim Type & Accuracy & - & **72.41** & 71.67 & 72.41 \\ \hline Victim Sex Type & Accuracy & - & **89.66** & 86.67 & 89.66 \\ \hline Victim Age Group & Accuracy & - & **93.10** & 92.50 & 92.50 \\ \hline \multirow{4}{*}{Violation Type} & Precision & - & **67.91** & 61.24 & 67.91 \\ & Recall & - & 75.26 & **81.44** & 75.26 \\ & F1 & - & **71.39** & 69.91 & 71.39 \\ \hline Country & Accuracy & - & **66.00** & 65.00 & 66.00 \\ \hline Region & Accuracy & - & **3.00** & 2.00 & 3.00 \\ \hline City & Accuracy & - & **23.00** & 12.00 & 23.00 \\ \hline Year & Accuracy & - & 46.00 & **50.00** & 46.00 \\ \hline Month & Accuracy & - & **33.00** & 29.00 & 33.00 \\ \hline Day & Accuracy & - & **14.00** & 8.00 & 14.00 \\ \hline \end{tabular}
\end{table}
Table 5: Overall performance of the baseline models on HRDsAttack test set (%). All experiments are based on a single run with a preset random seed.
tack) contains 500 human-annotated news articles with 13 different event attributes regarding the victim(s), the type of perpetrator and violation(s), as well as the time and location of the attacks. We demonstrate the usefulness of the dataset by developing a Sequence-to-Sequence-based Question Answering model tailored for this task. While it achieves decent performance on some event attributes, there are many where there is much room for improvement. We view this model as a strong baseline for future work. We believe models trained with HRDsAttack could be generalized to detect attacking events in other domains or targeting a different population. And we hope that this work encourages additional research on the development of new AI4SG NLP resources in the future.
## Acknowledgements
We would like to thank Jessie End at Datamir for her support during this project. We also want to thank all the reviewers for their valuable and constructive feedback during the review phase.
## Limitations
While HRDsAttack is, to the best of our knowledge, the first dataset on extracting attacks on human rights defenders, there are some limitations. For one, while being the first corpus of its kind, our dataset is English-only. Second, the number of documents is limited. While the sample size of HRDsAttack (500) is on par with some of the other EE datasets, such as ACE05 (599), we do see more samples being beneficial to subsequent model training and supporting other future studies. In addition, despite the effort to balance the class labels in the event attributes, some of the labels still remain imbalanced, such as Perpetrator Type.
## Ethics Statement
The construction of HRDsAttack involves human annotations on AMT. The Turkers are provided with clear annotation instructions and are informed of the conditions where they would be qualified or disqualified. We compensate the Turkers with a final paid rate of $15.00 per hour which is over the US national minimal wage of $7.50.
|
2308.16741 | Socratis: Are large multimodal models emotionally aware? | Existing emotion prediction benchmarks contain coarse emotion labels which do
not consider the diversity of emotions that an image and text can elicit in
humans due to various reasons. Learning diverse reactions to multimodal content
is important as intelligent machines take a central role in generating and
delivering content to society. To address this gap, we propose Socratis, a
societal reactions benchmark, where each image-caption (IC) pair is annotated
with multiple emotions and the reasons for feeling them. Socratis contains 18K
free-form reactions for 980 emotions on 2075 image-caption pairs from 5
widely-read news and image-caption (IC) datasets. We benchmark the capability
of state-of-the-art multimodal large language models to generate the reasons
for feeling an emotion given an IC pair. Based on a preliminary human study, we
observe that humans prefer human-written reasons over 2 times more often than
machine-generated ones. This shows our task is harder than standard generation
tasks because it starkly contrasts recent findings where humans cannot tell
apart machine vs human-written news articles, for instance. We further see that
current captioning metrics based on large vision-language models also fail to
correlate with human preferences. We hope that these findings and our benchmark
will inspire further research on training emotionally aware models. | Katherine Deng, Arijit Ray, Reuben Tan, Saadia Gabriel, Bryan A. Plummer, Kate Saenko | 2023-08-31T13:59:35Z | http://arxiv.org/abs/2308.16741v3 | # Socratis: Are large multimodal models emotionally aware?
###### Abstract
Existing emotion prediction benchmarks contain coarse emotion labels which do not consider the diversity of emotions that an image and text can elicit in humans due to various reasons. Learning diverse reactions to multimodal content is important as intelligent machines take a central role in generating and delivering content to society. To address this gap, we propose Socratis, a societal reactions benchmark, where each image-caption (IC) pair is annotated with multiple emotions and the reasons for feeling them. Socratis contains 18K free-form reactions for 980 emotions on 2075 image-caption pairs from 5 widely-read news and image-caption (IC) datasets. We benchmark the capability of state-of-the-art multimodal large language models to generate the reasons for feeling an emotion given an IC pair. Based on a preliminary human study, we observe that humans prefer human-written reasons over 2 times more often than machine-generated ones. This shows our task is harder than standard generation tasks because it starkly contrasts recent findings where humans cannot tell apart machine vs human-written news articles, for instance. We further see that current captioning metrics based on large vision-language models also fail to correlate with human preferences. We hope that these findings and our benchmark will inspire further research on training emotionally aware models. Our dataset can be found at [https://kdeng55.github.io/socratis-website/](https://kdeng55.github.io/socratis-website/).
## 1 Introduction
A crucial prerequisite for effective communication and collaboration is the ability to possess emotional awareness [7]. As intelligent machines become increasingly prevalent ranging from generative content creation [19] to collaborative [13] and embodied AI [15], they need to possess emotional awareness for effective communication, and greater trust and acceptance [7]. Emotionally unaware messaging undermine efforts to inform people of global crises [4], spread political division [23], and fail to engage people for the correct social causes [5]. Existing work on emotion prediction oversimplifies the problem by categorizing emotions into coarse buckets [14, 25, 3], ignoring the nuance that the same content can elicit various shades of emotional reactions for various reasons. For instance, as shown in Figure 1, an image and caption can generate two conflicting emotions with valid reasons. Learning this diversity of reactions is crucial to tailor a machine's interactions to individual emotional states.
To encourage further research on emotionally aware AI, we propose the SOCRATIS benchmark - a dataset of detailed diverse reactions written by humans on images and captions. SOCRATIS includes 980 shades of emotions and 18K free-form reactions written by humans on 2075 image and caption pairs collected from 5 existing news and image-caption datasets. Given an image, a news caption and an emotion word, our task is to generate a "reaction" caption that explains why the image and caption may elicit the specified emotion in a person.
Unlike related benchmarks that focus on reactions to artistic images [12, 1], niche topics such as gun violence [18], emotionally stylized captions [2], or on morality [9], we focus on reasons why humans might feel various emotions on real-life images and captions from widely used news and image-caption datasets. Our task has higher prac
Figure 1: **Socratis benchmark. The same image and caption pair can evoke different emotions and reactions. We release a benchmark dataset of diverse human-annotated emotions and reactions to images and captions. We show that current state-of-the-art language models and metrics fail to capture the nuance of this task.**
tical relevance since models that fare well on this benchmark can be used to create more effective and inclusive messaging by news agencies and social workers. For instance, as shown in Figure 1, a writer can look at why someone may be "scared" and update the content to reflect that transporting the rhinoceros by helicopter is safe in this context to effectively mitigate the doubt. While a similar benchmark [6] focuses on reactions to reading a text caption, we focus reactions to both images and text since content on the web is largely multimodal.
Using Socratis as a testbed, we evaluate a state-of-the-art multimodal language model [10] and check if commonly used metrics in language-generation evaluation can distinguish between good "reactions" and poor ones. We generate "reactions" given an image, caption, and emotion and ask human raters to blindly choose between the machine generation and the human annotation.
Our results show a stark gap in current capabilities for this task. Humans prefer human-written reactions two times more often than machine-generated ones. This starkly contrasts recent findings where humans fare poorly to tell the difference between machine-generated and AI-generated news articles [27] or images [19]. This illustrates that while large generative models may be good at producing believable articles or images, it lacks the nuance required for emotional awareness. Furthermore, when we separate the generations into two groups - good generations as rated by humans (when humans couldn't tell the difference between human-written and machine-generated) and poor generations (when humans picked human-written over machine-generated), we observe a negligible difference in the scoring by commonly used metrics like BART and CLIP score, which are also based on large language and vision models. This illustrates the difficulty of the problem since we need better emotionally aware models to make better metrics.
Hence, we hope these initial results spark further research and discussion into improving the emotional awareness of large language models. Adding to the recent discussion that large language models seem to lack rich social intelligence [20] and theory of mind [21], our benchmark shows that they lack nuanced emotional awareness as well.
## 2 Socratis Dataset
We propose a benchmark for evaluating the emotional awareness of vision-language models. Specifically, our task is to predict the reaction a human may have of a certain emotion while looking at an image and caption (IC) pair. To this end, we collect a dataset showing human workers an IC pair and ask them to write the emotion words they feel and the reasons for feeling them. We interchangeably call these reasons as "reactions" the humans feel of a certain emotion. We have 18,378 annotated reactions for 980 emotions to 2075 IC pairs rated by an average of 8 independent workers per IC pair. The most common emotion words follow that of standard emotion datasets [14, 3] - happy, sad, excited, angry. We also have a variety of more subtle emotion words such as inspire, hopeful, and nostalgic that feature among the top 30 emotion words. However, the novelty of our dataset isn't the variety of emotion words, but the free-form reasons for feeling the different emotions for the same IC pair. Some examples are shown in Figure 2. We will make our dataset publicly available.
### Data Annotation
**Image-Caption pair collection** We collect image-caption pairs from the Visual News [11] and the Conceptual Captions [22] dataset. We first randomly sample 1800 images from the Visual News Dataset [11], which consists of an image and a news headline caption from 4 news datasets - BBC, USA-Today, Guardian and The Washington Post. We specifically choose news datasets since news headlines and visuals usually elicit stronger reactions from humans than generic stock images and are of more practical use. Additionally, to also understand how people react to generic IC pairs, we sample images and captions from a widely used image-captioning dataset, Conceptual Captions [22]. IC pairs that explicitly convey a certain emotion (eg, stock photo of someone smiling) are less interesting due to a lack of ambiguity from diverse people. Hence, to sample IC pairs that are likely to elicit diverse emotional reactions, we choose 500 samples where the emotion of the image does not match the emotion of the caption. The emotion of the image is predicted by a CLIP [16] model, fine-tuned on the WebEmo [14] dataset, and a T5 model [17] predicts the text emotions.
**Human Reaction Annotation** We show users the IC pairs we collect as described above. We then ask them to write up to three emotions a person is likely to feel while viewing the IC pair, along with a reason for why they might feel each of the emotions they entered. We collect our dataset on Amazon Mechanical Turk. All data is cleared of personally identifiable information and only aggregate statistics of model training are reported in the paper. We incentivize the annotators by awarding them a small bonus if they can match the most popular emotion word for an IC pair. This encourages workers not to enter noisy reactions (or extremely uncommon or made-up words) since they are not likely to match other workers' responses. To control the quality of responses, we choose workers with a greater than 98% approval rate (on at least 50 Human Intelligence Tasks or HIT's). We also restrict the geographical location to the US or UK since the images and captions are sourced from news articles from these two countries. We collect 10 independent annotations for each image-caption pair to get a representative, diverse set of reactions.
### Annotation Quality
We compute some heuristic automatic measures to judge the quality of the reactions.
**Emotion-reaction match** We check how often the entered emotion word matches the emotion of the reaction. For instance, the emotion of "sad" reaction should also be "sad". To compute the sentiment, we use a T5 sentiment model [17], which predicts positive or negative sentiment. We observe an \(87.15\%\) accuracy of sentiment match between the emotion word and the reaction. We manually check some examples where the sentiments of reaction and emotion words mismatch and find that the T5 model [17] may be noisy. For instance, for the emotion word "hungry", the reaction annotated is "the cocoa actually makes me want to eat something sweet.". We believe this is a reasonable reaction for "hungry". However, the sentiment predicted for "hungry" is "negative", whereas the same for the reaction is "positive", resulting in a mismatch. This further illustrates that coarse emotion buckets like positive and negative don't capture the nuances of reactions of what people feel.
**Agreement on reactions** To judge the agreement of humans over the reactions for a given image, caption, and emotion, we further compute the BART [26] scores of the reactions for the same (I,C,emotion) tuple. We contrast this with the BART score [26] of 1000 randomly sampled reaction pairs from different IC-pairs and emotions. We note that the BART scores between reactions for the same emotion for an IC pair are higher \(78\%\) of the time than that of the random pairs.
## 3 Approach
To understand the emotional awareness of large multimodal models, we benchmark the capability of a state-of-the-art vision-language model on our proposed benchmark without further fine-tuning. Specifically, we use the FLAN-T5 variant of BLIP-2 [10] from Hugging-face [24]. Our task is to predict the reason of feeling a certain emotion given an image-caption pair. For a given image, caption text and emotion word, we prompt BLIP-2 with the image and a query using the following template: Question: Why does a person feel {emotion} after seeing this image and reading the news caption '{caption}'? Answer: We input this prompt along with the image to the BLIP-2 model, and use a greedy approach to generate the response following the standard procedure outlined on the Hugging-face [24] model page. Given an image \(i\), caption \(c\) and emotion \(e\), we can formulate the likelihood of the generated reaction text \(t\) as:
\[P(t|i,c,e)=\Pi_{j=1}^{n}P(t_{<j}|i,c,e) \tag{1}\]
where \(n\) denotes the number of tokens in the generated text.
## 4 Experiments
Our goal is to evaluate whether large multimodal models are emotionally aware to generate plausible reasons for why humans might feel a certain reaction for a given image and text. A pragmatic formulation of this evaluation is whether a human finds it hard to distinguish a machine generated reason from a human-written reason. Hence, we first conduct a human evaluation on the multimodal BLIP-2
Figure 2: **Qualitative examples from our Socratis dataset and a state-of-the-art multimodal model, BLIP-v2 generations.**
generated reasons to see how often humans prefer machine-generated reasons, human-written reasons, or both.
**Human Evaluation** To understand human preference, we conduct a preliminary human study on 500 randomly sampled data points from our dataset. We show an image, caption, emotion tuple to a user. We then show two choices of reactions - human-written and machine-generated. The user is unaware of which of the reactions is machine-generated or human-written. We ask 3 independent workers to choose the best reaction for the given tuple of image, caption and emotion. They also have the choice of either choosing that both reactions are reasonable or that neither is reasonable. Hence, there are four choices for each image-caption-emotion tuple: We split the human study image-caption-emotion tuples into three groups:
* **Machine-better**: machine was picked over human, indicating that the machine-generations are good.
* **Both-Good**: both human-written and machine-generated reactions are equally valid.
* **Both-Bad**: We discard these examples from further study since the data annotations are likely noisy.
**Metrics** Since human evaluations are slow and expensive, we aim to determine if commonly used captioning metrics can be used to judge good reaction generations from poor ones to speed up research. We define a good generation as one which is aligned to or better than human preferences. Hence, a machine generation is good if a human cannot tell the difference from a human-written one, or if a human prefers it over a human-written one. Hence, a good metric should score such cases (machine-preferred and both-good) higher than the cases where a human-written reason was preferred over a machine-generated one (human-preferred). We compute three commonly used metrics. First, we compute **BART Score** between the machine-generated reaction and the human-generated reference to measure human-likeness of generations as described in [26]. Next, we compute **CLIP-Score** and **RefCLIP-Score**[8] to see if image-relevance plays a major factor in distinguishing good generations from poor ones. We use a prompt like "Human feels {emotion}, when seeing this image with {caption} because {explanation}" and compute the cosine similarity to the image as described in [8].
## 5 Results
**Humans prefer human-written reactions to machine generations.** In Table 1, we observe that workers pick human reactions over two times more often than machine-generated ones (233 vs 91). This suggests that state-of-the-art large vision-and-language models are still limited at extrapolating contextual information beyond simply correlating the visual concepts in the image with relevant words.
**Current captioning metrics cannot distinguish between good and bad reactions** In Table 2, we see that BART scores do not follow human preferences. The scores of the machine generations for when the machine was rated better or equally good (both-good) by humans are not higher than the scores when the machine generations are poor (human-better). CLIP and RefCLIP also do not seem to differ across the three sets. This suggests that visual similarity may also not be important in distinguishing good from poor reactions. Further investigation is required to check if we can train a custom BART metric based on a few examples in our dataset.
**Multimodal models are slightly more image-relevant** We also check the performance of a language-only model compared to the multimodal BLIP-2 by using only the langauge model in BLIP2, which is a FLAN-T5 model. In Table 3, we see that the relevance to the image is understandably higher (acc to CLIP scores) for BLIP-2. However, based on results in Table 2, this doesn't necessarily mean that the generations are more preferred by humans.
**Discussion and Conclusion** We propose the Socratis benchmark to generate reactions for why images and captions elicit certain emotions in humans. Our initial experiments indicate that a state-of-the-art vision-language model is unable to extrapolate the required information to generate reasonable reactions. However, further research is required to investigate biases that exist in our dataset and how these
\begin{table}
\begin{tabular}{c c c c} \hline \hline Machine-better & Human-better & Both-Good & Both-Bad \\ \hline
91 & 233 & 47 & 11 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Number of times majority (\(\frac{2}{3}\)) humans prefer human-written reactions, BLIP-2 (machine) reactions, or both out of 382 examples.
\begin{table}
\begin{tabular}{l c c} \hline \hline & CLIP \(\uparrow\) & Ref-CLIP \(\uparrow\) \\ \hline BLIP-2 & **0.75** & 0.42 \\ FLAN-T5 & 0.73 & 0.42 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluations with BART and CLIP-Score on subsets where humans prefer human-written reactions, BLIP-2 (machine) reactions, or both. We want machine-better or both-good generations scored higher than human-better.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Machine-better & Human-better & Both-Good \\ \hline BART & -5.48 & -5.42 & -5.54 \\ CLIP-Score & 0.74 & 0.76 & 0.75 \\ RefCLIP & 0.42 & 0.43 & 0.42 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Multimodal vs a text-only model on relevance of generation to the image
are perpetuated in the models that are benchmarked on it. Further investigation is also required to see if quick fixes like changing the generation strategy, adapting a few layers, or in-context learning (showing examples in prompts) can make these models more emotionally aware. We hope our Socratis benchmark will encourage future research on training and evaluating emotionally aware AI algorithms.
**Acknowledgements**: We are thankful to Praneeth Chandra Bogineni for valuable discussions in the initial phase of the project. This material is based upon work supported, in part, by DARPA under agreement number HR00112020054. The findings in the paper do not reflect the opinions of the US Government or DARPA.
|
2309.05372 | Peaceman Well Block Problem For Time-Dependent Flows of Compressible
Fluid | We consider sewing machinery between finite difference and analytical
solutions defined at different scales: far away and near the source of the
perturbation of the flow. One of the essences of the approach is that coarse
problem and boundary value problem in the proxy of the source model two
different flows. In his remarkable paper Peaceman propose a framework how to
deal with solutions defined on different scale for linear \textbf{time
independent} problem by introducing famous, Peaceman well block radius. In this
article we consider novel problem how to solve this issue for transient flow
generated by compressiblity of the fluid. We are proposing method to glue
solution via total fluxes, which is predefined on coarse grid and changes in
the pressure, due to compressibility, in the block containing
production(injection) well. It is important to mention that the coarse solution
"does not see" boundary. From industrial point of view our report provide
mathematical tool for analytical interpretation of simulated data for
compressible fluid flow around a well in a porous medium. It can be considered
as a mathematical "shirt" on famous Peaceman well-block radius formula for
linear (Darcy) transient flow but can be applied in much more general scenario.
In the article we use Einstein approach to derive Material Balance equation, a
key instrument to define $R_0$. We will enlarge Einstein approach for three
regimes of the Darcy and non-Darcy flows for compressible fluid(time
dependent): $\textbf{I}. Stationary ; \textbf{II}. Pseudo \ Stationary(PSS) ;
\textbf{III}. Boundary \ Dominated(BD).$ | A. Ibraguimov, E. Zakirov, I. Indrupskiy, D. Anikeev, A. Zhaglova | 2023-09-11T10:48:49Z | http://arxiv.org/abs/2309.05372v1 | # Peaceman Well Block Problem for Time-Dependent Flows of Compressible Fluid
###### Abstract.
We consider sewing machinery between finite difference and analytical solutions defined at different scales: far away and near the source of the perturbation of the flow. One of the essences of the approach is that coarse problem and boundary value problem in the proxy of the source model two different flows. In his remarkable paper Peaceman propose a framework how to deal with solutions defined on different scale for linear **time independent** problem by introducing famous, Peaceman well block radius. In this article we consider novel problem how to solve this issue for transient flow generated by compressiblity of the fluid. We are proposing method to glue solution via total fluxes, which is predefined on coarse grid and changes in the pressure, due to compressibility, in the block containing production(injection) well. It is important to mention that the coarse solution "does not see" boundary.
From industrial point of view our report provide mathematical tool for analytical interpretation of simulated data for compressible fluid flow around a well in a porous medium. It can be considered as a mathematical "shirt" on famous Peaceman well-block radius formula for linear (Darcy) transient flow but can be applied in much more general scenario.
In the article we use Einstein approach to derive Material Balance equation, a key instrument to define \(R_{0}\).
We will enlarge Einstein approach for three regimes of the Darcy and non-Darcy flows for compressible fluid(time dependent):
\(\mathbf{I}.Stationary;\mathbf{II}.Pseud\ Stationary(PSS);\mathbf{III}.Boundary\ Dominated(BD)\).
Note that in all known authors literature, rate of the production on the well is time independent. **Our MB equation tuned to prove that corresponding Peaceman well block radius for each of the regime of flow is time independent, and converge to Peaceman well block radius when exterior reservoir radius is vanishing.** For clarity we will first derive Peaceman Well Block formula for each regimes of the flows in 1-D Euclidean case and then more difficult and more practical 2-D radial cases.
###### Contents
* 1 Introduction
Introduction
Peaceman well-block radius [7] is routinely used by engineers, but was not rigorously studied even for steady-state flows. Detailed review on fundamentals Peaceman well bock radius for linear and non-linear stationary flows in porous media was recently accepted for publication in Applied and Computational Mathematics, an international journal and was published in the archive [9]. Here we just want we often refer on to mention that concept of equivalent well block radius was introduced in Russia(see [4],and [5]), but was not translated, and therefore was not cited in modern literature, as it often happen. In the basis of the idea behind Peaceman well block radius lay material balance equation which enable to sew analytical solution with simulated one, and interpreted result of computed value of the pressure in the block containing well. In this section we will describe the paradigm for material balance(MB) as an algebraic set of equations, and indicate our intended application. To introduce MB system of equation let first consider the finite set of dependant variables
\[\mathcal{P}=\left\{p_{\pm r_{0},0}(s);p_{0,\pm r_{0}}(s);p_{\pm 1,0}(s);p_{0,\pm 1 }(s);q_{x}^{\pm}(s);q_{y}^{\pm}(s)\right\}. \tag{1.1}\]
Let
\[\mathcal{K}=\left\{K_{x}^{\pm};K_{y}^{\pm}\right\}\text{ and }\mathcal{Q}= \left\{Q_{x}^{\pm};Q_{y}^{\pm}\right\}. \tag{1.2}\]
be inputs, which in this study are considered to be constants. To make discussion more motivated we will highlight intended application. Consider diffusive process in the domain \(\Omega\ni 0\) with source/sink - \(0\), which is igniting process and \(\Omega_{N}=\sum_{i=1}^{N}B_{i}\) grid approximating \(\Omega\). Let \(\Omega_{N}\supset B_{0}\ni 0\) to be characterised by blocks \(B_{i}\), of the "size" \(\Delta\), and contains \(0\), for all \(i\). Major assumption is that process of the transport and changes of the fluid is much "faster" than geological process, and therefore dependents of the \(\mathbf{K}\) and \(\mathcal{Q}\) to be ignored. Assume that conductivity at blocks of the interest are fixed flow generated by source(_well_ which fixed and located in the box \(B(\Delta)\) of size \(\Delta\), and this property holds for each \(\Delta\). Let set \(\mathcal{P}\) contains only parameters defined only in center \(B_{0}=B_{0,0}\)( domain of the parameters
\(p_{\pm r_{0},0}(s);p_{0,\pm r_{0}}(s),\cdots\) are in \(B_{0}\)) and nearest surrounding four blocks \(B_{i,J}(\) domain of the parameters \(p_{\pm r_{0},0}(s);p_{0,\pm r_{0}}(s)\) are in \(B_{\pm 1,0}\)\(B_{0,\pm 1}.)\). Consider the filtration, which controlled by Material Balance (MB) equation as an algebraic equation w.r.t. unknown variable \(p_{a,b}(s)\), depending on parameter \(s\) and input variable \(q_{a}^{b}(s)\) also depending on parameter \(s\). \(s\) is the model for time. System also featured by input parameter \(\tau,\) which associate to changes in properties of variables \(p\) on time interval \(\left[s,s+\tau\right].\) This \(\tau\) in some sens connect our equation to Einstein equation of material balance(see [2], [8]), is predefined and set to be very small.
**Remark 1**.: _Note that Einstein equation of material balance is naturally stochastic, whether ours is deterministic. In spite of that we think that Einstein's method can be extended stochastic processes defined on THE stochastic grid. We will leav this for further research._
Dependent variables fro set \(\mathcal{P}\) and \(\mathcal{P}\) w.r.t. parameters \(\mathcal{K}\),\(\mathcal{Q}\), and \(\tau\) are subject to algebraic equation:
\[\tau\cdot K_{x}^{-}\cdot\left(p_{-r_{0},0}(s)-p_{-1,0}(s)\right) =\tau\cdot q_{x}^{-}(s)+Q_{x}^{-}\left(p_{-r_{0},0}(s+\tau)-p_{-r_{0},0}(s)\right) \tag{1.3}\] \[\tau\cdot K_{x}^{+}\cdot\left(p_{r_{0},0}(s)-p_{1,0}(s)\right) =\tau\cdot q_{x}^{+}(s)+Q_{x}^{+}\left(p_{-r_{0},0}(s+\tau)-p_{-r_{0},0}(s)\right)\] (1.4) \[\tau\cdot K_{y}^{-}\cdot\left(p_{0,-r_{0}}(s)-p_{0,-1}(s)\right) =\tau\cdot q_{y}^{-}(s)+Q_{y}^{-}\left(p_{-r_{0},0}(s+\tau)-p_{-r_{0},0}(s)\right)\] (1.5) \[\tau\cdot K_{y}^{+}\cdot\left(p_{0,r_{0}}(s)-p_{0,1}(s)\right) =\tau\cdot q_{y}^{+}(s)+Q_{y}^{+}\left(p_{0,r_{0}}(s+\tau)-p_{0,r_{0}}(s)\right) \tag{1.6}\]
Denote:
\[q_{x}(s)=q_{x}^{-}(s)+q_{x}^{+}(s)\ \ q_{y}(s)=q_{y}^{-}(s)+q_{y}^{+}(s),Q_{x}=Q _{x}^{-}+Q_{x}^{+}\ \ Q_{y}=Q_{y}^{-}+Q_{y}^{+} \tag{1.7}\]
and
\[q(s)=q_{x}(s)+q_{y}(s)\ ;\ Q=Q_{x}+Q_{y}. \tag{1.8}\]
Assume symmetry condition w.r.t. \(+\) and \(-\), which we state as follows:
**Definition 1**.: _Symmetry structural constrains w.r.t. \(+,\ -\)._
* \(K\) _coefficient_ \[K_{x}^{-}=K_{x}^{+}=K_{x}\ ;\ K_{y}^{-}=K_{y}^{+}=K_{y}.\] (1.9)
* \(q\) _parameter_ \[q_{x}^{-}(s)=q_{x}^{+}(s)=\frac{q_{x}(s)}{2}\ ;\ q_{y}^{-}(s)=q_{y}^{+}(s)= \frac{q_{y}(s)}{2}.\] (1.10)
* \(Q\) _coefficient_ \[Q_{x}^{-}=Q_{x}^{+}=\frac{Q_{x}}{2}\ ;\ Q_{y}^{-}=Q_{y}^{+}=\frac{Q_{y}}{2}.\] (1.11)
* \(p\) _variable w.r.t first index_ \[p_{-r_{0},0}(s)=p_{r_{0},0}(s)=p_{r_{0}}^{x}(s)\ ;\ p_{-1,0}(s)=p_{1,0}(s)=p_{1}^{x}(s).\] (1.12)
_(v)_ \(p\) _variable w.r.t second index_
\[p_{0,-r_{0}}(s)=p_{0,r_{0}}(s)=p_{r_{0}}^{y}(s)\ ;\ p_{0,-1}(s)=p_{0,1}(s)=p_{1}^{y}(s). \tag{1.13}\]
Then from (1.3), and some basic algebraic manipulations follows
\[\tau\cdot 2\cdot K_{x}\cdot\left(p_{r_{0}}^{x}(s)-p_{1}^{x}(s)\right) =\tau\cdot q_{x}(s)+Q_{x}\cdot 2\cdot\left(p_{r_{0}}^{x}(s+\tau)-p_{r_{0}}^ {x}(s)\right), \tag{1.14}\] \[\tau\cdot 2\cdot K_{y}\cdot\left(p_{r_{0}}^{y}(s)-p_{1}^{y}(s)\right) =\tau\cdot q_{y}(s)+Q_{y}\cdot 2\cdot\left(p_{r_{0}}^{y}(s+\tau)-p_{r_{0} }^{y}(s)\right). \tag{1.15}\]
If one will assume that \((p_{r_{0}}^{y}(s)-p_{1}^{y}(s)=0\), \(\left(p_{r_{0}}^{y}(s+\tau)-p_{r_{0}}^{y}(s)\right),\text{and},q_{y}(s)=0\) then we will get a precursor for 1-D MB which in the case of symmetry in \(x\) - direction will take a form
\[\boxed{\tau\cdot 2\cdot K_{x}\cdot\left(p_{r_{0}}^{x}(s)-p_{1}^{x}(s)\right) =\tau\cdot q_{x}(s)+Q_{x}\cdot 2\cdot\left(p_{r_{0}}^{x}(s+\tau)-p_{r_{0} }^{x}(s)\right).} \tag{1.16}\]
As a precursor for 2-D MB which in case of symmetry and isotropy we will assume that \(p_{r_{0}}=p_{r_{0}}^{x}=p_{r_{0}}^{y},\ \cdots\) and anisotropy: \(K_{x}=K_{y}\) letting \(q(s)=q_{x}(s)+q_{y}(s)\) and \(Q(s)=Q_{x}(s)+Q_{y}(s)\) we will take a form
\[\boxed{\tau\cdot 4\cdot K\cdot\left(p_{r_{0}}(s)-p_{1}(s)\right)=\tau\cdot q (s)+Q(s)\cdot 4\left(p_{r_{0}}(s+\tau)-p_{r_{0}}(s)\right).} \tag{1.17}\]
In general setting algebraic variable(letter) of interest \(p_{i}^{x,y,\cdots}\), \(i=0,1,2,\cdots\) may depend on parameter, and it is common in algebraic geometry structure. Assume that \(i=0,1\) then using above arguments, we are considering Algebraic Parametric Structure as a sewing machinery between numerical and "analytical" solutions :
\[\tau\cdot\left(J_{1,0}^{p}\cdot\left(p_{0}(s)-p_{1}(s)\right)-I_{q}\cdot q(s) \right)=L_{q}^{p_{0}}\cdot\left(p_{0}(s+\tau)-p_{0}(s)\right). \tag{1.18}\]
Value of the coefficients and their dependence on input parameters can vary depending on the intended applications, dimension, geometry and dynamics of the process, discretization etc.
**Remark 2**.: _In a view of the algebraic structure equation of intended application \(p_{i}(s)\) are dependant variables, (1.18) \(q(s)\) is main function defining process, and three others coefficients \(J_{1,0}^{p}\), \(I_{q},\) and \(L_{q}^{p_{0}}\) contains essential characteristics of the algebraic and geometrical structure of the media of the flow and its discretization by domain \(\Omega_{N}\).We will choose this coefficients in next paragraph._
## 2. Motivation for Material Balance Equations and Application to Numerical Scheme
Consider flow in the reservoir \(\Omega\), and corresponding mathematical model. Numerical simulator of the flow provide three basic information
1. Geometric approximation of the domain of multi-component and multi-phase flow
2. Numerical Value of the functions of interest in each block
3. Value of the parameters characterising domain w.r.t. chemical and physical of the fluids and media
To motivate the algebraic structure of MB (1.18) equation, consider orthogonal grid of dimension \(M\times N\) and size \(\Delta_{x}\) and \(\Delta_{y}.\) Let \(P_{(M,N)}\) be \(M\times N\) matrix of the pressure value, with an elements \(p_{i,j}(t),\) which associate block \(B_{i,j}.\) Assume that block \(B_{0,0}\) contains source at center \((0,0),\) which generate differences in the function \(p_{i,j}(t).\) Here \(D=\Omega\times(0.h)\) is 3-dimensional cylinder and there non flow in \(z\) direction. Assume Green type function \(p(x,y,t)\) be a solution of the the basic modeling problem
\[L\cdot\frac{\partial p(x,y,t)}{\partial t}-J\cdot\left(\frac{ \partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right)p=I \cdot\delta(x,y)\ \ \mbox{in}\ \ \ (\Omega\setminus(0,0))\times(-\infty,\infty)\, \tag{2.1}\] \[B(p)=0\ \mbox{on}\ \partial\Omega\times(-\infty,\infty). \tag{2.2}\]
Here \(B\) to be boundary operator, which in our case will be Dirichlet, or Newman operator. To approximate function \(p(x,y,t)\) consider finite different solution of the problem in rectangular domain
\[L\cdot\frac{p_{i,i}(t+\tau)-p_{i,i}(t)}{\tau}- \tag{2.3}\] \[J\cdot\left(\frac{p_{i-1,j}(t)-2p_{i,j}(t)+p_{i+1,j}(t)}{\Delta _{x}^{2}}+\frac{p_{i,j-1}(t)-2p_{i,j}(t)+p_{i,j+1}(t)}{\Delta_{y}^{2}}\right)=\] \[I\cdot\frac{\delta_{i,j}}{\Delta_{x}\cdot\Delta_{y}\cdot h}\ \ \mbox{in}\ \ \Omega_{N}\setminus(0,0)\,\] \[B(p)=0\ \mbox{on}\ \partial\Omega_{N}\times(-\infty,\infty) \tag{2.4}\]
or
\[L\cdot(\Delta_{x}\cdot\Delta_{y}\cdot h)\cdot(p_{i,j}(t+\tau)-p_ {i,j}(t))= \tag{2.5}\] \[\tau\cdot\left[Jh\left(\frac{\Delta_{y}}{\Delta_{x}}(p_{i-1,j}(t) -2p_{i,j}(t)+p_{i+1,j}(t))+\frac{\Delta_{x}}{\Delta_{y}}(p_{i,j-1}(t)-2p_{i,j}( t)+p_{i,j+1}(t))\right)+I\delta_{i,j}\right],\] \[B(p)=0\ \mbox{on}\ \partial\Omega_{N}\times(-\infty,\infty).\]
Here \(\delta_{i,j}\) is Kronecker symbol. Equation above is basic and can be applied in \(1-D\) and \(2-D\) cases, although has in both cases many similarities but it differ. Namely:
1. 1-D Material Balance in "last blocks" \(B_{0},B_{1}\) Under assumption 1-D Symmetry let \(\Delta_{y}=const\), and \(h=const\) for all \(\Delta=\Delta_{x}\) then MB takes form \[L\cdot\Delta\cdot\Delta_{y}h\left(p_{0}(t+\tau)-p_{0}(t)\right)=\] (2.6) \[\tau\left(2\cdot J\Delta_{y}\cdot h\cdot\frac{\left(p_{1}(t)-p_{0}(t) \right)}{\Delta}+I\delta_{0,0}\right)=\tau\left(2\cdot\left(J\cdot\left(\Delta _{y}\cdot h\right)\right)\cdot\frac{\left(p_{1}(t)-p_{0}(t)\right)}{\Delta}+q \delta_{0,0}\right),\]
2. Radial Material Balance Equation in "last blocks" \(B_{0},B_{pm1,0},B_{0,pm1}\) Under assumption of \(2-D\) symmetry if one will will let that \[\boxed{\Delta=\Delta_{x}=\Delta_{y}}.\] (2.7) We will also assume that thickness of the reservoir is constant and \[\boxed{I=q}\] (2.8) Then equation (2.5) can be simplified as \[L\cdot\Delta^{2}\cdot h\left(p_{0}(t+\tau)-p_{0}(t)\right)=\] (2.9) \[\tau\left(4\cdot J\cdot h\cdot\left(p_{1}(t)-p_{0}(t)\right)+I \delta_{i,j}\right)=\tau\left(4\cdot\left(J\cdot h\right)\cdot\left(p_{1}(t)- p_{0}(t)\right)+q\delta_{i,j}\right),\]
For convenience let summarize comments on properties of the media w.r.t. the flow as itemized remarks
**Remark 3**.: _Physical consideration_
1. _In above_ \(q\) _is total rate on the well(well production), which is time dependant_ \((q=q(s))\) _for PSS and BD regimes._
2. _In this article for 2-D flows we assume isotropy and symmetry flows:_ \[\boxed{K=K_{x}^{-}=K_{x}^{+}=K_{y}^{-}=K_{y}^{+}\ ;\ q_{x}^{-}=q_{x}^{+}=q_{y}^{-}=q_{y}^{+}\ ;\ Q_{x}^{-}=Q_{x}^{+}=Q_{y}^{-}=Q_{y}^{+}.}\] (2.10) _(iii) In this article for 1-D flows we assume isotropy and symmetry flows:_ \[\boxed{K=K_{x}^{-}=K_{x}^{+}\ ;\ q=q_{x}^{-}=q_{x}^{+}\ q_{y}^{-}=q_{y}^{+}=0 \ ;Q=Q_{x}^{-}=Q_{x}^{+}\ \ Q_{y}^{-}=Q_{y}^{+}=0.}\] (2.11)
_._
4. _All above assumptions was made enable use analytical solution, which can be constructed explicitly. In case when analytical solution is unavailable in the gluing machinery one can use the numerical solution on fine scale for the corresponding IBVP._
5. _MB equation "does not see boundary_ \(\Omega_{N}\) _and used to glue analytical solution by solving Peaceman problem. But analytical solution will take into account impact of boundary condition on value of Peaceman Radius. We will see that in linear case_ \(R_{0}\) _will depend on size of the domain only in time dependent problem. Once more_ \(R_{0}\) _as it was shown by Peaceman will be independent on size of the domain, and this is quite remarkable finding by Peaceman. This issue was in the detail discussed in our article_ _[_9_]___
### 1-D Material Balance
Now consider Grid given in Fig. 1 with no flow condition in \(y\) direction. Assume \(h\cdot\Delta_{y}=1\) and let \(\Delta_{x}=\Delta\). Then \(1-D\) approximation of the Grid as in Figure 2'. Corresponding MB on 1-D grid in \(y\) dirrection is considered to be "trivial"
Let in material balance equation (1.18)
\[I_{q}(s)=q\cdot\frac{1}{h\cdot\Delta_{y}\cdot\Delta_{x}}, \tag{2.12}\] \[J_{1,0}^{p}=2K\cdot\frac{1}{\Delta_{x}^{2}},\] (2.13) \[L_{q}^{p_{0}}=\phi\cdot C_{p}. \tag{2.14}\]
In above we let
\[\Delta_{y}\cdot h=1,\ C^{0}=\phi\cdot C_{p},q(s)=q_{x}(s). \tag{2.15}\] \[\Delta_{x}=\Delta. \tag{2.16}\]
Then MB equation (1.18) takes form
\[2K\cdot(p_{0}(s)-p_{1}(s))=-q\cdot\Delta+C^{0}\cdot\frac{p_{0}(s+\tau)-p_{0}( s)}{\tau}\cdot\Delta^{2}. \tag{2.17}\]
Note that if we will use the finite difference(2.9) as a MB one will get
\[L\cdot\Delta\cdot\Delta_{y}h\left(p_{0}(t+\tau)-p_{i,i}(t)\right)= \tag{2.18}\] \[\tau\left(2\cdot(J\cdot(\Delta_{y}\cdot h))\cdot\frac{(p_{1}(t)- p_{0}(t))}{\Delta}+q\delta_{i,j}\right),\] \[B(p)=0\ \text{on}\ \partial\Omega\times(-\infty,\infty),\]
which is equivalent to (2.17) under (2.15),(2.16) if \(J=K\,\text{and}\,L=C^{0}\).
### Linear Steady State Material Balance(Algebraic) \(p\) does not depend on \(s\)
Constrain that \(p_{i}\) for all \(i\)-s are \(s\) independent can be changed to more physical one: One of the multiplier in equation (2.19) is equal zero, namely
\[\phi C_{p}\frac{V_{0}}{V}\cdot\frac{p_{0}(s+\tau)-p_{0}(s)}{\tau}\equiv 0. \tag{2.19}\]
Physically (2.19) it can be stated that flow of fluid or fluid itself is such that if one can assume that
* the time interval of compression of the fixed volume \(V_{0}\) with respect to whole volume \(V\) of flow filtration is too big;
* porosity \(\phi\) is negligible ;
* compressibility \(C_{p}\) is negligible;
* changes of the pressure in the block \(V_{0}\) w.r.t. \(\tau\) is negligible.
**Remark 4**.: _Although we consider fracture \(\frac{V_{0}}{V}=1\) we keep this factor for interpretation in (2.19)for mathematical generality._
**Definition 2**.: _We will say that MB balance is steady state if condition (2.19) holds for all \(s\) and \(\tau\)._
Then symmetrical, isotropic and steady-state steady state 1-D Balance Equation has the form
\[2K\cdot(p_{0}-p_{1})=q\Delta. \tag{2.20}\]
**Remark 5**.: _Note that in steady state MB \(p_{i}\) is parameter(time in our intended application) independent._
To sew value of \(p_{0}\) with pressure trace on the boundary of flow with given rate \(q\) let us consider 1-D flow of non-compressible fluid towards gallery \(x=0.\) The flow is subjected to: (i) linear Darcy equation with fixed, (ii) \(s\) independent pressure \(p=p_{e}\) on the reservoir boundary \(x=r_{e}\), and (iii) production rate \(q\) on the gallery at \(x=0\) as the inner boundary of the flow. Corresponding analytical model for \(1-D\) pressure has a form
\[\frac{d}{dx}\left(K\frac{d}{dx}p_{an}(x)\right)=0; \tag{2.21}\] \[p_{an}(x)\Big{|}_{x=R_{e}}=p_{e};\] (2.22) \[-K\cdot\frac{dp_{an}}{dx}\Big{|}_{x=0}=q. \tag{2.23}\]
**Definition 3**.: _Let \(1-D\) domain \((0;R_{e})\) is split by grid \([0,\Delta,2\cdot\Delta,\cdots N\cdot\Delta]\) were \(N\Delta=r_{e}\). We will say that Peaceman problem is well posed w.r.t. MB (2.20) for \(1-D\) flows if for any given \(\Delta\) exist \(R_{0}\) depending on \(\Delta\) s.t. analytical solution of the 1-D SS problem (2.21)-(2.23) satisfies equation._
\[-2K\cdot(p_{an}(\Delta)-p_{an}(R_{0}))=q\cdot\Delta. \tag{2.24}\]
**Theorem 1**.: _In order Peaceman problem to be well posed w.r.t. MB (2.20) for \(1-D\) flows it is necessary and sufficient that_
\[R_{0}=\frac{\Delta}{2}. \tag{2.25}\]
Proof.: First analytical solution \(p_{an}(x)\) has a form
\[p_{an}(x)=A\cdot x+B,\ A=-\frac{q}{K},\ B=p_{e}+\frac{q}{K}r_{e}. \tag{2.26}\]
Substituting \(p_{an}(x)\) into (2.24)one will get
\[2KA((\Delta)-R_{0}+(B-B))=q\Delta\ \text{or},\ 2(\Delta)=\Delta+2R_{0}\ \text{or},\ \Delta=2R_{0}. \tag{2.27}\]
From above formula (2.25) follows.
**Remark 6**.: _This theorem is elementary but we brought it here to highlight reasoning why Peaceman formula for \(R_{0}\) for SS regime does depend only on size of the block \(\Delta\) but does not depend on size of the domain, conductivity and rate of production. In in-fact follows from mean-value Lagrange theorem, Darcy Law and Divergence Theorem (Conservation law) for not-compressible Fluid._
### 1-D Linear PSS Material Balance(Algebraic) and corresponding \(R_{0}\)
Let define the PPS constrain for the solution of algebraic material balance equation assuming in addition that \(p_{0}\)\(p_{1}\) and \(q\) are conditioned as follows
**Definition 4**.: _We will say that MB is steady state if_
\[q(s)=q\ \text{is}\ s\ \text{independent}. \tag{2.28}\] \[p_{0}(s+\tau)-p_{0}(s)=q\cdot C_{0}\cdot\tau.\ \text{and}\ \ C_{0}\ \text{is}\ s\ \text{independent}. \tag{2.29}\]
From above obliviously follows that difference
\[p_{1}(s)-p_{0}(s)\ \text{is}\ s\ \text{independent}. \tag{2.30}\]
Then linear 1-D PSS Material Balance will have form
\[2K\cdot\left(p_{1}-p_{0}\right)=q\cdot\Delta\left(1-\phi c_{p}\cdot 1\cdot C_{0} \Delta\right)=q\Delta\left(1-C_{1}\Delta\right). \tag{2.31}\]
For simplicity we will let \(C_{1}=1\)
#### 2.3.1. Analytical model for PSS problem \(1-D\)
Analytical model for the PSS regime has a form
\[\frac{\partial}{\partial x}\left(K\frac{\partial}{\partial x}p(x, t)\right)=\frac{\partial p}{\partial t};\text{ on }(0;r_{e})\times(0,\infty) \tag{2.32}\] \[\frac{\partial p}{\partial x}\Big{|}_{x=r_{e}}=0\] (2.33) \[-K\cdot\frac{\partial p_{an}}{\partial x}\Big{|}_{x=0}=q\] (2.34) \[p(x,0)=p_{an}(x) \tag{2.35}\]
In (2.35) \(p_{an}(x)\)is solution BV problem
\[\frac{d}{dx}\left(K\frac{d}{dx}p_{an}(x)\right)=Q=\frac{q}{r_{e}} ;\text{ on }(0;r_{e}) \tag{2.36}\] \[K\cdot\frac{dp_{an}}{dx}\Big{|}_{x=r_{e}}=0\] (2.37) \[-K\cdot\frac{dp_{an}}{dx}\Big{|}_{x=0}=q \tag{2.38}\]
It is evident that analytical solution for PSS IBVP has a form
\[p_{pss}(x,t)=w(x)+A_{0}t,\text{ \ function }w(x)=p_{an}(x). \tag{2.39}\]
were \(A_{0}=\frac{1}{r_{e}}\cdot q\). Solution \(p_{an}(x)\) of BVP (2.36)-(2.38) is denoted as \(w(x)\) for convenience. PSS regime, generate pressure \(p_{ss}(x,t)\) is called pss-pressure.
General solution for \(p_{an}(x)\) has a form
\[p_{an}(x)=w(x)=Ax^{2}+Bx. \tag{2.40}\]
Here
\[B=\frac{q}{K}, \tag{2.41}\]
and is recovered from BC (2.38). Parameter \(A\) consequently from RHS of (2.36) and BC (2.37)
\[A=\frac{B}{2r_{e}}=-\frac{q}{2Kr_{e}}. \tag{2.42}\]
**Remark 7**.: _Note that auxiliary function \(w(x)\) by construction is vanishing on the well at \(x=0\)._
**Definition 5**.: _We will say that Peaceman problem for PSS is well posed w.r.t. time dependent MB (2.17) for \(1-D\) flows if for any given \(\Delta,\) and \(r_{e}\) exist \(R_{0}^{pss}(\Delta,r_{e})\) depending on \(\Delta\) and \(r_{e}\) s.t. analytical solution of the 1-D PSS problem (2.36)-(2.38) satisfies equation and in addition constrains (2.28) and (2.29) hold._
**Theorem 2**.: _Peaceman problem is well posed w.r.t. time dependent MB for \(1-D\), a.e. exists \(R_{0}^{pss}(\Delta,r_{e})\) s.t. \(p_{1}(t)=p(\Delta,t)\), \(p_{0}(t)=p(R_{0},t)\) and \(q\) s.t. \(p(x,t)\) and \(q\) a subject to time dependent MB equation and both constrains in the definition 4. Moreover the following limiting result follows_
\[\boxed{\lim_{r_{e}\to\infty}R_{0}^{pss}(\Delta,r_{e})=R_{0}.} \tag{2.43}\]
Proof.: Proof follows from straightforward calculations that \(R_{0}^{pss}(\Delta\,r_{e})\) is subject to equation
\[2K\cdot(p_{pss}(\Delta,t)-p_{pss}(R_{0},t))=2K\cdot(w(\Delta)-w(R_{0}))=q \Delta\left(1-\phi c_{p}\cdot 1\cdot C_{2}\frac{\Delta}{r_{e}}\right). \tag{2.44}\]
From explicit presentation for \(w(x)\) follows that \(R_{0}\) should be subject for equation
\[\frac{\Delta^{2}}{2r_{e}}+\Delta-\frac{(R_{0}^{pss})^{2}}{2r_{e}}-R_{0}^{pss} =\frac{\Delta}{2}-C_{3}\frac{\Delta^{2}}{2r_{e}}, \tag{2.45}\]
for constant \(C_{3}\) depending on \(c_{p},\ C_{2}\) only. From above we will get explicitly formula for \(R_{0}^{PSS}:\)
\[(1+C_{3})\frac{\Delta^{2}}{2r_{e}}+\frac{\Delta}{2}=R_{0}^{pss}+\frac{(R_{0}^ {pss})^{2}}{2r_{e}}. \tag{2.46}\]
Assuming in above for simplicity \(C_{3}=0\) one can get
\[\frac{(R_{0}^{pss})^{2}}{1}+2r_{e}\cdot R_{0}^{pss}-\left(r_{e}\cdot\frac{ \Delta}{1}+\frac{\Delta^{2}}{1}\right)=0. \tag{2.47}\]
From here follows that positive branch of the root satisfies chain of the equations
\[R_{0}^{pss}=\frac{-2r_{e}+\sqrt{4r_{e}^{2}+4\Delta r_{e}+8\Delta^{2}}}{2}= \frac{\Delta}{1+\sqrt{1+\frac{\Delta}{r_{e}}+2\frac{\Delta^{2}}{r_{e}^{2}}}}+ \frac{2\Delta^{2}}{r_{e}}. \tag{2.48}\]
From above chain of equations one can get statement of the Theorem 2.
The following theorem follows straight forward from implicit differentiation of (2.45)
**Theorem 3**.: _Let \(\Delta\) and all parameters bur \(r_{e}\) of the problem are fixed. Then for \(r_{e}\) big enough \(R_{0}^{pss}\) as a function of \(r_{e}\) decreasing._
### 1-D linear MB-constrains for \(Bd\) regime and corresponding well box radius \(R_{0}\)
MB for BD regime can be stated in form of the definition for convenience in terms of keys input constrains on \(q(s),\ p_{i}(s)\) and \(p_{0}(s+\tau)\).
**Definition 6**.: _Algebraic MB-constrains for boundary dominated is stated as follows. Exist constants \(Q_{0},\ \mathbf{P}_{1}.\mathbf{P}_{0},\) s.t. items below hold varables \(p_{i}\) and \(q\) in MB equation_
1. \[\frac{q(s)}{p_{0}(s)}=Q_{0}(r_{e})\] (2.49) _in above_ \(Q_{0}(r_{e})\) _is_ \(\Delta\) _and_ \(s\) _independent constant_
2. \[\frac{p_{1}(s)}{p_{0}(s)}=\mathbf{P}_{1}(\Delta,r_{e})\] (2.50) _in above_ \(\mathbf{P}_{1}(\Delta,r_{e})\) _is constant depending on_ \(\Delta\) _and_ \(r_{e}\)_, but not_ \(s\)_._
3. \[\frac{p_{0}(s+\tau)}{p_{0}(s)}=\mathbf{P}_{0}(\Delta,r_{e})\frac{e^{-C(K,r_{e} )\cdot\tau}-1}{\tau}\] (2.51) _in above_ \(\mathbf{P}_{0}(\Delta,r_{e})\) _is constant depending on_ \(\Delta\) _and_ \(r_{e}\)_, but not_ \(s\)_._
#### 2.4.1. Boundary dominated analytical problem
Consider analytical problem
\[\frac{\partial}{\partial x}\left(K\frac{\partial}{\partial x}u_{ 0}(x,t)\right)=c_{0}\cdot\frac{\partial u_{0}(x,t)}{\partial t} \tag{2.52}\] \[u(x,t)\Big{|}_{x=0}=0\] (2.53) \[K\frac{\partial u_{0}(x,t)}{\partial x}\Big{|}_{x=r_{e}}=0\] (2.54) \[u_{0}(x,0)=\phi_{0}(x)\text{ here }\phi_{0}(x)\text{is first eigenfunction.}\] (2.55) \[u_{0}(x,t)\Big{|}_{t=0}=\phi_{0}(x) \tag{2.56}\]
Assuming for simplicity that \(c_{0}=1\) It is not difficult to prove the following
**Proposition 1**.: _Let \(u_{0}(x,t)\) be an analytical solution of IBVP (2.52):_
\[u_{0}(x,t)=e^{-K\lambda_{0}t}\sin(\lambda_{0}x). \tag{2.57}\]
_Define variable in MB equation as_
\[p_{0}(s)=u_{0}(R_{0},s); \tag{2.58}\] \[p_{1}(s)=u_{0}(\Delta,s);\] (2.59) \[q(s)=K\frac{\partial u_{0}}{\partial x}\Big{|}_{x=0}. \tag{2.60}\]
_Then all items in Definition 6 well defined for specific constants \(Q_{0},\ \mathbf{P}_{0},\ \mathbf{P}_{1}\) for any \(R_{0}\) and \(\Delta,\) and_
\[\lambda_{0}=\frac{\pi}{2\cdot r_{e}} \tag{2.61}\]
**Remark 8**.: _Note that all initial Data for all_ **three analytical problems** _are assigned in such way that corresponding productivity index are time independent._
It important to state that existence of the constants is of main interest, and it will addressed below. We brought the Proposition 1 in order to follow frame of the construction and motivate Peaceman well-posedness as
**Definition 7**.: _We will say that Peaceman problem for BD regime is well posed w.r.t. time dependent MB (2.52) for \(1-D\) flows if for any given \(\Delta,\) and \(r_{e}\) exist \(R_{0}^{BD}(\Delta,r_{e})\) depending on \(\Delta\) and \(r_{e}\) s.t. analytical solution of the 1-D BD problem (2.52) satisfies equation and in addition constrains in the definition (6) hold._
**Lemma 1**.: _Assume that \(R_{0}^{bd}<\Delta\) then for Peaceman's well posed for BD regime of the filtration it is sufficient_
\[\sin(\lambda_{0}R_{0}^{bd})-\sin(\lambda_{0}\cdot\Delta)+\frac{\lambda_{0} \Delta}{2}=\sin(\lambda_{0}\cdot R_{0}^{bd})\cdot\frac{1}{2K}\cdot\frac{e^{- \lambda_{0}^{2}\tau}-1}{\tau}, \tag{2.62}\]
_Moreover as \(r_{e}\to\infty,\) and \(\tau\to 0\)\(R_{0}^{bd}(\lambda_{0},\tau)\) converges to Peaceman steady state \(R_{0}.\) Here \(\lambda_{0}\) is the first eigenvalue, and \(R_{0}^{bd}\), which deliver solution to transcendent equation (2.62) defines by value of \(\Delta\), but and addition depend on \(r_{e},\) and \(\tau\ \ K\ \ c_{p}.\)_
Proof.: To proof the Theorem it is suffice write down solution of the problem 2.52 and calculate
\[q(s)=K\frac{\partial u_{0}(x,t)}{\partial x}\Big{|}_{x=0}=e^{-K\lambda_{0}s} \lambda_{0}\cos(\lambda_{0}x)\Big{|}_{x=0}=K\cdot e^{-K\lambda_{0}s}\lambda_{0} \tag{2.63}\]
Here \(\lambda_{0}\) is First eigenvalue.
For connivance let rewrite MB
\[2K\cdot(p_{0}(s)-p_{1}(s))=-q(s)\cdot\Delta+1\cdot\frac{p_{0}(s+\tau)-p_{0}(s )}{\tau}\cdot\Delta \tag{2.64}\]
and let \(R_{0}=R_{0}^{bd}<\Delta\) be unknown then
\[p_{0}(s)=e^{-K\lambda_{0}^{2}s}\sin(\lambda_{0}R_{0}^{bd}), \tag{2.65}\] \[p_{1}(s)=e^{-K\lambda_{0}^{2}s}\sin(\lambda_{0}^{2}\Delta)\] (2.66) \[p_{0}(s+\tau)=e^{-K\lambda_{0}^{2}(s+\tau)}\sin(\lambda_{0}R_{0 }^{bd}),\] (2.67) \[q(s)=K\cdot e^{-K\lambda_{0}^{2}s}\lambda_{0} \tag{2.68}\]
Let \(\lambda=\lambda_{0}\) be the first eigenvalue. Using (2.65)-(2.68) MB (2.64) takes form
\[2K\cdot e^{-\lambda^{2}s}\left(\sin(\lambda R_{0}^{bd}-\sin(\lambda\Delta) \right)=-K\lambda\cdot\Delta e^{-\lambda^{2}s}+1\cdot\sin(\lambda R_{0}^{bd}) \cdot\frac{e^{-\lambda^{2}(s+\tau)}-e^{-\lambda^{2}s}}{\tau}\cdot\Delta. \tag{2.69}\]
After some factoring by\(e^{-\lambda^{2}s}\) equation (2.69) takes form
\[2K\cdot\left(\sin(\lambda R_{0}^{bd}-\sin(\lambda\Delta)\right)=-K\lambda\cdot \Delta+1\cdot 1\cdot\sin(\lambda R_{0}^{bd})\cdot\frac{e^{-\lambda^{2}\cdot\tau}-1}{\tau}\cdot\Delta \tag{2.70}\]
Dividing (2.70) by \(2K\) we will get (2.62) in the main Theorem 1.
In order analytical to satisfy MB equation it is sufficient to assume that \(R_{0}^{bd}(\Delta,r_{e},\tau)\) solve equation (2.62). One can simplify above equation (2.70). Namely
\[2K\cdot 2\left(\sin\frac{\lambda\cdot(R_{0}^{bd}-\Delta)}{2}\cos\frac{ \lambda(R_{0}^{bd}+\Delta)}{2}\right)=-K\lambda\cdot\Delta+1\cdot\sin(\lambda R _{0}^{bd})\cdot\lambda^{2}\cdot\frac{e^{-\lambda^{2}\cdot\tau}-1}{\lambda^{2 }\tau}\cdot\Delta. \tag{2.71}\]
As \(\tau\to 0\) from (2.71) on can have
\[4K\cdot\left(\sin\frac{\lambda\cdot(R_{0}^{bd}-\Delta)}{2}\cos\frac{\lambda( R_{0}^{bd}+\Delta)}{2}\right)=-K\lambda\cdot\Delta+1\cdot\sin(\lambda R_{0}^{ bd})\cdot\lambda^{2}\cdot(-1)\cdot\Delta. \tag{2.72}\]
Under assumption that \(\lambda\) to be such that \(\cos\frac{\lambda(R_{0}^{bd}+\Delta)}{2}\approx 1\) Last equation provide the compact approximation for \(R_{0}^{bd}\) computation is small enough:
\[R_{0}^{bd}\approx\frac{\Delta}{2}-\frac{1}{2K}\lambda^{2}R_{0}^{bd}\Delta \tag{2.73}\]
or equivalently
\[R_{0}^{bd}\approx\frac{\frac{\Delta}{2}}{1+\frac{1}{2K}\lambda^{2}\Delta}. \tag{2.74}\]
Finally taking into account explicit formula for \(\lambda=\frac{\pi}{2r_{e}}\) and assuming that \(\frac{\Delta}{8K}\frac{\pi^{2}}{r_{e}^{2}}\)is small enough we will get the approximate formula for \(R_{0}^{bd}\)
\[\boxed{R_{0}^{bd}\approx\frac{\frac{\Delta}{2}}{1+\frac{1}{8K}\frac{\pi^{2}}{ r_{e}^{2}}\Delta}} \tag{2.75}\]
**Remark 9**.: _It is evident that for small enough \(\tau\), small \(\lambda\Delta\) and small fraction \(\frac{\phi c_{p}}{2K}\) the appropriate \(R_{0}^{BD}\) which solves equation (2.70) exists._
Using Lagrange mean value result similar to aove theorem to conclude if reservoir is unbounded then our obtained Peaceman well block radius is the same as in steady state (Classical) case. We state formulation and proof of the the following theorem for future more generic implementation in a view of Landis's multidimensional mean-value theorem [1].
**Theorem 4**.: _Well block radius \(R_{0}^{bd}(r_{e},\Delta)\) which deliver Peaceman well posedness asymptotically converges \(\frac{\Delta}{2}=R_{0}\) as \(r_{e}\to\infty\)for any fixed \(\tau\)._
Proof.: First observe that from Lagrange mean value theorem follows that
\[\cos\xi\cdot\lambda\left(R_{0}^{bd}-\Delta\right)+\frac{\lambda\Delta}{2}=\sin( \lambda\cdot R_{0}^{bd})\cdot\frac{1}{2K}\cdot\frac{e^{-\lambda^{2}\tau}-1}{\tau}, \tag{2.76}\]
were
\[\lambda R_{0}^{bd}<\xi<\lambda\Delta,\text{and consequently }\cos\xi=1+O( \lambda). \tag{2.77}\]
After division by \(\lambda\) in (2.76) we will get
\[\cos\xi\cdot\left(R_{0}^{bd}-\Delta\right)+\frac{\Delta}{2}=\lambda\cdot\sin( \lambda\cdot R_{0}^{bd})\cdot\frac{1}{2K}\cdot\frac{e^{-\lambda^{2}\tau}-1}{ \lambda^{2}\tau}, \tag{2.78}\]
or assuming as before that \(\lambda\) to be such that \(\cos\xi\approx 1\)
\[R_{0}^{bd}-\frac{\Delta}{2}=\lambda\cdot\sin(\lambda\cdot R_{0}^{bd})\cdot \frac{1}{2K}\cdot\frac{e^{-\lambda^{2}\tau}-1}{\lambda^{2}\tau}+O(\lambda). \tag{2.79}\]
Evidently in RHS of (2.80) term \(-1\cdot\frac{e^{-\lambda^{2}\tau}-1}{\lambda^{2}\tau}=O(1)\) therefore statement of theorem follows (2.61) and equation (2.80) below.
\[R_{0}^{bd}-\frac{\Delta}{2}=O(\lambda). \tag{2.80}\]
## 3. Pseudo Steady-State Material Balance
Consider flow toward well \(\Gamma_{w}\) in isolated reservoir \(V\) of height \(h=1\), and \(\phi\cdot c_{p}=1\). Material balance equation for transient flow for slightly compressible fluid in a block with dimensions \(\Delta\times\Delta\cdot 1\), volume \(V_{0}=\Delta^{2}\cdot 1\) and pressure \(p_{0}\) containing a well (source/sink) with flow rate \(q\) (positive for source and negative for sink):
\[-4K\cdot\left(p_{0}(s)-p_{1}(s)\right)+\frac{q}{h}=\Delta^{2}\cdot 1\cdot \frac{1}{\tau}\left(p_{0}(s+\tau)-p_{0}(s)\right), \tag{3.1}\]
Let the reservoir domain \(U\) with volume V, boundary \(\partial U=\Gamma_{e}\cup\Gamma_{w}\) and thickness \(h\).
**Assumption 1**.: _Assume PSS constrain for slightly compressible fluid of compressiblity \(c_{p}\)._
1. \[\left(p_{0}(s+\tau)-p_{0}(s)\right)=q\cdot\frac{\tau}{1\cdot V},\] (3.2)
Figure 1. Einstein Mat Balance for 1-D Flow
_
2. \[\text{difference}\ :\ p_{0}(s)-p_{1}(s)=constant\left(s\ \ \text{ independent}\right).\] (3.3)
_Under above constrain in (3.2) in the assumption (5) will take a form_
\[4K\cdot(p_{0}(s)-p_{1}(s))=\frac{q}{1}\cdot\left(1-\frac{\Delta^{2}}{V}\right), \tag{3.4}\]
_where \(q\) is given constant in time rate during given(fixed) time \(\tau\), is considered to be the same for any time step \(s\)._
**Remark 10**.: _Note that \(p_{i}(s)\) depend on parameter (time in our application) \(s\), but difference in PSS MB does not depend on \(s\). It is remarkable difference in compare to Steady State regime._
Consider transient 2-D radial flow in the isolated the annual domain \(U\)for slightly compressible fluid towards well \(\Gamma_{w}\) with given production rate and non-flow condition on the radius \(\Gamma_{\varepsilon}\):
Figure 2. Einstein Mat balance equation on the 5 spots grid, 2-D case
\[K\cdot\Delta p=1\cdot\frac{\partial p}{\partial t}\text{ in }U=U(0,r_{w},r_{e}); \tag{3.5}\] \[K\cdot\frac{\partial p}{\partial\nu}=0\text{ on }\Gamma_{e},\ \ r=r_{e}\ ;\] (3.6) \[K\cdot\int_{\Gamma_{w}}\frac{\partial p}{\partial\nu}ds=-\tilde{ q}\text{ on }\Gamma_{w}\,\ r=r_{w}. \tag{3.7}\]
Here
\[U(0,r_{w},r_{e})=\{x:r_{w}<|x|<r_{e}\}\,\Gamma_{w}=\{x:|x|=r_{w}\,\Gamma_{e}= \{x:|x|=r_{e}\}\ \ x=(x_{1},x_{2}),\]
and
\[\tilde{q}=\frac{q}{1},V=1\cdot|U|\,\ K=\frac{k}{\mu},\ \frac{\partial p}{ \partial\nu}\text{ external derivative in co-normal direction}\.\]
For generic case in order to deal with problem it is natural to consider the mixed boundary value problem for elliptic equation which is well-defined from mathematical point of view the following approach, and can be generalised for different scenarios.
In order \(R_{0}\) to be time independent we will split approach for IBVP. Namely consider PSS solution of the above problem (4.5)-(4.7) defined as follows:
\[p_{pss}(x,t)=w(x)+At, \tag{3.8}\]
In above
\[A=\frac{\tilde{q}}{1\cdot|U|}, \tag{3.9}\]
and \(w(x)\) is solution of steady state problem:
\[\nabla\cdot(K\nabla w(x))=\frac{\tilde{q}}{|U|}\text{ in }U, \tag{3.10}\] \[w(x)=0\text{ on }\Gamma_{w}\,\] (3.11) \[K\frac{\partial w}{\partial\vec{\nu}}=0\text{ on }\Gamma_{e}. \tag{3.12}\]
In radial axial-symmetrical(radial flow) cases pseudo-steady state \(p_{pss}(r,t)\) will take a form
\[p_{pss}(r,t)=w(r)+At \tag{3.13}\]
Using representation for \(p_{pss}(r,t)\) It is not difficult to prove
**Theorem 5**.: _In order \(\left[R_{0}^{PSS}(r_{e},\Delta)\right]\) - Peaceman radius for PSS problem it is sufficient to find \(\left[R_{0}^{PSS}(r_{e},\Delta)\right]\) s.t._
\[4K\cdot(w(\Delta)-w([R_{0}^{pss}(r_{e},\Delta)]))=-\frac{q}{1\cdot V}\left(V- \Delta^{2}\right). \tag{3.14}\]
From explicit form of the solution of the problem(3.10)-(3.12) one can find such \(\left[R_{0}^{pss}(r_{e},\Delta)\right].\) We will use another approach based on the formulation of the problem in term of velocity field. This velocity framework will help in future to work on nonlinear flows and some cases explicitly obtain associated Peaceman well block radius. First let us start with
**Remark 11**.: _In generic set \(U\) be domain with split boundary \(\partial U=\Gamma_{e}\cup\Gamma_{w},\) were_
\[\Gamma_{e}\cap\Gamma_{w}=\emptyset,\]
_and \(\Gamma_{e},\,\Gamma_{w}\) are compacts._
we will say that velocity field \(v(x)\) has PSS profile if following assumption holds
**Assumption 2**.: _We will say that velocity field subject for PSS regime of flows if velocity is time-independent and the solves the following BVP_
\[\nabla\cdot\vec{v}=C\ \mathrm{in}\ U \tag{3.15}\] \[\vec{v}\cdot\vec{\nu}=0\ \mathrm{on}\ \Gamma_{e}. \tag{3.16}\]
_Note that due to divergence theorem constant \(C\) form (3.15) satisfies_
\[\int_{\Gamma_{w}}\vec{v}\cdot\vec{\nu}ds=\tilde{q}=C\cdot|U| \tag{3.17}\]
**Remark 12**.: _Note that in our original research PSS regimes [3] was defined in term of pressure function. Namely we assumed that flow is PSS if \(\frac{\partial p}{\partial t}=constant,\) and no flow condition on exterior boundary. In our intended application both definitions are equivalent to each other._
**Theorem 6**.: _Exists solution of Peaceman problem for time dependant PSS regime of the production, and corresponding \(R_{0}^{PSS}(r_{e},\Delta)\) defined by the equation_
\[-\pi+\frac{\left[R_{0}^{PSS}(r_{e},\Delta)\right]^{2}}{r_{e}^{2}}+\pi\frac{r_ {w}^{2}}{r_{e}^{2}}=-2\cdot\left(\ln\frac{\Delta}{\left[R_{0}^{PSS}(r_{e}, \Delta)\right]}\right). \tag{3.18}\]
_Moreover_
\[\lim_{r_{e}\rightarrow\infty}\left[R_{0}^{PSS}(r_{e},\Delta)\right]=R_{0}^{SS }=R_{Peaceman} \tag{3.19}\]
Proof.: In general vector field solution of the BVP (3.15)-(3.16) is not unique, but in radial case is well defined :
\[-\frac{1}{r}(rv(r))_{r}=C=\frac{\tilde{q}}{|U|}\, \tag{3.20}\] \[\vec{v}\cdot\vec{r}\big{|}_{r=r_{e}}=0 \tag{3.21}\]
Then for \(r_{w}\leq r\leq r_{e}\):
\[v(r)=-\frac{C}{2}\cdot r+\frac{C_{1}}{r}. \tag{3.22}\]
In above due to BC constants can be selected as:
\[C=\frac{q}{\pi(r_{e}^{2}-r_{w}^{2})}=\frac{q}{|U|},\text{ and }C_{1}=\frac{C}{2}r_{e}^{2}. \tag{3.23}\]
Then due to Darcy equation - PSS solution \(w(r)=-\frac{\mu}{k}\int v(r)dr\) and consequently
\[w(r)=-K^{-1}\left[-\frac{C}{4}\cdot r^{2}+C_{1}\ln r+C_{2}\right]. \tag{3.24}\]
In above constant \(C_{2}\) have chosen s.t. \(w(r)|_{r=r_{w}}=0\) and therefore :
\[C_{2}=\left[\frac{C}{4}\cdot r_{w}^{2}-C_{1}\ln r_{w}\right]. \tag{3.25}\]
Consequently expression for pressure function to choose \(R_{0}^{PSS}(r_{e},\Delta)\) using Theorem 5 in case of PSS regime.
\[-\tilde{q}\cdot\left(1-\frac{\Delta^{2}}{V}\right)=4K\cdot(p_{1}-p_{0})=4K \cdot[w(\Delta)-w(R_{0})] \tag{3.26}\]
Then due to (3.24) and (3.23) one has
\[-\tilde{q}\cdot\left(1-\frac{\Delta^{2}}{|U|}\right)=4\cdot KK^ {-1}. \tag{3.27}\] \[\cdot\left[\left(C\cdot\frac{\Delta^{2}}{4}-C_{1}\ln(\Delta)-C_{2 }\right)-\left(C\cdot\frac{\left[R_{0}^{PSS}(r_{e},\Delta)\right]^{2}}{4}-C_{ 1}\ln(\left[R_{0}^{PSS}(r_{e},\Delta)\right])-C_{2}\right)\right]\] \[=\left[C\cdot\left(\Delta^{2}-R_{0}^{2}\right)-4\cdot\left(C_{1} \ln\frac{\Delta}{\left[R_{0}^{PSS}(r_{e},\Delta)\right]}\right)\right]=\] \[C\cdot\left[\left(\Delta^{2}-\left[R_{0}^{PSS}(r_{e},\Delta) \right]^{2}\right)-2\cdot\left(r_{e}^{2}\ln\frac{\Delta}{\left[R_{0}^{PSS}(r_ {e},\Delta)\right]}\right)\right].\]
After simplification one can get
\[\Delta^{2}-|U|=\left(\Delta^{2}-\left[R_{0}^{PSS}(r_{e},\Delta)\right]^{2} \right)-2\cdot\left(r_{e}^{2}\ln\frac{\Delta}{\left[R_{0}^{PSS}(r_{e},\Delta )\right]}\right), \tag{3.28}\]
or
\[\left[R_{0}^{PSS}(r_{e},\Delta)\right]^{2}-\pi\left(r_{e}^{2}-r_{w}^{2}\right) =\left[R_{0}^{PSS}(r_{e},\Delta)\right]^{2}-|U|=-2\cdot\left(r_{e}^{2}\ln\frac {\Delta}{\left[R_{0}^{PSS}(r_{e},\Delta)\right]}\right). \tag{3.29}\]
From later main equation (3.18) follows. Second statement of the theorem follows follows from (3.18).
From Theorem 6 formula (3.18) follows monotonicity property of Peaceman PSS radius \(R_{0}^{PSS}(r_{e},\Delta)\) w.r.t. external radius \(r_{e}\)
**Theorem 7**.: _Under condition of applicability of our formulae for PSS problem - \(r_{e}\geq R_{0}^{PSS}(r_{e},\Delta)\) function \(R_{0}^{PSS}(r_{e},\Delta)\)-monotonically decreasing._
Proof.: Taking derivative of left and right hand side of the equation (3.18) one can get
\[2\frac{\left[r_{e}^{2}-\left(R_{0}^{PSS}(r_{e},\Delta)\right)^{2}\right]}{R_{ 0}^{PSS}(r_{e},\Delta)\cdot r_{e}^{2}}\cdot\frac{d}{dr_{e}}R_{0}^{PSS}(r_{e}, \Delta)=-2\left[\frac{\left(R_{0}^{PSS}(r_{e},\Delta)\right)^{2}}{r_{e}^{3}}+ \pi\frac{r_{w}^{2}}{r_{e}^{3}}\right]<0 \tag{3.30}\]
But in a view of applicability of the framework bracket \(\left[r_{e}^{2}-\left(R_{0}^{PSS}(r_{e},\Delta)^{2}\right)\right]>0\), therefor statement of the theorem follows from (3.30).
**Remark 13**.: _It is evident that as \(r_{e}>>R_{0}\) one can get good approximation using Peaceman well block radius_
\[\frac{\pi}{2}\approx\left(\ln\frac{\Delta}{R_{0}}\right). \tag{3.31}\]
## 4. Linear Boundary Dominate Material Balance
Linear Material Balance for slightly compressible fluid the same as before, and flow toward well \(\Gamma_{w}\) in isolated reservoir \(V\) of height \(h=1\), and \(\phi\cdot c_{p}=1\).
Assume boundary dominated (BD) constrain for slightly compressible fluid the same as for PSS. Namely
\[-4K\cdot\left(p_{0}(s)-p_{1}(s)\right)+\frac{q(s)}{h}=1\cdot\frac{V_{0}}{1} \cdot\frac{1}{\tau}\left(p_{0}(s+\tau)-p_{0}(s)\right), \tag{4.1}\]
Once again let \(V\) be the volume of the reservoir domain \(U\) with boundary \(\partial U=\Gamma_{e}\cup\Gamma_{w}\). In order Peaceman radius for boundary dominated regime to be time independent assume the following
**Assumption 3**.: _Assume that: A-1._
\[\frac{q(s)}{p_{1}(s)}=C_{1} \tag{4.2}\]
_Constant \(C_{1}\) is \(s\) independent A-2._
\[\frac{p_{0}(s)}{p_{1}(s)}=C_{2} \tag{4.3}\]
_Constant \(C_{2}\) is \(s\) independent_
_A-3_.
\[\tau^{-1}\cdot\left(\frac{p_{0}(s+\tau)}{\cdot p_{0}(s)}-1\right)\approx C_{3}\,\ \text{for}\ \ \tau<<1. \tag{4.4}\]
_Constant \(C_{3}\) is \(s\) and \(\tau\) independent._
BD IBVP is defined as
\[K\cdot\Delta p=c_{p}\phi\frac{\partial p}{\partial t}\ \text{in}\ U=U(0,r_{e},r_{w})=B(0,r_{e})\setminus B(0,r_{w}); \tag{4.5}\] \[K\cdot\frac{\partial p}{\partial\nu}=0\ \text{on}\ \Gamma_{e},\ \ r=r_{e}\ ;\] (4.6) \[p(x)=p_{w}\ \text{on}\ \Gamma_{w}\,\ r=r_{w}. \tag{4.7}\]
For simplicity we assume that \(p_{w}=0\).
Once more assuming that radial flow towards well of radius \(r_{w}\), let base solution of the problem above to be in the form of
\[p(x,t)=u_{0}(x,t)=e^{-\lambda_{0}\cdot t\cdot\frac{K}{t}}\varphi_{0}(x). \tag{4.8}\]
Here \(\varphi_{0}(x)\) first eigenfunction and first \(\lambda_{0}\) eigenvalue of the problem in the domain \(U,\) with split boundaries \(\partial U=\Gamma_{w}\cup\Gamma_{e}\).
\[-\Delta\varphi_{0}(x)=\lambda_{0}\varphi_{0}(x)\ \text{in}\ \ U; \tag{4.9}\] \[\varphi_{0}(x)=0\ \text{on}\ \Gamma_{w}\,\text{(in radial case when}\ r=r_{w})\,;\] (4.10) \[\frac{\partial\varphi_{0}(x)}{\partial\nu}=0\ \text{on}\ \Gamma_{e}.\ \text{(in radial case when}\ \ r=r_{e}) \tag{4.11}\]
**Remark 14**.: _Motivation to consider this type of solution comes from paper [3], in which we proved that corresponding productivity index is time independent._
First let us check constrains in the 3 w.r.t. analytical solution (4.8)
Indeed it is not difficult to see that all three conditions : \(A_{1}\), \(A_{2}\), and \(A_{3}\) in Assumption 3 are satisfied with
\[C_{1}(\Delta,\!R_{0})=\frac{\varphi_{0}(\Delta)}{\varphi_{0}(R_{0})}\ C_{2}= \Lambda\frac{\int\varphi_{0}dx}{\varphi_{0}(R_{0})}\ C_{3}=\phi\cdot V_{0} \cdot c_{p}\cdot\Lambda \tag{4.12}\]
Finally assuming that \(c_{p}=1\)for given \(\Delta\) one has equation for \(R_{0}^{bd}\) for
\[\frac{\varphi_{0}(\Delta)}{\varphi_{0}(R_{0})}+\lambda_{0}\frac{\int\varphi_{ 0}dx}{\varphi_{0}(R_{0})}=\lambda_{0}\cdot V_{0},\ \text{or}\ \frac{\varphi_{0}(\Delta)}{\Delta^{2}}+\frac{1}{c_{p}}\cdot\frac{\int_{\Gamma _{w}}\frac{\partial\varphi_{0}}{\partial\nu}ds}{\Delta^{2}}=\lambda_{0}\varphi _{0}(R_{0}). \tag{4.13}\]
Function \(\varphi_{0}(r)\) satisfying conditions (4.10)-(4.11) is a solution of the Sturm-Liouville problem for the Helmholtz equation in an annual domain with Dirichlet and Neumann conditions:
\[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\varphi_{0}(r)}{ \partial r}\right)+\lambda_{0}\varphi_{0}(r)=0,\ r_{w}<r<r_{e} \tag{4.14}\]
\[\varphi_{0}(r_{w})=0,\frac{\partial\varphi_{0}(r)}{\partial r}\Big{|}_{r=r_{e} }=0. \tag{4.15}\]
We are interesting in the non-negative solution of the above boundary value problem which has the form (see [10])
\[\varphi_{0}(r)=J_{0}(\sqrt{\lambda_{0}}r_{w})N_{0}(\sqrt{\lambda_{0}}r)-N_{0}( \sqrt{\lambda_{0}}r_{w})J_{0}(\sqrt{\lambda_{0}}r), \tag{4.16}\]
were \(\lambda_{0}\) to be first eigenvalue, which is solution of the transcendent equation:
\[N_{0}(\sqrt{\lambda_{0}}r_{w})J_{0}^{\prime}(\sqrt{\lambda_{0}}r_{e})-N_{0}^{ \prime}(\sqrt{\lambda_{0}}r_{e})J_{0}(\sqrt{\lambda_{0}}r_{w})=0 \tag{4.17}\]
Consequently, the solution of the problem (4.5)-(4.7) has a form
\[u_{0}(r,t)=e^{-\lambda_{0}\cdot t\cdot\frac{K}{1}}[J_{0}(\sqrt{\lambda_{0}}r_ {w})N_{0}(\sqrt{\lambda_{0}}r)-N_{0}(\sqrt{\lambda_{0}}r_{w})J_{0}(\sqrt{ \lambda_{0}}r)] \tag{4.18}\]
One can directly verify all constrains in Assumption 3 by letting
\[p_{1}(s)=u_{0}(\Delta,s)\;,\;p_{0}(s)=u_{0}(R_{0},s)\;\,p_{0}(s+\tau)=u_{0}(R_ {0},s+\tau)\;\,q(s)=-2\pi r_{w}\cdot K\frac{\partial u_{0}(r,s)}{\partial r} \big{|}_{r=r_{w}} \tag{4.19}\]
In this article we will not provide proof of the existence of Peaceman well block radius for boundary dominated regime in radial case, and investigate its property depending on the parameters of the problem. Instead of that we will state the statement in form of remark leaving details for upcoming publication.
**Remark 15**.: _Substitute \(p_{0}(s)\), \(p_{1}(s)\), \(p_{0}(s+\tau)\), and \(q(s)\) into material balance equation (4.1). Then we will get transcended equation for \(R_{0}^{BD}(r_{e},\Delta)\) of the form_
\[\varphi_{0}(R_{0}^{BD}(r_{e},\Delta))-\varphi_{0}(\Delta)=-\frac{2}{\pi}\ln \frac{\Delta}{R_{0}^{BD}(r_{e},\Delta)} \tag{4.20}\]
_Here \(\varphi_{0}(r)\) is the first eigenfunction of the problem (4.14)-(4.15), which defined by equation (4.16)._
|
2310.00341 | Mathematical Model of Dating Apps' Influence on Sexually Transmitted
Diseases Spread | Sexually transmitted diseases (STDs) are a group of pathogens infecting new
hosts through sexual interactions. Due to its social and economic burden,
multiple models have been proposed to study the spreading of pathogens. In
parallel, in the ever-evolving landscape of digital social interactions, the
pervasive utilization of dating apps has become a prominent facet of modern
society. Despite the surge in popularity and the profound impact on
relationship formation, a crucial gap in the literature persists regarding the
potential ramifications of dating apps usage on the dynamics of STDs. In this
paper, we address this gap by presenting a novel mathematical framework - an
extended Susceptible-Infected-Susceptible (SIS) epidemiological model to
elucidate the intricate interplay between dating apps engagement and the
propagation of STDs. Namely, as dating apps are designed to make users revisit
them and have mainly casual sexual interactions with other users, they increase
the number of causal partners, which increases the overall spread of STDS.
Using extensive simulation, based on real-world data, explore the effect of
dating apps adoption and control on the STD spread. We show that an increased
adoption of dating apps can result in an STD outbreak if not handled
appropriately. | Teddy Lazebnik | 2023-09-30T11:15:36Z | http://arxiv.org/abs/2310.00341v3 | # Mathematical Model of Dating Apps' Influence on Sexually Transmitted Diseases Spread
###### Abstract
Sexually transmitted diseases (STDs) are a group of pathogens infecting new hosts through sexual interactions. Due to its social and economic burden, multiple models have been proposed to study the spreading of pathogens. In parallel, in the ever-evolving landscape of digital social interactions, the pervasive utilization of dating apps has become a prominent facet of modern society. Despite the surge in popularity and the profound impact on relationship formation, a crucial gap in the literature persists regarding the potential ramifications of dating apps usage on the dynamics of STDs. In this paper, we address this gap by presenting a novel mathematical framework -- an extended Susceptible-Infected-Susceptible (SIS) epidemiological model to elucidate the intricate interplay between dating apps engagement and the propagation of STDs. Namely, as dating apps are designed to make users revisit them and have mainly casual sexual interactions with other users, they increase the number of causal partners, which increases the overall spread of STDS. Using extensive simulation, based on real-world data, explore the effect of dating apps adoption and control on the STD spread. We show that an increased adoption of dating apps can result in an STD outbreak if not handled appropriately.
**Keywords:** Sexual behavior dynamics; extended SIS model; multi-pathogen epidemic; digital intervention policy; public health.
## 1 Introduction
Sexually transmitted diseases (STDs) are a significant public health challenge, exerting a substantial social and economic burden globally [1, 2, 3, 4]. With an estimated 376 million new infections reported annually, the widespread prevalence of STDs necessitates comprehensive investigations into their transmission dynamics and the factors that contribute to their propagation [5, 6]. In Particular, data from the Centers for Disease Control and Prevention (CDC) in the U.S. illustrates a notable upsurge in newly reported cases of chlamydia, gonorrhea, and syphilis since 2013 [7, 8, 9].
As part of a larger trend of social interactions moving into the digital world [10, 11], the rise of online dating platforms has introduced increased complexity and versatility into the way individuals find life and sexual partners [12, 13]. For instance, recent research has established a correlation between the use of online dating applications and a history of five or more previous sexual partners among young adults [14, 15]. To effectively capture the interplay between sexual network structures, partner formation, and STD transmission, researchers have developed diverse mathematical frameworks [16, 17, 18]. However, existing models often overlook the inherent heterogeneity in individual-level link formation, as they rely on mean-field approximations at the pair level or statistical characterizations of sexual networks [19, 20, 21].
These efforts, however, have predominantly centered on traditional modes of social interaction, overlooking the transformative impact of digital platforms in reshaping interpersonal connections. In the contemporary landscape, dating apps have emerged as a pervasive and influential feature of modern society, revolutionizing the
way individuals initiate and cultivate relationships [22, 23]. The meteoric rise in dating app adoption and usage underscores the need to reevaluate existing disease transmission models. In this work, we introduce a novel mathematical framework based on an extended Susceptible-Infectious-Susceptible (SIS) epidemiological model to investigate the intricate interplay between dating app usage and STD transmission spread dynamics.
The rest of the paper is organized as follows. Section 2 presents an overview of the dating app design, objective, and social impact as well as STD spread dynamics models. In Section 3, we outline the proposed mathematical model constructed from a graph-based spatial model, the influence of dating apps, an extended multi-pathogen SIS model, and an agent-based simulation implementation to allow heterogeneous population dynamics. Next, in Section 4, we describe the experiment design of the proposed model with a realistic configuration followed by the obtained results from the experiments. Finally, Section 5 provides an analysis of the results as well as the strengths and limitations of the proposed model with possible future work.
## 2 Related Work
In order to understand the STD spread dynamics and the role of dating apps in these settings, we overview the current design, objective, and social influence of dating apps on the population followed by a disruption of previous epidemiological models in general and for STDs, in particular.
### Dating apps
As technology evolved, a greater number of dating apps were created to help individuals find their partner, whether it may be sexual or romantic [24]. The proliferation of dating apps has ushered in a new era of interpersonal connectivity, revolutionizing the way individuals form relationships and engage in romantic interactions [25, 26]. Dating apps have witnessed exceptional growth in recent years, with an increasing number of users engaging in diverse forms of interaction facilitated by these platforms [27]. Interestingly, the business objective of these apps is usually counter to their declared marketing for users since apps are financially gaining from users using the app as much as possible while promising to help find someone that would make the users leave the app [28, 29].
While studies about the nature of users' objectives in such dating apps are spread across the "hookup" and meaningful relationship line, all agree about the fact these mobile applications increase the amount of romantic and sexual interactions overall [30, 31, 32]. This fast-paced scenario can fuel an STD spread since the more sexual partners a person has, the higher the likelihood of coming into contact with an infected individual as each new partner represents a potential source of infection, especially if they have multiple partners themselves [33]. Hence, dating apps have garnered interest within the realm of public health research [34, 35]. Notably, the potential links between dating app usage and increased sexual risk behavior have raised concerns regarding STDs transmission dynamics [36]. For example, Miller (2020) [37] surveyed almost a thousand university students who used dating apps in the previous year versus students who did not. The author found that students who used dating apps were statistically more likely to have a greater number of sexual partners during this time but the author was not able to find a statistical increase in STD infection.
Overall, dating apps operate as a tool for an individual in the population to increase their network of possible casual and long-term sexual relationships. This increase can be integrated into current STD spread models to understand the possible role dating apps play in STD spread dynamics.
### STD spread modeling
Mathematical and computational models are key tools for understanding pandemic spread and designing intervention policies that help control a pandemic's spread [38, 39]. In particular, coupled ordinary and partial differential equations, as well as simpler growth-curve equations, are previously used to capture pandemic spread in general [40, 41, 42, 43, 44] and STD diseases spread, in particular [45, 46, 47].
More often than not, the models describing the spread of STDs extend the Susceptible-Infectious-Recovered (SIR) model [48] where each individual in the population is associated with one epidemiological state at a time [49, 50]. Commonly, since different STDs have different recovery and re-infection patterns [51], models also
adopted the SI, SIS, and SIRS types of models [45, 52]. In order to further improve these models' accuracy, many properties such as gender and age are introduced to make the population more heterogeneous [53, 54, 55]. For instance, [56] proposed a SIR-based disease spread model with multiple transmission mechanisms, such as direct contact or vectors, and showed that the model captures the pandemic spread for large population sizes.
In addition, unlike airborne pathogens that infect individuals by close spatial proximity [57, 58, 59], STDs are transmitted via sexual intercourse. Since sex is simply not random [60], most models adopt a graph-based spatial component for the spread dynamics [61, 62, 63]. Regularly. the nodes of the graph representing the individuals in the population while the edges indicate one or more types of interaction between them [64, 65]. For example, [66] proposed a SIR-based model for STD spread on a bipartite random contact network.
In this work, we follow this line of a STD pandemic spread model using an extended SIR model for the temporal component and a graph-based model for the spatial component.
## 3 The Model
The proposed model consists of three interconnected components: a temporal component that describes a multi-pathogen STDs spread in the population; a spatial component that describes the interactions between individuals; and a dating app component that specifies how dating apps influence both the spatial and temporal dynamics. Each of these components, as well as the interactions between them, are detailed below. In addition, we propose an agent-based simulation implementation of this model to allow its _in silico_ investigation.
### Extend multi-pathogen SIS model
In order to capture the spread of a multi-pathogen STDs spread, we based our model on the work of [67]. However, this model is proposed for a generic multi-pathogen pandemic spread dynamics and does not capture four important processes in the context of multi-pathogen STDs spread. First, since many STDs have a significant exposure time [68, 69], an Expose state (\(E\)) is introduced. Second, since individuals can recover from some STDs, and be re-infected later [70, 71], we also introduce an immunity decay and re-infection dynamics to the model. Third, individuals can be infected simultaneously by multiple STDs [72]. Thus, we further extended the model to capture these dynamics. Finally, we remove the recovery (\(R\)) state as individuals do not develop long-term immunity to STDs, in general [73, 74].
Formally, let us define a model that contains a finite population (\(P\)) of size \(n:=|P|\) and their change over finite time \([t_{0},t_{f}]\) such that \(t_{f}>t_{0}\). In addition, let us assume a set of disease-generating pathogens \(D\) such that \(|D|:=k\in\mathbb{N}\). At each point in time, each individual is either susceptible (\(S\)), exposed (\(E\)), infected (\(I\)), or dead (\(D\)) from each of these pathogens. Hence, the epidemiological state of an individual is represented by a tuple \(\eta\in\{s,e,i,d\}^{k}\). For some reason, each individual belongs to a super-position epidemiological state where it is susceptible, exposed, and infected by a set of pathogens, \(s,e,i\subset D\), such that \(s\cap e\cap i=\emptyset\wedge s\cup e\cup i=D\)[67]. One can ignore the dead (\(d\)) state since if a single pathogen caused the death of the individual, the other states \(s,e,\) and \(i\) do not play any role in the individual's overall epidemiological state.
As such, for each state, there are 12 processes that influence the number of individuals in each epidemiological state. First, individuals are born at some rate \(\alpha\). Second, individuals are infected by a pathogen \(j\in D\), becoming exposed to it with infection rate \(\beta\). Third, individuals that are exposed to a pathogen \(j\) become infectious at a rate \(\phi\). Fourth, individuals from the group \((s,e,i)\) are infected by a pathogen \(j\in s\) by animals from the same species, becoming exposed to it with an infection rate \(\beta\). Fifth, individuals from the group \((s,e,i)\) which are exposed to pathogen \(j\in e\) become infectious at a rate \(\phi\). Sixth, for each \(j\in i\) individuals from the group \((s,e,i)\) lose their immunity and become susceptible again to the pathogen \(j\) at a rate \(\psi\). Seventh, individuals from the group \((s,e,i)\) die due to their diseases at a rate \(\mu\). Finally, individuals are naturally dying at a rate \(\upsilon\), independent of the diseases they carry. These dynamics take the ordinary differential equations (ODEs) representation as follows:
\[\begin{split}&\forall s,e,i:\frac{dP_{s,e,i}(t)}{dt}=\sum_{a,b,c|a\cap b \cap c=\emptyset\wedge a\cup b\cup c=D}\alpha_{a,b,c}P_{a,b,c}+\sum_{j\in e} \rho_{s\cup j,e/j,i}^{s,e/j,i\cup j}P_{s\cup j,e/j,i}P_{s,e/j,i\cup j}\\ &+\sum_{j\in i}\phi_{s,e\cup j,i/j}P_{s,e\cup j,i/j}+\sum_{j\in s} \psi_{s/j,e,i\cup j}P_{s/j,e,i\cup j}-\sum_{j\in s}\beta_{s,e,i}^{s,e/j,i\cup j }P_{s,e,i}P_{s,e/j,i\cup j,r}\\ &-\sum_{j\in e}\phi_{s,e,i}P_{s,e,i}-\sum_{j\in i}\mu_{s,e,i}P_{ s,e,i}-\upsilon_{s,e,i}P_{s,e,i}\end{split} \tag{1}\]
A schematic view of the epidemiological states of the model for the case of two pathogens (i.e., \(k=2\)) is shown in Fig. 1 where each box indicates the epidemiological state of the individual represented by the pathogens belongs to each of the \(s,e,i\) sets.
### Graph-based spatial interactions
Following the models proposed by [75] and [76], for the proposed model, we adopted a two-layer graph-based spatial component. Formally, we consider a population of individuals, \(P\), that have two main types of interactions between them which are represented by two different "layers" of the interaction graph. The first layer, \(L_{1}\)
Figure 1: A schematic view of transition between disease stages, shown for \(k=2\). The red arrows indicate that from this stage, the individual might die from the disease. In a similar manner, the orange, black, and green arrows indicate exposure, infection, and recovery with immunity decay, respectively.
represents steady partnerships among the individuals that resulted from socially accepted long-term sexual partnerships. In addition to these interactions, we assume a second type of interaction that corresponds to potential casual partnerships. These interactions become active with a probability, \(\xi\in[0,1]\) when the individuals at both ends of the interactions are simultaneously seeking casual partners, aware of each other, and attracted to each other. This second "layer" of links is denoted by \(L_{2}\). By definition, \(L_{1}\cap L_{2}=\emptyset\). We assume that for each individual \(x\in\mathbb{P}\) in the population, there is a unique distribution function \(\delta_{x}(y)\) that obtains another individual in the population \(y\in\mathbb{P}\) and returns the probability that the individuals \(x\) and \(y\) would have a \(L_{1}\)-type interaction. In realistic social networks, each individual has a relatively small group of individuals with whom s/he has long-term sexual partnerships. In order to capture these dynamics in the infection graph, we assume \(\delta_{x}(y)\) follows a Poisson distribution with mean values \(\rho\in\mathbb{R}^{+}\). In addition, we assume that a \(L_{1}\) and \(L_{2}\) type edges can become \(L_{2}\) and \(L_{1}\) type edges, respectively, with probabilities \(\omega_{1}^{2}\in[0,1]\) and \(\omega_{2}^{1}\in[0,1]\) at each step in time.
In addition, we assume that each individual is either seeking a sexual partner or not at any point in time, \(t\). When an individual seeks a partner, it first updates its \(L_{2}\) layer and choose randomly an individual from it, and establishes a casual partnership. Later, when one of the two individuals is no longer looking for a sexual partner state, the edge between the two nodes is removed. We assume node activation processes are independent Poisson processes [75], where individual \(i\) seeks a sexual partner with rate \(\gamma_{1}^{i}\in\mathbb{R}^{+}\), and if it is seeking for a sexual partner, it goes to the non-seeking state with rate \(\gamma_{2}^{i}\in\mathbb{R}^{+}\). Due to the fact that the inverse of the transition rate is the expected value of transition time, if individual \(i\) seeks a sexual partner, it is expected to stay in this state for a period of time of length \((\gamma_{2}^{i})^{-1}\in\mathbb{R}^{+}\). Moreover, individuals can interact in either protected or unprotected sexual interactions. If at least one of the sides prefers a protected interaction, it would be protected. Fig. 2 shows a schematic view of the interaction graph for a single point in time.
### Dating Apps dynamics
Dating apps allow individuals in the population to meet more individuals than they would be able to by random encounters. More specifically, dating apps increase the rate at which both sides interact when they both seeking sexual partners as both individuals use dating applications only when they seeking sexual partners. Nonetheless, not all matches done in the dating application result in physical interaction [77]. The probability that a match
Figure 2: A schematic schematic view of the interaction graph for a single point in time.
on the dating app would result in a physical interaction depends on multiple factors. That said, one can simplify these into an abstract attractiveness level, \(b\in[0,1]\) which each individual in the population has for any other individual in the population, which results in the population's attractiveness matrix, \(B\in[0,1]^{n\times n}\). For simplicity, we assume that \(B\) is constant over time.
Therefore, in order to further capture the heterogeneity of the population, for each individual we take into consideration its gender (\(g\in\{male,female\}\)) and age (\(a\in\mathbb{N}\)). These factors are used to determine the attractiveness level of an individual for other individuals according to their own gender and age as well as their preferences of gender and age in their sexual partners. We assume that gender and its preferences are constant over time while age and its preferences change identically over time.
On top of that, dating apps shown empirically to be more popular in some social groups and their users' activity is also altering over time. To capture these dynamics, we assume that each individual in the population has a probability, \(d\in[0,1]\), to use a dating app while seeking for sexual partner. Individuals who were successful in finding sexual partners using the dating app are more likely to re-use it by increasing their probability \(d\) by a factor of \(\delta_{s}\in[0,1]\). On the other hand, individuals who were not successful in finding sexual partners in the dating app are more likely to use it less by decreasing their probability \(d\) by a factor of \(\delta_{n}\in[0,1]\).
### Assembling the components into a single framework using the agent-based simulation approach
A powerful approach to implement this multi-component model into one framework is the agent-based simulation [78, 79, 80, 81]. Inspired by previous works [82, 83, 84, 85], we formally define the model as a whole, denoted by \(M\), as follows. Let \(M\) be a tuple \((P,G)\) where \(P\) is a population of agents and \(G\) is the interaction graph between them. Let \(G:=(P,E\subset P\times P\times\{1,2\})\) be a two-layer connected graph where each node represents an individual in the population and the edge is a sexual interaction between two individuals. The individuals in the population are interacting in rounds \(t\in[1,\ldots,T]\), where \(T<\infty\). Each individual in the population, \(p\in\mathbb{P}\), is represented by a timed finite state machine [86]. An individual is described by a tuple \(p:=(\eta,a,g,\mu,\theta,\gamma_{1},\gamma_{2},d,\delta_{s},\delta_{n},\omega_{ 1}^{2},\omega_{2}^{2},\zeta)\) where \(\eta\) is the agent's current epidemiological state, \(a\) is the agent's age, \(g\) is the agent's gender, \(\mu\in[0,1]^{n}\) is the attractiveness level of all other individuals in the population according to the individual, \(\theta\in\{T,F\}\) indicates if the individual is currently seeking a sexual partner or not, \(\gamma_{1}\) and \(\gamma_{2}\) is the duration in which \(\theta\) changes between \(T\to F\) and \(F\to T\), respectively, \(d\) is the probability the individual would use a dating app while seeking for a sexual partner, \(\delta_{s}\) and \(\delta_{n}\) are the increase or decrease in \(d\) due either success or not in finding a sexual partner using the dating app, \(\omega_{1}^{2},\omega_{2}^{1}\) are the probability that \(L_{1}\) and \(L_{2}\) type interactions would become \(L_{2}\) and \(L_{1}\) type interactions, respectively, and \(\zeta\) is a binary variable indicating if the agent wishes to participant in protected or unprotected sexual interaction.
At the first round (\(t=1\)), the population (\(\mathbb{P}\)) is generated such that the individual's properties follow a predefined distribution. Moreover, the \(L_{2}\) layer in \(G\) is also generated. Then, at each round \(t\), each individual in the population, if seeking a sexual partner, can either try and increase the number of \(L_{1}\) type edges it has by using a dating app or not. Afterward, each individual chooses, at random, one of the \(L_{1}\) or \(L_{2}\) edges it has and interacts with the other individual. Following standard convention, we assume that all individuals interact in a single round. These interactions initiate some epidemiological dynamics, following Eq. (1). As discussed in Section 3.1, individuals with a susceptible status (\(S\)) have no immunity and are susceptible to infection by a pathogen \(i\). When an individual with an \(S\) status is exposed to the pathogen \(i\) through an interaction with an infected individual (\(I\) status), the individual is assigned with an exposed status \((E)\) with a probability \(\beta\) which corresponds to the \(\eta\) states of both individuals. Individuals with an \(E\) status have the pathogen but are not yet contagious. The individual remains with an \(E\) status for \(\phi\) rounds, after which the individual is assigned with an infected status \((I)\), which makes her contagious to other individuals. After \(\gamma\) rounds, an infected individual transitions back to a susceptible status (\(S\)) or dead status \((D)\) with probabilities \((1-\psi)\) and \((\psi)\), respectively. Dead individuals are removed from the population (and the graph). In addition, at each step in time, new individuals are added to the population as they reach adulthood with a rate corresponding to the population size \(\alpha\).
## 4 Experiment
In this section, we perform _in silico_ experiments based on the proposed model. Initially, we find from the literature realistic values for the model's parameters to obtain realistic realizations of the proposed model. Using this setup, we explore the influence of dating apps on the spread of STDs from three perspectives.
### Setup
High-resolution and extensive epidemiological data are required to obtain a real-world realization of the proposed model. Unfortunately, currently, such data is unavailable in the public domain (to our best knowledge). Nonetheless, partial data about STD spread epidemics and general statistics about dating app usage are available in the literature [87, 88, 89]. Specifically, we focused on the three common STDs in the United States - Chlamydia, Gonorrhea, and Syphilis [90]. In total, according to the Centers for Disease Control and Prevention, around 2.5 million cases of these diseases were reported during 2021 in the United States alone 1. On a more global scale, the World Health Organization (WHO) estimates 129, 82, and 7.1 million cases of Chlamydia, Gonorrhea, and Syphilis during 2020, respectively 2. In addition, to make the socio-demographic distribution realistic, we adopted the age and gender co-distribution from [88]. In particular, for the average number of preeminent interactions, we computed the portion of officially married adults from the entire adult population, assuming only monogamic relationships. Table 1 summarizes the proposed model's hyper-parameter values based on the available data from the literature, as stated in the **source** column. In particular, we choose to simulate a step in a time of one hour to balance the computational burden and the model's accuracy. Moreover, the population size range is chosen based on the estimation of sexually active adults in a small-medium US city.
Footnote 1: We refer the interested reader to [https://www.cdc.gov/std/statistics/2021/default.htm](https://www.cdc.gov/std/statistics/2021/default.htm) (visited 25th of September, 2023)
Footnote 2: The full report is available online [https://www.who.int/news-room/fact-sheets/detail/sexually-transmitted-infections-](https://www.who.int/news-room/fact-sheets/detail/sexually-transmitted-infections-)(stis)?qclid=CjwwCAjwpJw0oBhA@EiwAhZFfPYLqRVh-Tf2UNlypsUzZ7s9frmif0akHfur?LIw3k%rhoCAYEQAVD_BwE (visited 25th of September, 2023)
Moreover, in order to evaluate the epidemic spread, one is required to define an epidemiological metric of interest. In this study, we consider the average reproduction number (\(E[R_{t}]\)) which measures the number of secondarily infected individuals given the epidemic state at a given time \(t\)[96, 97, 98, 99, 100]. \(R_{t}\) can be approximated using the following formula: \(R_{t}:=\big{(}I(t)-I(t-1)+S(t)-S(t-1)\big{)}/I(t-1)\), where \(I(t)\) and \(S(t)\) are the number of infected (by any pathogen) and recovered (and therefore susceptible again) individuals at time \(t\), respectively. Intuitively, the average reproduction number (\(E[R_{t}]\)) computes how many, on average, a single infected individual infects other individuals in a pre-defined and fixed duration (i.e., a step in time).
### Results
Based on this setup, we conducted two main experiments as well as a sensitivity analysis for the model. First, we explore the influence of dating app adoption in the population on the STD spread dynamics. Second, we compare two scenarios of dating app usage - genuinely helping users to find stable relationships and promoting casual sexual encounters and further usage of the application. Finally, we explore the ability of dating apps to tackle the problem they (might) cause by introducing STD-aware and prevention policies3.
Footnote 3: This question is inspired by recent such features by some dating apps: [https://www.statnews.com/2022/07/18/dating-apps-help-stop-spread-sexually-transmissible-infections/](https://www.statnews.com/2022/07/18/dating-apps-help-stop-spread-sexually-transmissible-infections/)
Fig. 3 presents the average reproduction number (\(E[R_{0}]\)) as a function of the dating app initial adoption rate. The results are shown as the mean value of \(n=100\) simulation realizations and the error bars indicate the standard deviation of the sample. The case inferred from the historical data is marked by a red square while the other cases are marked by blue circles. The gray (dashed) line indicates \(E[R_{t}]=1\) which is the epidemic outbreak threshold. One can notice that the increase in the dating apps' initial adoption rate caused a monotonic increase in the average reproduction number and therefore in the STD pandemic spread. Moreover, an increase of \(0.079\) in the average reproduction number occurs between no adoption and 0.1 adoption rate. Moreover, the average reproduction number increased rate is growing with the adoption rate, indicating a non-linear relationship
between the two parameters. On the other hand, the standard deviations are (almost) monotonically decreased with respect to the adoption rate, excluding the case of no adoption.
After showing that dating apps adoption in its present form which encourages casual sexual interactions, we moved forward to investigate how changes in the application objective can influence STD spread. Namely, let
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Symbol** & **Description** & **Default value** & **Source** \\ \hline \hline \(T\) & Number of simulation rounds (spanning over a year & 8760 & Assumed \\ & in the \(\Delta t\) used) [1] & & \\ \(\Delta t\) & Simulation round’s duration in time [\(t\)] & 1 hour & Assumed \\ \(|P(0)|\) & The initial population size [1] & \([10^{5},10^{6}]\) & Assumed \\ \(k\) & The number of pathogens [1] & \(3\) & Assumed \\ \(\alpha\) & Birth rate in days [\(t^{-1}\)] & \(3.24\cdot 10^{-5}\) & [88] \\ \(\upsilon\) & Natural death rate in days [\(t^{-1}\)] & \(2.27\cdot 10^{-5}\) & [89] \\ \(\beta_{c}\) & Average Chlamydia infection rate [1] & Protected - 2\%, Unprotected & [90] \\ & - 100\% & & \\ \(\beta_{g}\) & Average Gonorrhea infection rate [1] & Protected - 2\%, Unprotected & [90] \\ & - 100\% & & \\ \(\beta_{s}\) & Average Syphilis infection rate [1] & Protected - 2\%, Unprotected & [90] \\ \(\phi_{c}\) & Average Chlamydia exposure to infectious transformation rate in days [\(t^{-1}\)] & 7-14 & [91] \\ \(\phi_{g}\) & Average Gonorrhea exposure to infectious transformation rate in days [\(t^{-1}\)] & 2-14 & [92] \\ \(\phi_{s}\) & Average Syphilis exposure to infectious transformation rate in days [\(t^{-1}\)] & 1-9 & [92] \\ \(\psi_{c}\) & Immunity decay rate for Chlamydia in days [\(t^{-1}\)] & 0-1 & [91] \\ \(\psi_{g}\) & Immunity decay rate for Gororrhea in days [\(t^{-1}\)] & 0-2 & [87] \\ \(\psi_{s}\) & Immunity decay rate for Syphilis in days [\(t^{-1}\)] & 0-2 & [87] \\ \(\gamma_{c}\) & Mortality rate due to Chlamydia [1] & \(1.8\cdot 10^{-6}\) & [90] \\ \(\gamma_{g}\) & Mortality rate due to Gonorrhea [1] & 0 & [90] \\ \(\gamma_{s}\) & Mortality rate due to Syphilis [1] & 0 & [90] \\ \(\gamma_{1}\) & Sexual partner looking in hours [1] & \(N(0.72,0.44)\) & [93] \\ \(\gamma_{2}\) & Sexual partner non-looking in hours & \(N(15.24,6.73)\) & [93] \\ \(d\) & Dating apps initial adoption rate [1] & 0.38 & [94] \\ \(\delta_{s}\) & Increase in personal usage probability of dating apps due to successful interaction using the app [1] & 0.05 & Assumed \\ \(\delta_{n}\) & Decrease in personal usage probability of dating apps due to successful interaction using the app [1] & 0.02 & Assumed \\ \(\omega_{1}^{2}\) & A probability that casual interaction would become a preeminent interaction [1] & 0.019 & [95] \\ \(\omega_{2}^{1}\) & A probability that preeminent interaction would become a casual interaction [1] & 0 & Assumed \\ \(\mu\) & The average attractiveness distribution in the population [1] & \(P(0.71)\) & [93] \\ \(|P|/|L_{1}|\) & Average number of preeminent interactions [1] & 0.32 & Assumed \\ & Portion of the population preferring protected sex [1] & 0.8 & Assumed \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary of the proposed model’s parameters and hyperparameters with their realistic value ranges. \(N(\mu,\sigma)\) indicates a normal distribution with a mean value of \(\mu\) and standard deviation of \(\sigma\). \(P(\lambda)\) indicates a Poisson distribution with a parameter \(\lambda\).
us consider a scenario where dating apps limit one's ability to interact with other users over some period of time in order to motivate users to establish long-term relationships. Thus, we introduce a parameter, \(\psi\in\mathbf{N}\), which indicates how many interactions a user of the dating app is allowed to have in a week. For comparison, the present scenario assumes \(\psi\rightarrow\infty\) as no actual limit is present. Fig. 4 shows the average reproduction number (\(E[R_{t}]\)) with respect to \(\psi\), demonstrating the STD spread dependency of how dating apps promoting casual sexual encounters and further usage of the application. The results are shown as the mean \(\pm\) standard deviation of \(n=100\) simulation realizations. One can notice a logarithmic relationship between the two parameters. Furthermore, with less than 10 possible interactions in the dating app, the STD epidemic is dying out as \(E[R_{t}]<1\).
This outcome reveals that a restricted limit on users' usage of the dating app is applied, the lower (on average) the STD spread in the population. However, applying such a strategy is undesirable for dating apps that profit from users using the app. Hence, a more economically realistic option is the introduction of some enforcement mechanism that makes sure the dating app's users are not spreading STDs. One possible implementation of such an enforcement mechanism is to request users to present, periodically, an official document they are free of STDs. As such, users who are infected while required to present such a document would have to wait until they recover. To evaluate the performance of such an enforcement mechanism, we define, \(\tau\in\mathbb{N}\), the duration, in days, between two times a user needs to provide an STD-free document to the application. Fig. 5 shows the average reproduction number (\(E[R_{t}]\)) with respect to \(\tau\) such that the values are presented as the mean \(\pm\) standard deviation of \(n=100\)
Figure 3: A comparison of the STD spread dynamics with different levels of dating app adoption. The results are shown as the mean \(\pm\) standard deviation of \(n=100\) simulation realizations. The case inferred from the historical data is marked by a red square while the other cases are marked by blue circles. The gray (dashed) line indicates \(E[R_{t}]=1\) which is the epidemic outbreak threshold.
simulation realizations. One can see a monotonic increase in both the mean and standard deviation of \(E[R_{t}]\) with respect to \(\tau\).
## 5 Discussion and Conclusion
In this study, we investigate the influence of dating apps on STDs spread in a population by applying a multi-pathogen epidemiological model. The proposed model is based on an extended SIR-based epidemiological model with a spatial component of infection graph, following a sequence of models designed and validated for STD spread analysis [45, 46, 47]. We implemented the proposed model as an agent-based simulation approach while taking into consideration a heterogeneous population and its usage of dating apps. We used historical STD epidemics as well as statistical data about dating app usage to obtain realistic realizations of the proposed model, capturing as closely as possible realistic spread dynamics in this context as previous models are shown to accurately capture similar epidemiological cases with only partial data [101, 102, 103, 104].
Taken jointly, our results, as shown in Figs. 3, 4, and 5 show a simplistic and consistent outcome - larger usage and adoption of dating apps causes an increase in STD spread. This conclusion, while sounds trivial, has not been empirically explored yet. Previous studies show that more sexual interactions cause more STD spread and that
Figure 4: A comparison of the STD spread dynamics for two cases - genuinely helping users to find stable relationships and promoting casual sexual encounters and further usage of the application. The results are shown as the mean \(\pm\) standard deviation of \(n=100\) simulation realizations. The x-axis is presented in a logarithmic scale. The gray (dashed) line indicates \(E[R_{t}]=1\) which is the epidemic outbreak threshold.
dating apps cause more sexual interactions, on average [35, 33, 25, 17]. That said, only recently, a self-reported, retrospective, and relatively small sample size study was able to statistically associate the two processes [37]. Thus, our result is the first to show a large-scale, while _in silico_, connection between dating apps and STD spread. Moreover, we show (see Fig. 3) that in its current form, more adoption of dating apps in the population would result in a polynomial increase in the average reproduction number of STDs which can quickly be developed into a large-size pandemic. Nonetheless, as presented by Figs. 4 and 5, one can enforce some limitations upon dating apps to control the additional STD spread they cause. That said, such limitations would probably negatively influence these apps profits and therefore would not be initiated by their owner companies. Hence, a balance between the two can be achieved where users repeatedly use the dating app while also testing to prevent STD spread. Our analysis shows that every three-month test should prevent any STD outbreak over time, for example.
This research is not without limitations. First, in the proposed model we ignore the role healthcare services play in treating STDs in a direct manner which can alter the proposed results depending on the quality and quantity of this service to the population [105]. Second, we do not include a socially-aware factor that causes individuals who are aware they have STDs to make sure they do not infect others, as also requested by law in some countries [106]. Third, as evidence regarding the connection between porn and reduced sexual desire is gathered [107], and in the scope of the digital world effect on STD spread, future work can also include the influence of porn. Namely, connecting the usage of porn to the duration of the non-seeking state of individuals in the proposed model.
Figure 5: The average reproduction number (\(E[R_{t}]\)) with respect to the duration between two times a user has to prove it is STD-free \(\tau\). The results are shown as the mean \(\pm\) standard deviation of \(n=100\) simulation realizations. The x-axis is presented in a logarithmic scale. The gray (dashed) line indicates \(E[R_{t}]=1\) which is the epidemic outbreak threshold.
This study highlights the importance of taking into consideration the interactions occurring in the digital world as these influence the physical one, in the context of STD spreads via dating apps. Our model and simulation can be utilized to design and _in silico_ test various policies to tackle and control STD spread among the population.
## Declarations
### Funding
This research does not receive any funding.
### Conflicts of interest/Competing interests
None.
### Data availability
The data that have been used in this study are publicly available in the referenced sources.
### Acknowledgement
The author wishes to thank Ariel Alexi for helping with this study's administrative work.
### Author Contribution
Ariel Fenster: Software, Writing - Review & Editing.
Teddy Lazebnik: Conceptualization, Resources, Data curation, Formal Analysis, Validation, Investigation, Methodology, Visualization, Supervision, Writing - Original Draft, Writing - Review & Editing.
|
2309.15149 | Bounding the QCD Equation of State with the Lattice | The equation of state of QCD matter at high densities is relevant for neutron
star structure and for neutron star mergers and has been a focus of recent
work. We show how lattice QCD simulations, free of sign problems, can provide
an upper bound on the pressure as a function of quark chemical potentials. We
show that at large chemical potentials this bound should become quite sharp;
the difference between the upper bound on the pressure P-phase-quenched and the
true pressure P is of order alpha^3 P. The corrections arise from a single
Feynman diagram; its calculation would render remaining corrections of order
alpha^4 P. | Guy D. Moore, Tyler Gorda | 2023-09-26T18:00:02Z | http://arxiv.org/abs/2309.15149v2 | # Bounding the QCD Equation of State with the Lattice
###### Abstract
The equation of state of QCD matter at high densities is relevant for neutron star structure and for neutron star mergers and has been a focus of recent work. We show how lattice QCD simulations, free of sign problems, can provide an upper bound on the pressure as a function of quark chemical potentials. We show that at large chemical potentials this bound should become quite sharp; the difference between the upper bound on the pressure \(P_{\rm{PQ}}\) and the true pressure \(P\) is of order \(P_{\rm{PQ}}-P={\cal O}(\alpha_{\rm{s}}^{3}P)\). The corrections arise from a single Feynman diagram; its calculation would render remaining corrections \({\cal O}(\alpha_{\rm{s}}^{4}P)\).
Keywords:Neutron star equation of state, lattice gauge theory
## 1 Introduction
The equation of state (EOS) of strongly interacting matter dictates the thermodynamics of any system ultimately composed of quarks and gluons. At high temperatures and low net baryon densities, the EOS can be computed directly from the partition function of Quantum Chromodynamics (QCD) using Monte-Carlo lattice techniques [1; 2] and compared to experimental determinations of thermodynamic properties [3]. However, at low temperatures and high net baryon densities, such techniques fail due to the well-known sign problem [4; 5], and alternative methods are required to determine the EOS.
Such cold and dense QCD matter is in fact realized in nature in the cores of massive neutron stars, where matter is in equilibrium under the weak interactions (beta equilibrated) and dense enough to populate the three lightest quark flavors. In this context, the EOS is intimately tied to the bulk properties of neutron stars via the stellar structure equations of general relativity [6; 7], and hence observations of neutron star properties, such as masses [8; 9; 10], tidal deformabilities [11; 12; 13], and radii [14; 15; 16; 17; 18; 19; 20] can be used to constrain the EOS of QCD. Recently the community has converged on a strategy for inferring the EOS of cold and dense QCD matter using these astrophysical constraints in combination with a general set of causal extensions of low-density effective-field-theory calculations [21; 22; 23; 24; 25] of the EOS of dense nuclear matter (see, e.g., [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]), sometimes also paired with constraints from perturbative QCD calculations at high net baryon densities [40; 41; 42; 43; 44; 45; 46; 47] (see, e.g. [48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60]). In addition to these constraints, there has been recent work on incorporating other experimental constraints, such as bounds from low-energy nuclear collision experiments [37; 61], which have so far been seen to compliment those from astrophysics. On the theoretical side, the unitary gas constraint [62; 63; 64; 65; 66] - which has been conjectured to provide a lower bound of the energy per particle at low to moderate net baryon densities in the hadronic phase - has been used as a reference to benchmark different nuclear-theory calculations of the hadronic EOS as they are extended to higher densities.
In this paper we argue that one more constraint (or family of constraints) can be added to this list of bounds on the dense QCD EOS. While lattice QCD cannot compute the EOS for the physical combination of chemical potentials, phase-quenched lattice QCD is free of sign problems. The pressure as a function of chemical potentials, calculated using phase quenching, \(P_{{}_{\rm PQ}}(\mu_{q})\), is a strict upper bound on the true pressure at the same values of quark chemical potentials: \(P_{{}_{\rm PQ}}(\mu_{q})\geq P(\mu_{q})\). This fact was established by Cohen in Ref. [67], who used it to show how the case of an isospin chemical potential - opposite up and down quark chemical potentials - can be used to bound the more physically interesting case of equal up and down quark chemical potentials. Unfortunately, the combination of chemical potentials relevant for neutron stars is rather different: the down and strange chemical potentials should be equal \(\mu_{d}=\mu_{s}\), while the up-quark chemical potential is somewhat smaller to accommodate charge neutrality, \(\mu_{u}<\mu_{d}\). Bounding the equation of state for this combination of chemical potentials starting from lattice results for isospin chemical potentials requires the use of additional inequalities [68], as recently explored by Fujimoto and Reddy [69].
We argue here that the most effective way to use the lattice to bound the neutron-star equation of state is to perform new phase-quenched lattice simulations using the physical combination of up, down, and strange quark chemical potentials. Unlike an isospin chemical potential, the phase-quenched version of such a combination represents a completely unphysical system. But we expect that it will present the tightest bounds on the physical equation of state. In particular, we show here that, while at low densities the constraint is likely to be very loose, at high densities it should become ever sharper. In fact, we will demonstrate that, in the perturbative region, the relative difference \(\frac{P_{{}_{\rm PQ}}(\mu_{q})-P(\mu_{q})}{P(\mu_{q})}\) is of order \(\alpha_{\rm s}^{3}\). Furthermore, at this order in the coupling expansion, the difference arises from a single Feynman diagram, which we identify.1
Footnote 1: Two diagrams if one considers the two directions which fermion arrows in a loop can point to represent distinct diagrams.
An outline of this paper is as follows. In the next section we review the path integral for QCD at finite chemical potential, and how it is related to the phase-quenched version. The section reviews the (quite simple) proof that \(P_{{}_{\rm PQ}}(\mu_{q})\geq P(\mu_{q})\), and discusses a little more how one should interpret the phase-quenched calculation. In Section 3 we show how to represent the (unphysical) phase-quenched theory in Feynman diagrams, and we identify the unique \(\mathcal{O}(\alpha_{\rm s}^{3})\) diagram which differs between \(P_{{}_{\rm PQ}}(\mu_{q})\) and \(P(\mu_{q})\). Section 4 estimates the size of lattice artifacts in evaluating the pressure on the lattice, in order to give guidance for how small the lattice spacing must be at a given, large \(\mu\) value. We end with a discussion, which lists the most important directions for further work.
## 2 Path integral and phase quenching
Here we review the proof that the pressure from the phase-quenched theory is a strict upper bound on the original theory. The proof as we formulate it is due to Cohen [67], though the
underlying ideas are older [70]. The partition function of QCD at a small finite temperature \(T=1/\beta\) in a box with a large volume \(V\) and with quark chemical potentials \(\mu_{q}\) is2
Footnote 2: The integral as written requires either gauge fixing or a restriction that \(\mathcal{D}G_{\mu}\) should be understood only to run over distinct affine connections, but this point is not relevant here, since the fermionic determinants are our main focus.
\[Z(\beta,\mu_{q})=\int\mathcal{D}G_{\mu}\,\exp\left(-\int_{0}^{\beta}dt\int_{V}d ^{3}x\;\mathcal{L}_{\rm E}(G)\right)\;\prod_{q=u,d,s}\,{\rm Det}\left(\not{ \mathcal{D}}+m_{q}+\mu_{q}\gamma^{0}\right). \tag{1}\]
Because \(\not{\mathcal{D}}+m_{q}\) and \(\mu_{q}\gamma^{0}\) have different \(\gamma^{5}\)-Hermiticity [4], each determinant is complex. Phase quenching is the replacement of \(Z\) with
\[Z_{{}_{\rm PQ}}(\beta,\mu_{q})\equiv\int\mathcal{D}G_{\mu}\,\exp\left(-\int_{ 0}^{\beta}dt\int_{V}d^{3}x\;\mathcal{L}_{\rm E}(G)\right)\;\prod_{q=u,d,s} \Big{|}\,{\rm Det}\left(\not{\mathcal{D}}+m_{q}+\mu_{q}\gamma^{0}\right) \Big{|}\,. \tag{2}\]
That is, one uses the absolute values of the determinants, rather than the determinants themselves. Since the Euclidean gluonic action \(\mathcal{L}_{\rm E}(G)\) is strictly real,3 the integrand is now real and positive and equals the absolute value of the integrand for \(Z\). Since the integral of the absolute value of a function over a positive measure is greater than or equal to the integral of the original complex function, we have
Footnote 3: We assume that the QCD theta angle [71; 72; 73] is zero.
\[Z_{{}_{\rm PQ}}(\beta,\mu_{q})\geq Z(\beta,\mu_{q})\quad\text{and therefore}\quad P_{{}_{\rm PQ}}(\beta,\mu_{q})\geq P(\beta,\mu_{q})\,, \tag{3}\]
where \(P=\ln(Z)/(\beta V)\) is the pressure.4
Footnote 4: In practice we want \(P(\mu_{q})-P(\mu_{q}=0)\), that is, one should subtract the zero-chemical-potential value of the pressure. This has the benefit of removing cosmological-constant-type power UV divergences.
Let us investigate the interpretation of the phase-quenched theory. Applying \(\gamma^{5}\)-Hermiticity, one finds that [74; 4]
\[\Big{(}\,{\rm Det}\left(\not{\mathcal{D}}+m_{q}+\mu_{q}\gamma^{0}\right) \Big{)}^{*}=\Big{(}\,{\rm Det}\left(\not{\mathcal{D}}+m_{q}-\mu_{q}\gamma^{0} \right)\Big{)} \tag{4}\]
and therefore
\[\Big{|}\,{\rm Det}\left(\not{\mathcal{D}}+m_{q}+\mu_{q}\gamma^{0 }\right)\Big{)}\Big{|} =\sqrt{\,{\rm Det}\left(\not{\mathcal{D}}+m_{q}+\mu_{q}\gamma^{0 }\right){\rm Det}\left(\not{\mathcal{D}}+m_{q}-\mu_{q}\gamma^{0}\right)}\,, \tag{5}\] \[Z_{{}_{\rm PQ}} =\int\mathcal{D}G_{\mu}\,\exp\left(-\int_{0}^{\beta}dt\int d^{3} x\mathcal{L}_{\rm E}(G)\right)\] (6) \[\qquad\times\prod_{q=u,d,s}\sqrt{\,{\rm Det}\left(\not{ \mathcal{D}}+m_{q}+\mu_{q}\gamma^{0}\right){\rm Det}\left(\not{\mathcal{D}}+m_ {q}-\mu_{q}\gamma^{0}\right)}.\]
Therefore, the phase-quenched theory is equivalent to a theory with twice as many fermionic species, but where each is represented by the square root of a determinant - think of each as a half-species, which appear in pairs with equal mass but opposite chemical potential. This
theory is clearly unphysical, since a half-species of fermion does not make sense as an external state. A well known exception is if two quark masses and chemical potentials are equal [70; 75; 76; 77]. In particular, for \(m_{u}=m_{d}\) and \(\mu_{u}=\mu_{d}\) with \(\mu_{s}=0\), representing "standard" baryonic chemical potential, the phase-quenched version is equivalent, after re-labeling two of the half-species, to the case \(\mu_{d}=-\mu_{u}\), corresponding to an isospin chemical potential. This among other things has motivated rather detailed lattice investigations of the case of finite isospin chemical potential, see for instance Refs. [78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91] or the comprehensive review by Mannarelli [92]. In this case it is particularly clear that, for small chemical potentials, the two pressures are quite different. Tom Cohen has analyzed the case for an isospin chemical potential [74] and shown that nonzero \(\mu_{q}\) in Eq. (5) does not change the value of the determinant until \(\mu_{q}=m_{\pi}/2\), when eigenvalues of \(\not{\rm D}+m_{q}+\mu_{q}\gamma_{0}\) start to cross the imaginary axis. He refers to the constancy of the determinant up to this point as "the silver blaze problem." Applying his analysis for the general case of distinct \(\mu_{u},\mu_{d},\mu_{s}\), one finds that \(P_{{}_{\rm PQ}}\) first deviates from its vacuum value when either \(\mu_{d}=m_{\pi}/2\), \(\mu_{u}=m_{\pi}/2\), or \(\mu_{s}=m_{s\bar{s}}/2\), where \(m_{s\bar{s}}\) is the mass of the unphysical strange-antistrange pseudoscalar obtained by ignoring disconnected contributions to the correlation function. Therefore, at small chemical potentials \(m_{\pi}/2<\mu_{q}<m_{p}/3\), \(P_{{}_{\rm PQ}}\) differs significantly from the physical pressure.
In the next section we will see that, for large chemical potentials, the two pressures become much more similar.
## 3 Perturbation theory
To construct a perturbation theory for Eq. (6), as usual for fermions, we rewrite
\[{\rm Det}\left(A\right) =\exp(\ln\,{\rm Det}\left(A\right))=\exp(\,{\rm Tr}\,\ln(A))\] \[\sqrt{\,{\rm Det}\left(A\right)} =\exp\left(\frac{1}{2}\ln\,{\rm Det}\left(A\right)\right)=\exp \left(\frac{1}{2}\,{\rm Tr}\,\ln(A)\right)\,. \tag{10}\]
The process of deriving Feynman rules from \(\frac{1}{2}\,{\rm Tr}\,\ln(\not{\rm D}+m\pm\mu\gamma^{0})\) is the same as without the \(\frac{1}{2}\) factor, except that each fermionic loop receives a factor of \(\frac{1}{2}\). Therefore, in each place where a fermionic loop can appear, instead of performing a sum over \(n_{f}\) fermions, each with mass \(m_{q}\) and chemical potential \(\mu_{q}\), we sum over \(2n_{f}\) terms; twice for each \(m_{q}\), once with \(\mu_{q}\) and once with \(-\mu_{q}\) but each with an overall factor of \(\frac{1}{2}\). This is the same as averaging the loop over whether \(\mu\) is positive or negative.
To see the impact of \(\mu\to-\mu\) we remind the reader about Furry's theorem [93]. Consider a fermion loop with \(n\) external gluons connected by \(n\) propagators, with incoming momenta \(q_{1},\ldots,q_{n}\) with \(q_{n}=-\sum_{j=1}^{n-1}q_{j}\) by momentum conservation. The fermions are in a representation \(R\) (typically the fundamental representation) with Hermitian generators \(T^{A}\). Its group-theory factor and the numerator of its Feynman rule are
\[{\rm Tr}\,\left(T^{A_{1}}\ldots T^{A_{n}}\right)\ \ {\rm Tr}\,\gamma_{\mu_{1}}(i \not{p}_{1}+m+\mu\gamma^{0})\gamma_{\mu_{2}}(i\not{p}_{2}+m+\mu\gamma^{0})\ldots \tag{11}\]
where \(p_{1}\) is the loop momentum and \(p_{i}=p_{i-1}+q_{i}\). Inserting the charge conjugation matrix times its inverse, \(CC^{-1}\), between each neighboring term and using \(C^{-1}\gamma_{\mu}C=-\gamma_{\mu}^{\top}\), we find
\[\mathrm{Tr}\;\left((-T^{A_{1}})\dots(-T^{A_{n}})\right)\;\mathrm{ Tr}\;\gamma_{\mu_{1}}^{\top}(-i\not{p}_{1}^{\top}+m-\mu\gamma^{0\top}) \gamma_{\mu_{2}}^{\top}(-i\not{p}_{2}^{\top}+m-\mu\gamma^{0\top})\dots. \tag{11}\]
Reversing the order of the symbols to get rid of the transposes on the gamma matrices, and using \(T^{\top}=T^{*}\) for the Hermitian group generators, we find
\[\mathrm{Tr}\;\left((-T^{*A_{n}})\dots(-T^{*A_{1}})\right)\;\mathrm{Tr}\;\dots (-i\not{p}_{2}+m-\mu\gamma^{0})\gamma_{\mu_{2}}(-i\not{p}_{i}+m-\mu\gamma^{0} )\gamma_{\mu_{1}} \tag{12}\]
which is the same diagram,5 traversed in the opposite sense, with \(\mu\to-\mu\) and with \(T^{A}\to(-T^{*A})\) which is the generator of the conjugate representation - so if the \(T^{A}\) generate the fundamental representation, the \(-T^{*A}\) generate the antifundamental representation. Another way to state Furry's theorem is then that reversing the sign of \(\mu\) is equivalent to considering fermions with the original \(\mu\) value but in the conjugate representation (quarks with \(-\mu\) are the same as antiquarks with \(+\mu\)).
Footnote 5: Reversing the sense in which the loop is traversed flips the signs of the momenta as well as reversing the order of the matrices in the color and Dirac traces.
For the case of two external gluons we have
\[2\,\mathrm{Tr}\;T^{A}T^{B}=\delta^{AB}=2\,\mathrm{Tr}\;(-T^{*A})(-T^{*B}) \tag{13}\]
and the fundamental and antifundamental representations give the same answer. The representations first differ when there are three gluons attached:
\[4\,\mathrm{Tr}\;T^{A}T^{B}T^{C} =if^{ABC}+d^{ABC}\] but \[4\,\mathrm{Tr}\;(-T^{*A})(-T^{*B})(-T^{*C}) =if^{ABC}-d^{ABC} \tag{14}\]
where \(d^{ABC}\) is a totally symmetric symbol which arises in groups of rank 2 or more.
Because both \(f^{ABC}d^{ABD}=0\) and \(\delta^{AB}d^{ABC}=0\), the first group-theoretical structure where \(d^{ABC}\) can give a nonzero contribution, for the group SU(\(N_{c}\)), is
\[d^{ABC}d^{ABD}=\frac{N_{c}^{2}-4}{N_{c}}\delta^{CD}\qquad\mathrm{so}\qquad d^ {ABC}d^{ABC}=\frac{(N_{c}^{2}-1)(N_{c}^{2}-4)}{N_{c}}. \tag{15}\]
Figure 1: The lowest-order diagram which distinguishes between fundamental and antifundamental representations, with two relative orientations for the fermionic loops.
This contraction can only appear in a diagram with at least two fermion loops with at least three gluon attachments each. The only such diagram at the \(\alpha_{\rm s}^{3}\) level is shown in Figure 1. Fermion number can traverse the two loops with two relative orientations, shown on the left and the right in the figure. We will write the two versions of the diagram, stripping off all group-theory factors, for two specific species \((i,j)\), as \(A(\mu_{i},\mu_{j})\) and \(B(\mu_{i},\mu_{j})\). That is, \(A(\mu_{i},\mu_{j})\) and \(B(\mu_{i},\mu_{j})\) represent the value of each fermion orientation in the abelian version of the diagram. In this notation, the contribution of this diagram to the true pressure is
\[\delta P= \left(\,{\rm Tr}\,T^{A}T^{B}T^{C}\,\,\,{\rm Tr}\,T^{A}T^{B}T^{C} \right)A(\mu_{i},\mu_{j})+\left(\,{\rm Tr}\,T^{A}T^{B}T^{C}\,\,{\rm Tr}\,T^{C}T ^{B}T^{A}\right)B(\mu_{i},\mu_{j})\] \[= \frac{(N_{c}^{2}-1)(N_{c}^{2}-4)}{16N_{c}}\left(A(\mu_{i},\mu_{j })+B(\mu_{i},\mu_{j})\right)+\frac{(N_{c}^{2}-1)N_{c}}{16}\left(B(\mu_{i},\mu _{j})-A(\mu_{i},\mu_{j})\right). \tag{11}\]
In the abelian theory the group-theoretical coefficients on the first and second terms of the second line would be 1 and 0 respectively.
Furry's Theorem means that \(B(\mu_{i},\mu_{j})=-A(\mu_{i},-\mu_{j})=-A(-\mu_{i},\mu_{j})\). This allows us to rewrite Eq. (11) in terms of \(A\) only:
\[\delta P=\frac{(N_{c}^{2}-1)(N_{c}^{2}-4)}{16N_{c}}\left(A(\mu_{i},\mu_{j})-A( \mu_{i},-\mu_{j})\right)-\frac{(N_{c}^{2}-1)N_{c}}{16}\left(A(\mu_{i},\mu_{j} )+A(\mu_{i},-\mu_{j})\right). \tag{12}\]
Averaging over \(\mu_{j}\leftrightarrow-\mu_{j}\) and/or \(\mu_{i}\leftrightarrow-\mu_{i}\), as we are instructed to do in evaluating \(P_{{}_{\rm PQ}}\), eliminates the first expression and leaves only the second. Therefore, the difference between the phase-quenched and true pressure, at order \(\alpha_{\rm s}^{3}\), is
\[P_{{}_{\rm PQ}}-P=-\sum_{i,j=uds}\frac{(N_{c}^{2}-1)(N_{c}^{2}-4)}{16N_{c}} \left(A(\mu_{i},\mu_{j})-A(\mu_{i},-\mu_{j})\right). \tag{13}\]
We can show that this combination is automatically positive; the proof, along with an explicit evaluation, will appear in a forthcoming publication.
Evaluating this single combination of diagrams would determine the perturbative correction between the phase-quenched and true pressures, through to order \(\alpha_{\rm s}^{3}\). Any quark with \(\mu_{i}=0\) does not contribute to Eq. (13). The contribution of soft gluon momenta to Eq. (13) is suppressed, since the three-gluon hard loop contains only the \(f^{ABC}\) group-theory factor [94].6 Therefore Eq. (13) does not contain any logarithmically enhanced \(\sim\alpha_{\rm s}^{3}\ln(\alpha_{\rm s})\) contributions.
Footnote 6: We are indebted to Saga S��pipi for useful conversations on this point.
As an aside, we comment on the behavior of these terms as a function of \(N_{c}\). In the large \(N_{c}\) or t'Hooft limit, the group-theory factor on Orientation \(B\) in Eq. (11) is \(\propto N_{c}^{3}\) while that for Orientation \(A\) is \(\propto N_{c}\) and is suppressed. In contrast, for the group SU(2), the group-theoretical factor on \(A+B\) in Eq. (11) vanishes and there is no distinction between the fundamental and antifundamental representations. In fact, because every SU(2) representation is equivalent to its conjugate representation, it is easy to show that any combination of chemical potentials is free of sign problems in 2-color QCD, which has motivated investigation of this theory at finite density [95; 96; 97; 98; 99; 100; 101].
Lattice considerations
The previous sections have made clear that a lattice calculation of \(P_{\rm PQ}\) would be valuable. Here we want to take one small step towards estimating how difficult such a lattice calculation would be. The larger \(\mu a\) is, the more rapidly a lattice calculation develops statistical power; since \(P\propto\mu^{4}\), we expect that the signal-to-noise should scale roughly as \((\mu a)^{4}\). Therefore it is advantageous to work on lattices with the largest \((\mu a)\) we can get away with.
But increasing \(\mu a\) increases systematic lattice-spacing effects. In general, for a lattice calculation to be accurate we need the lattice spacing to obey \(a\Lambda_{\rm QCD}\ll 1\). But \(\mu\) introduces an additional scale, and for the regime \(\mu\gg\Lambda_{\rm QCD}\), we expect to need the stronger condition \(\mu a\ll 1\). But what does \(\mu a\ll 1\) really mean - that is, how small does \(\mu a\) really need to be? To estimate this, we compute how different the vacuum-subtracted7 pressure \(P(\mu)-P(0)\) is on the lattice from its continuum value, at lowest (zero) order in the strong coupling. Here we will only consider staggered quarks [102; 103; 104], since we expect this quark formulation to be used in practical calculations.
Footnote 7: The pressure receives divergent vacuum contributions (the cosmological constant problem) which have to be subtracted in a lattice treatment.
In the continuum, one free Dirac fermion with chemical potential \(\mu\) provides a pressure of
\[P=\frac{1}{\beta V}\ln(Z)=\int\frac{d^{4}p}{(2\pi)^{4}}\left(\,{\rm Tr}\,\ln(-i \hbox to 0.0pt{/}p+m+\mu\gamma^{0})-\,{\rm Tr}\,\ln(-i\hbox to 0.0pt{/}p+m) \right)\,, \tag{10}\]
where the second term subtracts the \(\mu=0\) "vacuum" pressure. Evaluating the trace, one finds
\[P=2\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}\int\frac{dp_{0}}{2\pi}\left(\ln\frac{( p_{0}+i\mu)^{2}+\vec{p}^{2}+m^{2}}{p_{0}^{2}+\vec{p}^{2}+m^{2}}\right). \tag{11}\]
Naively it appears that \(\mu\) contributes to the pressure at any \(\vec{p}\) value including when \(\sqrt{\vec{p}^{2}+m^{2}}>|\mu|\), but this is illusory. Separating the numerator and denominator of the log, one may deform the \(p^{0}\) integration contour for the former, shifting it by \(-i\mu\). If \(\sqrt{p^{2}+m^{2}}>|\mu|\) then this contour deformation encounters no singularities, and the \(p_{0}\) integral at nonzero and at zero \(\mu\) are identical. For \(|\mu|>\sqrt{\vec{p}^{2}+m^{2}}\) there is a cut running from \(p^{0}=0\) to \(p^{0}=-i(\mu-\sqrt{\vec{p}^{2}+m^{2}})\) with discontinuity \(2\pi i\) which the deformed contour must enclose, leading to
\[P=2\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}\left(|\mu|-\sqrt{\vec{p}^{2}+m^{2}} \right)\Theta\left(|\mu|-\sqrt{\vec{p}^{2}+m^{2}}\right) \tag{12}\]
as expected. The expressions for the energy density \(\varepsilon\) and for \(\mu N\) the chemical potential times the number density are the same but with the first factor, \(|\mu|-\sqrt{\vec{p}^{2}+m^{2}}\), replaced by \(\sqrt{\vec{p}^{2}+m^{2}}\) and by \(|\mu|\) respectively, recovering the thermodynamical relation \(\mu N=\varepsilon+P\).
Naive staggered fermions [104] represent four species of physical fermions with the limitation that each \(p_{\mu}\) component runs over \([-\pi/(2a),\pi/(2a)]\) with \(a\) the lattice spacing8 and
with the substitution
\[\partial_{j}\psi(x) \implies \frac{\psi(x+a\hat{j})-\psi(x-a\hat{j})}{2a}\,,\] \[\partial_{0}\psi(x)+\mu\psi(x) \implies \frac{e^{a\mu}\psi(x+a\hat{0})-e^{-a\mu}\psi(x-a\hat{0})}{2a}\,,\] \[-ip_{j} \implies \frac{-i}{a}\sin(ap_{j})\,,\] \[-ip_{0}+\mu \implies \frac{1}{a}\left(-i\sin(ap_{0})\cosh(a\mu)+\cos(ap_{0})\sinh(a \mu)\right)\,. \tag{24}\]
Here \(\hat{j}\) and \(\hat{0}\) represent the unit vector in the \(j\) spatial direction and in the time direction respectively. Note that the effects of \(\mu\gamma^{0}\) must be incorporated along with the time derivative because, in the staggered formulation, an insertion of \(\gamma^{0}\) requires that the comparison be made between \(\bar{\psi}\) and \(\psi\) which differ by an odd number of steps in the time direction. This is in any case the preferred way of introducing a chemical potential because it avoids quadratic-in-\(\mu\) lattice artifacts, see [106]. The fact that a staggered fermion represents four physical fermions is handled through the fourth-root trick and leads to a pressure for one physical fermion of
\[P =2\int_{-\pi/2a}^{\pi/2a}\frac{d^{3}\vec{p}}{(2\pi)^{3}}\int_{- \pi/2a}^{\pi/2a}\frac{dp_{0}}{2\pi}\left(\ln\frac{\sin^{2}(ap_{0}+ia\mu)+a^{2}E _{p}^{2}}{\sin^{2}(ap_{0})+a^{2}E_{p}^{2}}\right)\] \[a^{2}E_{p}^{2} =\sum_{j}\sin^{2}(ap_{j})+a^{2}m^{2}\,. \tag{25}\]
There are two effects. First, the lattice, rather than physical, dispersion determines the energy \(E_{p}\). Second, the \(p_{0}\) range is finite and periodic and \(p_{0}\) appears inside a trigonometric function inside the log. Nevertheless, a contour deformation, \(p_{0}\to p_{0}-i\mu\), is still possible, and it encounters a similar discontinuity in the log:
\[P=2\int_{-\pi/2a}^{\pi/2a}\frac{d^{3}\vec{p}}{(2\pi)^{3}}\left(|\mu|-a^{-1} \mathrm{arcsinh}(aE_{p})\right)\Theta\left(|\mu|-a^{-1}\mathrm{arcsinh}(aE_{ p})\right)\,. \tag{26}\]
We have only been able to compute the resulting \(\vec{p}\) integral numerically, but as expected, the leading small-\(\mu\) corrections are of order \(\mu^{2}a^{2}\). Specifically, the flatter dispersion relation lowers \(E_{p}\) and leads to a larger phase space region where fermionic states are occupied.
We show the results of a simple numerical evaluation of Eq. (26) in Figure 2. We have also checked that carrying out the integral shown in Eq. (25) leads to the same result. Unfortunately, if we set as a criterion for good convergence that the lattice and continuum pressures should differ by at most 10%, we are restricted to \(a\mu<0.42\). To do better, we could use an improved fermionic action. Naik has advocated [107] that one replace Eq. (24) with
an improved version,
\[\partial_{0}\psi(x)+\mu\psi(x) \Longrightarrow \frac{9}{8}\frac{e^{a\mu}\psi(x{+}a)-e^{-a\mu}\psi(x{-}a)}{2a}- \frac{1}{8}\frac{e^{3a\mu}\psi(x{+}3a\hat{0})-e^{-3a\mu}\psi(x{-}3a\hat{0})}{6a}\] \[\sin^{2}(ap_{j}) \Longrightarrow \left(\frac{9}{8}\sin(ap_{j})-\frac{1}{24}\sin(3ap_{j})\right)^{ 2}\,,\] \[\sin^{2}(ap_{0}+ia\mu) \Longrightarrow \left(\frac{9}{8}\sin(ap_{0}+ia\mu)-\frac{1}{24}\sin(3ap_{0}+3 ia\mu)\right)^{2}\,. \tag{20}\]
As shown in Figure 2, this dramatically improves the match between the lattice and continuum pressure, such that \(a\mu=1\) still has \(<10\%\) corrections. However, beyond \(a\mu=1.147\), \(\cosh(3a\mu)>9\cosh(a\mu)\) and the "improvement" term in Eq. (20) starts to dominate over the nearest-neighbor term. This leads to additional cuts in the modified version of Eq. (21), and the performance of this implementation rather abruptly breaks down.
With these results in mind, we feel that nearest-neighbor staggered quarks can only treat large \(\mu\) accurately out to disappointingly small \(\mu a\sim 0.4\) or \(0.5\) - possibly a little higher with the help of extrapolation over a few lattice spacings. Improved quarks, such as ASQTAD
Figure 2: Lattice-to-continuum pressure ratio as a function of the chemical potential in lattice units. The black curve is for nearest-neighbor staggered fermions, the blue curve is for 3’rd-neighbor-improved fermions. At the order of interest, the use of “fattened links” and other modifications to the fermionic action are not relevant.
[108; 109] or HISQ [110] quarks, which use the Naik term, should be able to do better but it is very dangerous to venture beyond \(a\mu=1.1\).
Another limitation of a lattice treatment is that one generically must compute at finite temperature. Therefore we should also estimate the size of thermal effects. We will assume that the thermal corrections are similar to those in the continuum. At the free theory level, the pressure of a single Dirac fermion with chemical potential \(\mu\) at temperature \(T\) actually has a closed form:
\[P=4\left(\frac{7\pi^{2}}{720}T^{4}+\frac{1}{24}\mu^{2}T^{2}+\frac{1}{48\pi^{2} }\mu^{4}\right). \tag{22}\]
This implies \(\mu>14T\) to keep thermal effects at the 10% level.
One can attempt to remove both lattice-spacing and temperature effects through extrapolation over multiple lattice spacings and box sizes. In the case of lattice spacing effects, an effective field theory analysis at the scale \(\mu\) tells us that the lattice-spacing effects should scale as \((\mu a)^{2}\) up to anomalous dimension corrections. The coefficient will not necessarily equal the free-theory one, but the effect should definitely be a power law with power close to 2. More care is needed in extrapolating away temperature effects. While one might expect that the leading thermal effects are of order \(T^{2}\) as in Eq. (22), this is really an assumption which can go wrong if, for instance, the theory develops a mass gap due to interactions between excitations near the Fermi surface. Therefore we expect that more care must be taken when extrapolating to small temperature.
## 5 Discussion
This paper has made three points.
* Lattice QCD can study arbitrary combinations of \(\mu_{u},\mu_{d},\mu_{s}\) using phase-quenching, providing \(P_{\mbox{\tiny PQ}}(\mu_{q})\). Though this does not return the true QCD pressure, it returns a strict upper bound on the true pressure: \(P_{\mbox{\tiny PQ}}(\mu_{q})\geq P(\mu_{q})\), which could still be useful in constraining the equation of state for neutron star matter.
* The difference \(P_{\mbox{\tiny PQ}}-P\) is small at weak coupling, in the sense that \(\frac{P_{\mbox{\tiny PQ}}-P}{P}\propto\alpha_{\mbox{\tiny s}}^{3}\). Furthermore, the \(\alpha_{\mbox{\tiny s}}^{3}\) contribution arises from a single diagram. If we could compute this diagram, we could use \(P_{\mbox{\tiny PQ}}\) to determine \(P\) up to \(\alpha_{\mbox{\tiny s}}^{4}\) corrections (up to logs and nonperturbative effects such as pairing gaps).
* Lattice calculations of \(P(\mu)\) encounter lattice artifacts. We estimate the size of these artifacts and advocate that nearest-neighbor fermion formulations use \(a\mu\leq 0.4\), while improved-dispersion fermions may be reliable at chemical potentials more than a factor of 2 larger.
The first step in utilizing this approach is to perform a lattice study at a series of chemical potentials. For neutron star physics one should choose \(\mu_{d}=\mu_{s}\) and \(\mu_{u}\) somewhat smaller to describe a charge-neutral system including equilibrated leptons. One might study a range of
chemical potentials from \(\mu_{s}=100\) MeV to 1 GeV. Since a lattice approach will likely involve determining \(N\) at each \(\mu\) value and integrating it to find the pressure using \(N_{q}=dP/d\mu_{q}\), it appears necessary to consider a tightly spaced series of \(\mu\) values. One also needs a few lattice spacings in order to perform a continuum extrapolation.
The next step would be to use these \(P_{{}_{\rm PQ}}(\mu)\) values as constraints, when considering the high-density QCD equation of state. A proposed QCD equation of state is usually expressed as a curve in either the \((P,N)\), \((N,\varepsilon)\), or \((P,\varepsilon)\) plane. Standard thermodynamical relations can convert this into a curve in the \(P,\mu\) plane; if the proposed EOS exceeds \(P_{{}_{\rm PQ}}(\mu)\) for any \(\mu\) where we have data, it is excluded.
Next, the possibility of evaluating \(P_{{}_{\rm PQ}}(\mu)\) directly on the lattice at large \(\mu\) can be used as a check on the performance of the perturbative expansion. Specifically, one can consider the perturbative expansion for \(P_{{}_{\rm PQ}}(\mu)\), which as we have argued is the same as the expansion for \(P(\mu)\) at the currently known order of \(\alpha_{\rm s}^{3}\ln(\alpha_{\rm s})\)[46]. By comparing this result to the lattice-determined \(P_{{}_{\rm PQ}}(\mu)\), one can assess how accurate the perturbative series is as a function of the scale \(\mu\).
Finally, one should perform an accurate evaluation of the single diagram which introduces \(\mathcal{O}(\alpha_{\rm s}^{3})\) differences between \(P_{{}_{\rm PQ}}\) and \(P\). In any \(\mu\)-region where the result is actually a small correction, this can be used together with a lattice-determined \(P_{{}_{\rm PQ}}(\mu)\) to provide an improved estimate of \(P(\mu)\). The resulting estimate would also be perturbatively complete at \(\mathcal{O}(\alpha_{\rm s}^{3})\), with only \(\alpha_{\rm s}^{4}\ln(\alpha_{\rm s})\), higher-order, and nonperturbative corrections remaining.
We would like to thank Gergely Endrodi and Saga Sappi for useful discussions. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 "Strong-interaction matter under extreme conditions"- project number 315477589 - TRR 211, the DFG-Project ID 279384907-SFB 1245, and by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006).
|
2309.04044 | Capturing continuous, long timescale behavioral changes in
$\textit{Drosophila melanogaster}$ postural data | Animal behavior spans many timescales, from short, seconds-scale actions to
circadian rhythms over many hours to life-long changes during aging. Most
quantitative behavior studies have focused on short-timescale behaviors such as
locomotion and grooming. Analysis of these data suggests there exists a
hierarchy of timescales; however, the limited duration of these experiments
prevents the investigation of the full temporal structure. To access longer
timescales of behavior, we continuously recorded individual $\textit{Drosophila
melanogaster}$ at 100 frames per second for up to 7 days at a time in
featureless arenas on sucrose-agarose media. We use the deep learning framework
SLEAP to produce a full-body postural data set for 47 individuals resulting in
nearly 2 billion pose instances. We identify stereotyped behaviors such as
grooming, proboscis extension, and locomotion and use the resulting ethograms
to explore how the flies' behavior varies across time of day and days in the
experiment. We find distinct circadian patterns in all of our stereotyped
behavior and also see changes in behavior over the course of the experiment as
the flies weaken and die. | Grace C. McKenzie-Smith, Scott W. Wolf, Julien F. Ayroles, Joshua W. Shaevitz | 2023-09-07T23:25:10Z | http://arxiv.org/abs/2309.04044v1 | # Capturing continuous, long timescale behavioral changes in _Drosophila melanogaster_ postural data
###### Abstract
Animal behavior spans many timescales, from short, seconds-scale actions to circadian rhythms over many hours to life-long changes during aging. Most quantitative behavior studies have focused on short-timescale behaviors such as locomotion and grooming. Analysis of these data suggests there exists a hierarchy of timescales; however, the limited duration of these experiments prevents the investigation of the full temporal structure. To access longer timescales of behavior, we continuously recorded individual _Drosophila melanogaster_ at 100 frames per second for up to 7 days at a time in featureless arenas on sucrose-agarose media. We use the deep learning framework SLEAP to produce a full-body postural data set for 47 individuals resulting in nearly 2 billion pose instances. We identify stereotyped behaviors such as grooming, pro-boscis extension, and locomotion and use the resulting ethograms to explore how the flies' behavior varies across time of day and days in the experiment. We find distinct circadian patterns in all of our stereotyped behavior and also see changes in behavior over the course of the experiment as the files weaken and die.
technology | behavioral tracking | pose estimation | circadian rhythms | aging
## 1 Introduction
Uncovering the temporal structure of behavior has long been a topic of theoretical interest and experimental challenge [1, 2, 3, 4]. Animals carry out sequences of behaviors on many timescales, from the short timescales of the individual movements required for grooming, eating, and social communication to the longer timescales of hunger, arousal, circadian cycles, mating seasons, and the aging process. The specifics of these behavior sequences determine much of what we can characterize about an animal, such as its health, reproductive fitness, and that idiosyncracy of action that we might call 'personality.' These behavior sequences also give us indirect ways to assess the internal processes of the animal, such as neural activity, gene expression, and other internal states like hunger or fatigue. Finding general principles that govern the order of behaviors would be an exciting step forward in understanding how animals interact with the world around them and how internal factors may shape that interaction. This course of study requires data that covers the many timescales over which animal behavior varies.
Historically, taking long-timescale data covering days or weeks of an animal's life has required balancing continuity, throughput, and dimensionality. _In Drosophila melanogaster_, simple experimental setups, such as beam-break assays, allow for continuous monitoring of activity levels over days [5, 6], but fail to capture the high-resolution data necessary for modern techniques of behavior analysis such as MotionMapper [7], B-SOiD [8], VAME [9], or Keypoint-MoSeq [10]. On the other hand, the acquisition of high-resolution data has been restricted to short timescales by the computational resources required to store and process the extremely large imaging data, imposing an upper limit on the order of an hour. When studying fine-grained behavioral variation at longer timescales, previous work utilized short recordings taken from different individuals with ages distributed across the lifespan of the animal [11].
Here, we leverage recent computational advances to record a high-resolution continuous data set of _D. melanogaster_ behavior spanning 4-8 days. We recorded 47 freely moving _D. melanogaster_ using constant IR illumination and an IR-sensitive camera at a frame rate of 100Hz, with a 12-hour visible-light day/night cycle. We tracked 14 body parts from each fly using SLEAP [12] and utilized MotionMapper to charac
terize stereotyped behavioral states, such as grooming, locomoting, and feeding. Using techniques of compositional data analysis [13], we characterize the dynamics of this behavioral repertoire across time of day and over the days of the experiment. We find distinct circadian patterns in all measured behaviors, including grooming, proboscis extension, and locomotion speed. We see an overall decline in circadianicity, the difference in behavior between day and night hours, across days in the experiment as flies weaken an die, and see general declines in feeding and locomotion speed as the fraction of time spent in an idle state increases. Overall, we find that our data captures both expected and novel patterns of _D. melanogaster_ behavior across multiple 24-hour periods. We also provide this data to the broader community as a resource to study _D. melanogaster_ behavior as it evolves along timescales beyond the scope of previous research.
## Results and discussion
We designed a recording apparatus to allow for continuous capture of _D. melanogaster_ behavior over the course of days (see Methods for details and Figure S1A). _D. melanogaster_ were constantly illuminated from above with IR light, to which they have minimal visual sensitivity [14], while LED panels provided a 12-hour visible-light day/night cycle with the same on/off times under which the animals were raised. We made arenas by layering pieces of transparent laser-cut acrylic to create cylindrical chambers in which flies lived and behaved over the course of our experiments (Figure S1B). We limited the arenas to 1.5mm in height to prevent flying and to decrease the incidence of wall walking and ceiling walking, which lower tracking quality. We provided the flies with a base gel layer of sucrose-agarose, which permitted survival of up to 7 days while preventing the significant fungal growth observed when yeast extract was included. We recorded four freely behaving _D. melanogaster_ in individual chambers per camera at 100 Hz with a resolution of 28.25 pixels/mm
Figure 1: Experimental schematic showing tracking, lifespans, and behavioral segmentations across timescales. **A** Image showing the experimental arena as viewed from below. The behavior of 4 \(\Delta\)_D. melanogaster_ is captured simultaneously while giving each by enough room to freely carry out at behaviors except light. **B** Magnified view of a single individual showing tracks for each node of the SLEAP selection. Each color denotes a node and circle sizes increase with time. **C** Survival curve of the 47 flies included in the experiment. Death occurs on average after \(\sim\)119 hours, or almost 5 full days into the experiment. **D** Eltogram and geocoritized traces for each track and a raster denoting proboscis visibility. **E** Rapid showing the geometric means of stereotyped behavior components across all flies and all complete 24h periods. **F** Barplot showing the geometric means of stereotyped behavior components across all flies and all hours grouped by experimental day.
(Figure 1A). This is sufficient to resolve relevant features of the _D. melanogaster_ body such as the tarsi (leg tips) and proboscis (Figure 1B).
For each experiment, we imaged male _iso\({}^{KH11}\)D melanogaster_ from two days post-eclrosion (emergence from the pupa as an adult insect) until death, yielding 4-8 days of continuous recording with half the flies dying by Day 5 (Figure 1C), for a total of 5,584 fly-hours. Note that this lifespan is shorter than conventional assays due to the nutrient-limited sucrose-based food source we used to avoid fungal growth [15].
In the natural world, daytime conditions increase lighting and temperature, but in the lab, the _D. melanogaster_ circadian cycle can be maintained by these factors in isolation, with a day/night lighting cycle under constant temperature or a temperature cycle under constant lighting conditions [16, 17, 18]. Our experiments have both a change in light intensity and temperature conditions between day and night, with daytime temperature levels varying between different experiments (\(\sim\)28-29 \({}^{\circ}\)C for experiments 1-2, and \(\sim\) 30-31 \({}^{\circ}\)C for experiments 3-4) and nighttime temperatures settling to \(\sim\)27 \({}^{\circ}\)C for all experiments (see Methods and Figure S9). We provide temperature and humidity recordings with our dataset.
To extract postural information from our data, we used SLEAP, a deep-learning-based framework that can infer animal posture based on user-generated training data [12] (See Methods for details). We tracked a 14-point skeleton comprised of the head, eyes, thorax, abdomen, wing tips, tarsi, and proboscis of each individual (Figure 1B). While our mean localization error was less than.1mm (Figure S2), the quality of the tracks decreased when animals walked along the edges of the arenas. Accordingly, we built a classifier to identify the time points when flies walked on the edges (Figure S3), and excluded these time points from the portions of our analysis reliant on accurate tracking of any body part but the thorax.
In order to quantify discrete, stereotyped behaviors, we modified the MotionMapper pipeline [7] to parallelize more steps and optimize use for postural data instead of raw images (Figure S4). We used the Lomb-Scargle periodogram rather than a continuous Morlet wavelet transform to generate power spectra for each body part as this algorithm does not require interpolation of missing data. As a first step, we classified all time points with a total power of less than \(0.5012mm^{2}\), summed over all tracked positions, as 'idle', i.e. times where the flies are not moving at all. We exclude all time points classified as idle and all non-idle edge time points from the spectral analysis that follows. 'Idle' and 'non-idle edge' then become their own behavioral categories.
Our total amount of data is too large to allow for direct classification of behaviors from all time points. Instead, we generated a set of 141 one-hour videos sampled evenly across flies, time of day, and day of experiment. From this subset of the videos, we selected 64,014 time points representative of the full suite of observed dynamics via an importance-sampling algorithm [7], and embed the power spectra from these points into two dimensions using the UMAP algorithm.
We then embedded all time points from the 141 hour subset and found well-separated peaks of high density using an adaptive threshold (Figure S5). We assigned behavior labels to these regions by looking at randomly selected clips from time points where the flies' dynamics tell within a given regions' boundaries for a reasonable length of time. We grouped together regions of similar dynamics, and identified seven well-defined behaviors: idle, proboscis extension, fore grooming (of the eyes or forelegs), hind-grooming (of the abdomen or hindlegs), wing grooming, altered locomotion (often involving slipping or limping), and locomotion. The idle behavior state includes all points assigned as idle using the total power cutoff as well as several regions of the spectral embedding that contained idle behaviors with single-limb tracking errors. In addition to these well defined behaviors, \(\sim\)15% of all time points represent unstereotyped dynamics, where the fly is either on the edge and non-idle, or where its dynamics fall outside the boundaries of the identified peaks of stereotyped behaviors. We exclude these time points from later analyses. Finally, we project the full data set into the two-dimensional space and use the behavioral boundaries from the training set to classify each time point as one of the six stereotyped behaviors, idle, non-idle edge, or
Figure 2: Principal component analysis of stereotyped behavioral components of all flies across all experimental hours. A Bipid showing the projections of individual fly hours and loadings of each stereotyped behavioral component. Dark gray dots show timepoints from when lights are off and light gray dots show timepoints when lights are on. **B** Projection of PC1 against time of day for all complete 24h periods of all flies. **C** Projection of PC2 against time of day for all complete 24h periods of all flies. **D** Circadianity vs day of experiment as measured by the difference between the average projection onto PC1 during night hours minus the projection onto PC1 during day hours.
unstereotyped.
We used 5 frames (1/20 of a second) as a minimum bout length for each stereotyped behavior, and forwarded each fly's behavior sequence with this bout length, assigning any bout of 4 frames or less to the previous region of long duration. The resulting ethograms permit analysis of patterns in locomotion, feeding, and grooming (Figure 1D). Because our data is continuous over multiple 24-hour periods, we can look at how behavior varies with time of day and across days of the experiment.
Our data is closed (i.e. the fraction of time spent in all behaviors must add up to one) requiring us to use methods of compositional data analysis to examine changes across flies, hours, and days [13, 19]. Averages of closed data are best calculated as geometric means, which we denote 'behavior components'. To discuss circadian behavioral effects, we use Zeitgeber time (ZT), where time is measured from the onset of a periodic stimulus rather than from midnight on a clock, to capture the cyclic nature of circadian effects. For this set of experiments, ZT = Oh corresponds to the visible lights coming on, and ZT = 12h corresponds to lights turning off.
Looking across all fly hours and all days, we see a distinct circadian pattern of behavior with higher levels of idle during the dark hours, and more locomotion and grooming during the light hours (Figure 1E). The first hour after the lights turn on is particularly distinct, with comparatively high levels of locomotion and grooming. The locomotion and grooming behavior components increase in the hours leading up to lights on and lights off, indicating anticipation of the change in lighting condition. Over the course of the experiment, the flies' behaviors start changing significantly after Day 3 (Figure 1F). Time spent in idle increases over Days 4 through 6 as flies begin dying on the nutritionally incomplete food used for this experiment.
To examine overall behavior variation across hours of the day, we carried out a principal components analysis of the compositional data [20, 21] using the compositions package in R [22]. We used the isometric log-ratio transformed behavior compositions to carry out robust principal components analysis using the Minimum Covariance Determinant (MCD) method, and then backtransformed the result into centered log-ratio loadings. The first three principal components (PCs) explain \(\sim\)85% of the variance across all fly hours (Figure S6).
As can be seen in the bipilot of the first two PCs (Figure 2A), PC1 largely weights the locomotion behaviors, locomotion and altered locomotion, against idle and pro-boscis extension. This PC describes the main differences between day and night, with positive projection averages during the day, corresponding to more locomotion/grooming, and negative values at night when the animals are idle (Figure 2B). The average projection along PC1 begins to increase before the lights turn on, indicating that the animals anticipate the rise of the sun. Peak amplitude along this PC occurs just after the lights turn on, potentially indicating a slow morning transition from nighttime behaviors to daytime activity. The level of this projection stays roughly constant throughout the day, but then increases and peaks just before the lights turn off at 12h ZT. This is followed by a slow decline in the amplitude after dark until reaching a steady night level.
Previous behavioral studies of the _D. melanogaster_ circadian cycle have used relatively coarse metrics, such as the activity counts generated by _Drosophila_ Activity Monitors [5]. These studies have shown that _D. melanogaster_ have peaks of locomotion activity around their subjective morning and evening, with the increase in activity slightly anticipating the actual change in lighting conditions [23, 24]. Our high-resolution behavioral data and the projection along PC1 recapitulate these general trends, but show quantitative difference when the lights change. In particular, we see gradual change in amplitude after lights turn off that last several hours whereas this previous work sees a more abrupt session of locomotion at this time.
The second principle component weights the three grooming behaviors (fore, hind, and wing) against the locomotion behaviors and proboscis extension. The average projection onto PC2 has a distinct peak during the hour just after lights turn on, separating this unique part of the circadian pattern from the more general day vs. night changes in behavior picked up by PC1 (Figure 2C). PC3 largely separates the first two experiments (begin 02/17/2022 and 03/13/2022) from the second set of experiments (begin 03/26/2022 and 04/18/2022) (Figure S7). The second set of experiments took place at higher temperatures (Figure S9). The difference in the projections of each fly-hour along PC3 between the two sets of experiments is lowest on Day 1, and increases over experimental days.
Since the amplitude along PC1 largely follows the day-night cycle and describes the circadian change in behaviors, we use the difference between the average value of PC1 during light and dark hours to define a 'circadianicity' value for each fly day. We find that circadianity decreases steadily over days in the experiment (Figure 2D). Previous studies have found that the sleep/wake cycles of behavior in _D. melanogaster_ weaken as they age [25]. While the flies in our experiment were all comparatively young (all died before 10 full days, whereas life expectancy is 2-3 months under ideal conditions), they were living in very harsh conditions of relatively high temperature, low humidity, and poor nutrient availability. The gradual weakening over the course of the experiment is in some ways similar to an accelerated aging, and the steady decline in circadianity over 6 days is similar to the decline in the strength of the sleep/wake cycle seen in over experiments over 60 days [25].
We leveraged our high-resolution behavior data to carry out an in-depth analysis of _D. melanogaster_ circa
dian patterns of behavior, focusing our analysis on the first day of the experiment when circadianity was strongest(Figure 2D). Flies were reared from embryos to two day old adults with the same light/dark cycle time and phase as the experiments. Even before eclosing, _D. melanogaster_ exhibit circadian patterns of certain behaviors, such as larval negative phototaxis [28] or eclosen [29], so it is unsurprising that even 2-3 day old adults already have a strong circadian pattern.
The geometric means of behavior components across all flies versus ZT for Day 1 show the expected pattern of increased idle during the night and increased locomotion during the day (Figure 3A). The hour just after lights on remains the most distinct, with a very low idle behavior component. After lights off, the flies take \(\sim\)2 hours to settle into their characteristic high idle, low locomotion night state.
The temporal changes we observe in the two locomotion behaviors (altered locomotion and locomotion) are similar, as are the changes in the different grooming behaviors (fore, hind, and wing grooming). Using these strong correlations, we condensed our seven stereotyped behavior components into three categories, grouping together the grooming behaviors, the locomotion behaviors, and idle and proboscis extension. This allowed us to plot the average behavior composition for each circadian hour in a ternary plot to visualize differences in overall behavior across circadian time and along the previously identified PCs (Figure 3B). The day and night hours cluster together and largely lie along PC1 as expected. The two hours just after lights off fall between these clusters as the flies transition into their night state of behavior. The hour just after lights on is an outlier, falling well off the line of variance explained by PC1, with higher proportions of grooming and locomotion behaviors compared to all other circadian hours. This hour lies in the direction of increasing PC2, which explains \(\sim\)17% of the variance in the data. This, combined with the peak in the projection of behavior components along PC2 during the hour after lights on (Figure 2C) indicates that this hour is a unique time point in the circadian cycle of behavior.
To further investigate the circadian pattern of grooming, we looked at the enrichment of grooming behaviors at each circadian hour compared to the geometric mean of the grooming behavior component across all hours (Figure 3C). It has been shown that spontaneous grooming is under circadian control, but no clear pattern of when
Figure 3: Circadian patterns of behavior on experimental Day 1. **A** Barplot showing the geometric means of stereotyped behavioral components of the first experimental day across all flies. **B** Ternary plot showing the geometric means of condensed behavioral components across all flies for each circadian hour of Day 1. Directions along PC1 (dashed) and PC2 (solid) as calculated by perturbing the geometric mean of the displayed data points [26]. The ternary plot was generated using the Ternary Plots package in MATLAB [27]. **C** Groming enrichment with respect to the geometric mean of the condensed grooming behavioral component of the first experimental day for all fly with bootstrap confidence intervals. **D** Mean proboscis but length by hour of the first experimental day. The shaded region is the standard error. **E** Mean locomotion speed (mmis) during stereotyped locomotion state by hour of the first experimental day. The shaded region is the standard error.
grooming happens throughout the day has been identified [30]. We find that grooming behaviors peak in the hour after lights on, contributing to the uniqueness of that time point, in agreement with our analysis of PC2. This temporary spike in grooming behavior may come from a need to refresh the various sensory appendages that lie along the body after a prolonged nighttime period without grooming.
Grooming remains enriched during the day, although this enrichment decreases after the early morning hours. Of the specifically identified grooming states, flies spend the most time grooming their fore limbs and eyes, with a lower proportion of time spent in hind grooming and wing grooming. This follows the flies' hierarchy of grooming motor programs, where fore grooming is prioritized, followed by abdomen grooming, which is captured in our hind grooming state, and finally wing grooming [31].
We also looked at daily eating patterns, using proboscis visibility as a proxy for feeding as proboscis extension is well correlated with food intake [32]. Previous studies report peak feeding activity centered around lights on and lights off in the mornings and evenings, with more feeding concentrated in the evening [33, 34]. Probocis extension comprises a very small fraction of our data, less than 1% of the overall behaviors across all time points. Because it is such a small component, using compositional data analysis techniques is challenging, as many true zeros exist in the proboscis data. To get a better sense of the circadian nature of proboscis extension (and feeding), we instead look at the average duration of proboscis extension bouts over the course of the day (Figure 3D). We find that flies typically leave their proboscis extended for about three seconds during night bouts, and about 2 seconds during day bouts. By this measure we do not see notable peaks of morning and evening feeding, but instead a more general trend of more time spent feeding at night, and less during the day.
Our observed trend of the locomotion behavior component with the time of day differs from results from previous studies using activity counts to measure overall movement levels. While there is a slight increase in the locomotion behavior component in anticipation of lights on in our study, it is less dramatic than the increase in activity counts observed in previous work [23], and we see no peak in locomotion around lights off compared to the locomotion behavior component throughout the day hours. However, the circadian pattern of locomotion speed (the mean speed of flies only when they are in the 'loconotion' state, calculated with the mean on a 5 frame rolling window) has peaks around each change in lighting conditions, along with anticipatory increases, particularly for lights on (Figure 3E). In _Drosophila_ Activity Monitors, activity counts are recorded each time a fly crosses an infrared beam [5]. These counts could increase due to a combination of increased movement time and increased movement speed. Our results indicate that it is an increase in movement speed, rather than time spent moving, that is responsible for the larger activity count peaks around lights on and lights off. The increase in locomotion speed before lights on, and the gradual falling off after lights off, indicates that flies are modulating their movement speed partially due to internal cues, rather than only as a startle response or some other reaction to lights on.
In addition to circadian patterns of behavior, the flies' behavior changed across experimental days as they weakened and died. Because of the nutritionally incomplete food and the relatively high temperature and low humidity, flies in our experiment all died within 8 days. The behavioral composition remained relatively constant across the first 3 days of the experiment, but starting at Day 4 the idle component began to increase (Figure 1F). This is similar to the increase in the proportion of time male flies spend idle near the end of their lives in a more natural aging paradigm [11]. Flies also show a reduction in the propensity to spend more time near the edge of the arena rather than the center after the first 3 days of the experiment (Figure S8A). The wall following behavior of _D. melanogaster_ likely arises from boundary exploration, possibly as a means of seeking escape from a given enclosure [35]. Over the course of the experiment, flies decrease wall following behavior as habituation to an unchanging environment decreases exploration activities [36]. However, the edge preference is difficult to disentangle from the differences in the fraction of time spent in other stereotyped behaviors at different radii (Figure S8B). Flies spend an increased fraction of time in locomotion near the edge of the area and a increased fraction of idle near the center of the arena, and these differences may drive the observed changes in edge preference.
Since the hour after the lights turn on is such a unique time point in the circadian pattern of behavior, we were curious how the behavior components during that hour change over the days of the experiment. The geometric means of the relative proportions of the grooming behaviors, idle behaviors, and locomotion behaviors across all surviving flies in the hour after dawn remain similar for the first 3 days of the experiment, lying in a cluster offset from PC1 in the ternary plot (Figure 4A). Starting at Day 4, however, the hour after dawn components begin falling onto PC1, and are much more similar to other circadian time points. This behavioral composition moves towards lower values of PC1 with age and becomes more similar to the nighttime composition. Thus, as the flies in our experiment weaken and die, not only do their day and night behavior patterns begin to look more similar, they also lose the distinctive behavioral character of the hour after dawn.
We also asked how feeding and locomotion change with age in our experiments. We find that proboscis bout duration decreased steadily through Day 3 and then plateaued (Figure 4B). It has been reported that flies eat more as they age [37], but the limited food source and
harsh environmental conditions may change this trend for the flies in our experiment. In contrast, the average locomotion speed remained steady through Day 3 and then began decreasing with age (Figure 4C). The combination of steady locomotion speed and no increase in the fraction of time spent locomoting means that overall 'locomotion activity', comparable to traditional activity counts, does not appreciably change over the first 3 days of the experiment after which there is a decline. Previous studies have shown that male flies in a natural aging context have an increase in locomotion activity during early life, before a decrease leading up to death [11, 38]. We do not see this increase at young age in our experiment, however, lifelong locomotion patterns are genotype-dependent [39], so results from _theisGKH11_ flies used here may not be not directly comparable to these previous studies.
## Conclusion
We report the first measurements of high resolution _D. melanogaster_ behavior recorded over many days with high temporal bandwidth. By leveraging recent advances in GPU-based video processing and postural inference, we captured the behavior of freely moving _D. melanogaster_ over the course of multiple days, encompassing the behavioral effects of circadian rhythms, starvation, aging, and habituation at continuous high resolution. Our data recapitulates many previously described trends in _D. melanogaster_ circadian and aging/dying patterns of behavior. We also leveraged high resolution postural data in combination with fine-grained ethograms to characterize changes in whobels extension bout duration and locomotion speed across time of day and over the days of our experiment. With compositional data analysis techniques, we identified the hour after lights on as a uniquely distinctive time point in the circadian pattern of behavior.
Our data addresses several limitations of the high-quality ethological data currently available. Previous work on the temporal structure of behavior has found correlations extending beyond the length of the available data, typically 30-60 minutes [40, 41]. The data presented here extends these time scales by more than two orders of magnitude. This data set is also the first to continuously capture high dimensional, high-resolution behavioral data across a circadian cycle, allowing us to investigate how changes in internal state related to time of day affect behavior. By recording when flies feed (as measured by proboscis extension), this data may also provide new insights into the effects of hunger and satiety. We provide both high-resolution recordings and our postural tracking output to facilitate further data analysis. The analyses presented here leverage only a fraction of the resolution and dimensionality provided by our data, and we hope this 100-fold increase in the amount of high-quality ethological data available will give rise to yet more tools and techniques. Finally, aging in our experiments was significantly accelerated due to nutrient limitation. Future work with new kinds of arenas and food sources may extend this type of high-resolution behavioral recordings to cover the full natural lifespan of a fly.
## Methods
### Fly rearing
To control for possible genetic effects, we used the highly inbred wild-type _isoKH11_ strain. _isoKH11_ flies were raised on standard corneal media (see github.com/shawfik-lablong-timescale-analysis for complete recipe) at 25\({}^{\circ}\) under humidity 60% with a 12-hour light/dark cycle, with visible light of \(\sim\)1300lux. Before each experiment, we performed egg lays and, on eclosion, flipped flies into new vials. We allowed the flies to age for two days, yielding 2-3 day-old flies, which we anesthetized using CO\({}_{2}\) and distributed males to arenas to be imaged.
### Media
During experiments, flies were allowed to feed ad lib from a pad of optically clear media (10% sucrose, 1.5% agarose). We were not able to include a protein source, such as yeast extract, as this led to high levels of fungal growth within 1-2 days that obscured imaging.
Figure 4: Day wise behavioral changes throughout the experiment. **A.** Temary plot showing the geometric mean of the condensed stereotyped behavioral components of the first hour after lights on across all surviving flies for each complete 24h period. Directions of PC1 (dashed) and PC2 (solid) are also shown, as calculated based on perturbation of the geometric means of all circadian hours from Day 1. **B** Mean proboscis bout length by day of experiment. The shaded region is the standard error. **C.** Mean locomotion speed (mms) during the stereotyped locomotion state by day of experiment. The shaded region is the standard error.
### Area
We constructed experimental arenas out of laser-cut acrylic using acrylic cement (McMaster 7517A4) to adhere layers together (Figure S1B). The bottom layer of each arena consisted of a 3mm layer of food (described above). Each individual fly was able to freely move about within a 25mm diameter cylinder of height 1.5mm. Because these arenas have straight walls, flies are able to walk along the sides, which can cause limb occlusions that pose difficulties to downstream postural tracking. To address this, we used a low arena height that impedes flies from easily maneuvering off the base layer. We also scaled the top and walls with Symancote (Sigma-Aldrich SL2), which discourages flies from walking on the ceiling of the arena but does not fully restrict them from walking on the edges of the arenas.
### Imaging and illumination
The arenas are lit from above using 880nm IR LED pads (Advanced illumination BL040801-880-1C). Below each arena, we placed high-resolution, high frame-rate cameras (FLR BP85-303-254M-C) paired with 880nm band-pass filters (Thot alas FB898-70) (Figure S1A). This combination allows bright, uniform lighting angles for the arenas permitting extremely short exposure times to reduce motion blur. Imaging from above and recording from below also eliminates condensation in the arenas. We found that the ideal balance between contrast and motion blur was at 1 ms exposure time. In addition, we used a pair of visible light LED panels at the top of the tent enclosing the experimental setup to provide a 12-h visible light (\(\sim 6500\)km) and 12-h darkness cycle (\(<\) kilv), matching the timing of the lightdark cycle under which experimental flies were reared. This visible light cycle did not appreciably affect the IR imaging.
### Temperature and humidity
We recorded temperature and humidity within the imaging enclosure throughout the trials (Supplemental Figure 2) with a Extech RHT10 dataloader. As temperature and humidity have known effects on fly behavior [42, 43], these data are provided with the behavioral data set so that they may be taken into account (Figure S9). The environmental controls of the room in which our experiments were housed cycle on and off, leading to \(\sim 1^{\circ}\) C temperature fluctuations with a period of \(\sim\)1 hour.
### Acquisition software
We used a modified version of campy (github.com/Wolffff/campy) forked from github.com/kskseworson57/campy which was developed by Iyfea Severson. We altered the package to suit our specific use-case, including chunking videos and adjusting the exception handling. Cappy pipes frames from FLRTs Spinaker SDK (PySpin) to Frmp-pe. The flexibility of Frmpegg allows us to drastically reduce the life size of our videos by utilizing hardware-based compression. Specifically, we use Nvidia NVENC (hevc_nvec) paired with the segment_time flag to produce hour-long chunks. This increased compression makes it feasible to perform high-throughput recordings of 8 flies simultaneously on a single computer. To facilitate ease of use in analysis and distribution, we merge these videos into long videos; however, because loading tens of millions of frames and instances can cause 10 issues, we use hour long segments for training.
The machines used for recording were running Windows 10 with 64GB RAM, Intel(R) Core(TM) i7-8700K CPU processor, and either Nvidia Quadro RTX 4000, Quadro P2000, or GeForce RTX 2080 GPU.
### SLEAP tracking
After imaging, SLEAP [12] was used to estimate the pose of each individual and maintain identity across videos. We used a 14 node skeleton: head, eyes (eyel, eyeR), proboches, thorax, abdomen, fore legs (forelegL, forelegR), mid legs (midlegL, midlegR), and the hind legs (hingled, hindlegR). We labeled 1930 individuals across 482 frames. 434 frames (1738 instances) were used for training, with 48 frames (192 instances) reserved for validation. We trained a U-Net based model with a receptive field size of 76 pixels (\(2.6\)mm) on Nvidia A100 GPUs. The complete hyperparameter set is provided along with the model. We include some training data from recordings not included in the final data set due to early truncation but with identical frame rates and resolution. To facilitate dealing with the more than 500 million frame dataset, we use SLURUR to distribute our inference across 300 Nvidia P100 GPUs at approximately 20 ps yielding approximately 600ps - 6x speed - tracking. After inferring locations with identity, we merged the resulting.slp flies together and ran SLEAP's identity tracking script to preserve identity over time. For convenience of analysis and storage, we convert each.slp life to HDF5. Since individuals are in separate chambers, we can validate these identity tracks by the amount of time spent in each quadrant of the arena. The pipeline for sectioning, merging, and tracking can be found on the associated GitHub repository.
### Edge detection
While flies spend the majority of their time in the flat bottoms of the arenas, there is a small proportion of time (\(\sim\)5%) when they are oriented sideways with respect to the cameras with their tars on the walls of the arenas. In this position the legs are often occluded and difficult to identify, leading to SLEAP tracking errors. In order to provide a flag for time points when the flies are on the edge and tracking fidelity is compromised we used the MATLAB Classification Learner App to train an SVM to identify whether flies are on or off the edge based on the all-by-all distances between tracked body coordinates (excluding the proboches), the speed of each body coordinate, and the distance between each body coordinate and the edge of the arena. We used 2788 training points equally split between on and off edge instances, and sampled evenly across all experimental files. Our final model accurately labeled 95% of held out validation points (Figure S3).
### Unsupervised behavioral classification
To identify stereotyped behaviors from body-part dynamics, we adapted the previously described MotionMapper pipeline [7] for our data (Figure S4). We first partially filled in missing data, interpolating all missing data for head and thorax points using Piecewise Cubic Hermple Interpolation Polynomial (PCHP), to allow for subsequent egocentriczing. For all other nodes, we performed CPHP interpolation with a limit of filling 5 consecutive missing values. Further, for the proboches node, we replaced all missing values with the location of the had, representing a retracted proboches. Further, we performed a median filter on all nodes with a window size of 5 and Gaussian filtering with standard deviation 1 and window size 5. Following this, we apoorooritized the data by shifting all individuals so that the thorax is at (0, 0) and rotating each node location so that the thorax-based connection fails along the positive x-axis. After this, we calculated the Lomb-Scargle periodogram on rolling windows for each coordinate of each node. Because the Lomb-Scargle periodogram allows the utilization of unevenly sampled data and avoids the necessity of providing fully interpolated data. Further, by adjusting the window size based on our frequency of interest, we are able to capture behaviors across timescales similar to the envelope size in continuous wavelet transforms.
We compiled a representative subsample of our data by selecting 141 fly hours evenly across files and time of day. Because flies are dying throughout the course of the experiment our sample set is slightly skewed towards earlier days to maintain even sampling across files. We filtered training points from this subsample of data by removing time points where the flies were on the edge. We also removed time points we classified as idle where the total amplitude of the wavelets was less than \(0.5012mm^{2}\), a threshold we empirically determined to separate the majority of idle instances where the fly was largely motionless. From these, we sampled 36000, or the maximum number of unfiltered time points, from each fly-hour. From each of the these groups, we importance sampled 454 time points for a total of 64,014 training points.
We embedded our importance-sampled training set into two dimensions using UMAP and used this map for behavioral segmentation. We found that UMAP resulted in superior separation into unique clusters for the total training set when compared with t-SNE. We used kernel density estimation to create a 2D probability distribution of our training points. To identify distinct peaks in the density of training points we eliminated points of extreme low density and utilized adaptive thresholding on the resulting distribution. We adjusted parameters by eye to achieve distinct clusters for obviously separate peaks of density while aiming to avoid oversegmentation.
In order to assign specific discrete behaviors to each region of stereotyped power spectra we randomly selected clips from our sample set (141 fly hours) corresponding to each region. We imposed a minimum duration based on the dwell time distribution for each region to avoid very short bouts where behaviors might be difficult to identify. We identified six well-defined stereotyped behaviors (probsics extension, fore grooming, high grooming, wing grooming, altered locomotion, and locomotion) as well as many clusters that corresponded to idle behaviors with single-joint SLEAP tracking errors.
We then embedded our entire dataset in the same two dimensional space. Using the boundaries defined on the training set we assigned all time points to one of our six well-defined stereotyped behaviors, idle, edge (as called by our edge detector), or unsteetotyped. With this method, only \(\sim\)15% of our data is classified as unsteetotyped behavior.
Dwell times within these behavior states can vary from single frames to many hundreds of frames. To identify a reasonable minimum but length we fit two geometric distributions to the total dwell time histogram. We selected 5 frames (\(\sim\)1/20 of a second) as a minimum bowl duration, as this excludes \(\sim\)95% of bouts from the distribution dominated by shorter bouts, and only \(\sim\)14% of bouts from the distribution of longer bouts, which we take to include legitimate behavior bouts. We forward-filled ethograms with this boat duration, assigning any bout of 4 frames or less to the previous behavior of long duration.
## Data availability
The data repository associated with this paper can be found at doi.org/10.34770/1sab-8845. For each individual, we provide a single HDF5 file that includes datasets for the tracked body parts, stereotyped behaviors, onuf edge classification, temperature and humidity data, along with experimental metadata such as start date and time and lights on and off times. Videos cropped to contain individual files are also provided. The original uncropped videos and the full postural tracking data, as slip flies with prediction scores for each body part of each individual, are available upon request.
## Code availability
The source code for the data analysis is publicly available. The code can be found on GitHub (github.com/shaewiz-lab/long-timescale-analysis). The repository includes the scripts used in this paper along with other pragmatic tools and examples.
The modified version of MotionMapperPy [7] we use can be found at [https://github.com/Wolffff/motionmapperpy](https://github.com/Wolffff/motionmapperpy) and included as a git submodule in the primary repository.
## Acknowledgements
The authors acknowledge the Aspen Center for Physics where this work was first conceptualized, Gordon Berman, Uspa Kiblaine, and Greg Stephens for inspirational discussion, and Diogo Melo for insightful comments on how to speed up our processing pipeline. This work was supported in part by the NSF through the Center for the Physics of Biological Function (PHY-1734030). SWW is supported by the NSF Graduate Research Fellowship Program (DGE-2039565). GCM-S is supported by the Paul F. Glenn Laboratories For Aging Research at Princeton University. JFA is funded by grants from the NIH National Institute of Experimental Health Sciences (R01-ES029929) and National Institute of General Medical Sciences (R35GM124881). We also acknowledge that the work reported in this paper was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSeIE) and the Office of Information Technology's Research Computing group.
## Author contributions statement
Conceptualization, JWS and SWW; Initial methodology, SWW; Methodology development, SWW and GCM-S; Investigation, SWW and GCM-S; Normal analysis, SWW and GCM-S; Resources, JFA and JWS; Writing-Original draft, SWW and GCM-S; Writing-Review & Editing, SWW, GCM-S, JFA and JWS; Funding Acquisition, SWW, JFA, and JWS.
## Competing interests
The authors declare no competing interests.
|
2309.14257 | iMaNGA: mock MaNGA galaxies based on IllustrisTNG and MaStar SSPs. --
III. Stellar metallicity drivers in MaNGA and TNG50 | The iMaNGA project uses a forward-modelling approach to compare the
predictions of cosmological simulations with observations from SDSS-IV/MaNGA.
We investigate the dependency of age and metallicity radial gradients on galaxy
morphology, stellar mass, stellar surface mass density ($\Sigma_*$), and
environment. The key of our analysis is that observational biases affecting the
interpretation of MaNGA data are emulated in the theoretical iMaNGA sample. The
simulations reproduce the observed global stellar population scaling relations
with positive correlations between galaxy mass and age/metallicity quite well
and also produce younger stellar populations in late-type in agreement with
observations. We do find interesting discrepancies, though, that can inform the
physics and further development of the simulations. Ages of spiral galaxies and
low-mass ellipticals are overestimated by about 2-4 Gyr. Radial metallicity
gradients are steeper in iMaNGA than in MaNGA, a discrepancy most prominent in
spiral and lenticular galaxies. Also, the observed steepening of metallicity
gradients with increasing galaxy mass is not well matched by the simulations.
We find that the theoretical radial profiles of surface mass density $\Sigma_*$
are steeper than in observations except for the most massive galaxies. In both
MaNGA and iMaNGA [Z/H] correlates with $\Sigma_*$, however, the simulations
systematically predict lower [Z/H] by almost a factor of 2 at any $\Sigma_*$.
Most interestingly, for galaxies with stellar mass $\log M_*\leq 10.80 M_\odot$
the MaNGA data reveal a positive correlation between galaxy radius and [Z/H] at
fixed $\Sigma_*$, which is not recovered in iMaNGA. Finally, the dependence on
environmental density is negligible in both the theoretical iMaNGA and the
observed MaNGA data. | Lorenza Nanni, Justus Neumann, Daniel Thomas, Claudia Maraston, James Trayford, Christopher C. Lovell, David R. Law, Renbin Yan, Yanping Chen | 2023-09-25T16:15:08Z | http://arxiv.org/abs/2309.14257v2 | iMaNGA: mock MaNGA galaxies based on IllustrisTNG and MaStar SSPs. - III. Stellar metallicity drivers in MaNGA and TNG50
###### Abstract
The iMaNGA project uses a forward-modelling approach to compare the predictions of cosmological simulations with observations from SDSS-IV/MaNGA. We investigate the dependency of age and metallicity radial gradients on galaxy morphology, stellar mass, stellar surface mass density (\(\Sigma_{*}\)), and environment. The key of our analysis is that observational biases affecting the interpretation of MaNGA data are emulated in the theoretical iMaNGA sample. The simulations reproduce the observed global stellar population scaling relations with positive correlations between galaxy mass and age/metallicity quite well and also produce younger stellar populations in late-type in agreement with observations. We do find interesting discrepancies, though, that can inform the physics and further development of the simulations. Ages of spiral galaxies and low-mass ellipticals are overestimated by about 2-4 Gyr. Radial metallicity gradients are steeper in iMaNGA than in MaNGA, a discrepancy most prominent in spiral and lenticular galaxies. Also, the observed steepening of metallicity gradients with increasing galaxy mass is not well matched by the simulations. We find that the theoretical radial profiles of surface mass density \(\Sigma_{*}\) are steeper than in observations except for the most massive galaxies. In both MaNGA and iMaNGA [Z/H] correlates with \(\Sigma_{*}\), however, the simulations systematically predict lower [Z/H] by almost a factor of 2 at any \(\Sigma_{*}\). Most interestingly, for galaxies with stellar mass \(\log M_{*}\leq 10.80M_{\odot}\) the MaNGA data reveal a _positive correlation_ between galaxy radius and [Z/H] at fixed \(\Sigma_{*}\), which is not recovered in iMaNGA. Finally, the dependence on environmental density is negligible in both the theoretical iMaNGA and the observed MaNGA data.
keywords: methods: numerical - Galaxy: evolution - Galaxy: formation - Galaxy: general - Galaxy: stellar content - Galaxy: structure - catalogues
## 1 Introduction
Current cosmological (magneto-)hydrodynamic simulations have the goal of modelling how galaxies form and evolve in the universe described as from the current cosmological theory and accounting for all the physical processes considered fundamental for galaxies (e.g. the formation of dense cold clouds, stellar formation and evolution, stellar feedback, chemical enrichment, mergers etc.; see, for example Taylor & Kobayashi, 2014; Schaye et al., 2014; Dolag, 2015; McAlpine et al., 2016; Kaviraj et al., 2017; Grand et al., 2017, 2019; Nelson et al., 2019; Dave et al., 2019; Villaescusa-Navarro et al., 2021; Feldmann et al., 2023). Their predictions must be tested to evaluate the strength of the current theories on which they are based. Modern galaxy surveys such as the Sloan Digital Sky Surveys (SDSS, York et al., 2000; Abazajian et al., 2003) and the James Webb Space Telescope Mission (McElwain et al., 2023, JWST) provide the basis for the direct comparison between the observed and the simulated universe. A powerful way to test the simulations is to virtually observe the simulated galaxies by producing synthetic data products. This forward modelling method has been followed, for example, in Tonini et al. (2010); Snyder et al. (2015); Torrey et al. (2015); Trayford et al. (2015); Bottrell et al. (2017); Trayford et al. (2017); Rodriguez-Gomez et al. (2019); Huertas-Company et al. (2019); Schulz et al. (2020). With this method, the simulated galaxies are brought into the _observational space_, i.e. synthetic spectra and images are generated.
This approach is at the core of the iMaNGA project (see Nanni et al., 2022, 2023, from now on Paper I and Paper II respectively). With the first two papers of this series, we presented our methodology to produce synthetic data products. We mimic observations by the survey Mapping Nearby Galaxies at Apache point observatory (MaNGA, Bundy et al., 2015), an Integral Field Spectroscopy (IFS) survey, by generating MaNGA-like datacubes. In Paper II we present a method
to generate a MaNGA-like sample, constructing the iMaNGA sample, applying the selection criteria for the MaNGA-Primary sample to the cosmological volume. For this project, we work with IllustrisTNG cosmological magnetohydrodynamic simulations (Nelson et al., 2019), in particular adopting TNG50, which is the simulation with the highest spatial and mass resolution within the IllustrisTNG project. However, the methodology presented in Paper I and Paper II can be applied to any other modern cosmological simulation (e.g. McAlpine et al., 2016; Kaviayi et al., 2017; Grand et al., 2019; Dave et al., 2019; Villaescusa-Navarro et al., 2021; Feldmann et al., 2023).
In the present paper, we present a systematic comparison between the theoretical iMaNGA and the observed MaNGA samples. We focus on the direct comparison with work published by our group in Goddard et al. (2017) (hereafter G17) and Neumann et al. (2021) (hereafter N21). G17 use an early release of the MaNGA galaxy sample to study stellar population properties, focussing on age and metallicity gradients as a function of galaxy mass, type, and environment. N21 investigate the distribution of stellar metallicity within and across galaxies. The paper exploits the complete MaNGA sample, and the stellar population parameters are obtained through full-spectra fitting using the code FIREFLY (Wilkinson et al., 2017). A similar analysis was previously conducted in Lian et al. (2018) on a smaller sample of MaNGA galaxies. The MaNGA sample allows the authors to explore the relation between stellar metallicity and stellar surface mass density both globally and locally. In particular, N21 investigate the relation between stellar metallicity in galaxies as a function of galactocentric radius, and the interplay among stellar metallicity, stellar mass, stellar surface mass density, and galactocentric radius. N21 show that the surface mass density mainly drives the stellar metallicity distribution within galaxies, whereas radial dependencies at fixed surface mass density are secondary.
In this paper, we analyse at the same time both the iMaNGA and the MaNGA sample, presenting the results for both catalogues in the same manner, so that the comparison can be as direct as possible.
The paper is structured as follows: in SS2 we present the data and models necessary to carry out our analysis. In SS3 we summarise both Paper I (SS3.1) and Paper II (SS3.2) after which we present the iMaNGA Secondary sample in SS3.3. Furthermore, we discuss the T-morphology and the inclinations for the iMaNGA galaxies (see SS3.4). In SS4 we present the global stellar mass-age and -metallicity relations (SS4.1); the stellar populations' age, metallicity, and stellar surface mass density radial profiles (SS4.2); the local relation between metallicity and stellar surface mass density (SS4.3); the local relation between the stellar surface mass density and the effective radius and its trend with stellar metallicity (SS4.3.3); and finally the metallicity gradients as a function of galaxy environment (SS4.4). We draw our conclusions in SS5.
## 2 Data & Models
Here we introduce the models and data used in this work. We briefly summarise the MaNGA survey and its main characteristics in SS2.1. We then give an overview of the IllustrisTNG simulation suites in SS2.2. The stellar population models used in this work are introduced in SS2.3. Finally, the FIREFLY MaNGA VAC produced in Neumann et al. (2022) - hereafter N22VAC - is briefly presented in SS2.4.
### The MaNGA galaxy survey
MaNGA (Mapping Nearby Galaxies at Apache point observatory, Bundy et al., 2015), which is part of the SDSS-IV survey (Blanton et al., 2017) and concluded its observations in August 2020, observed 10, 010 unique galaxies at a median redshift of \(z\sim 0.03\), providing the largest sample of Integral Field Spectroscopy (IFS) data to date (Abdurro'uf et al., 2022).
MaNGA combines the SDSS 2.5-meter telescope at Apache Point Observatory (Gunn et al., 2006) with the SDSS-BOSS spectrograph (Smee et al., 2013; Dawson et al., 2013); see Drory et al. (2015) for more details. This spectrograph has a wavelength range of 3,600 to 10,300 A and has an average spectral resolution of \(R\approx 1800\).
MaNGA has 5 different configurations of hexagonal-formatted fiber bundles, which vary in diameter from 12\({}^{\circ}\).5 (19 fibers) to 32\({}^{\circ}\).5 (127 fibers; see Table 2 in Bundy et al., 2015) to collect the light of galaxies up to 1.5R\({}_{\rm eff}\) for galaxies in the Primary Sample and up to 2.5R\({}_{\rm eff}\) for galaxies in the Secondary Sample, with a 2-to-1 split. The MaNGA sample selection is solely based on the galaxies' absolute i-band magnitude and redshift, with the final sample being characterised by an approximately flat distribution in the i-band magnitude and galaxy mass (for more information see Yan et al., 2019, and the discussion in Paper II).
### The IllustrisTNG simulation suite
Illustris (Vogelsberger et al., 2014; Genel et al., 2014; Sijacki et al., 2015) is a suite of large-scale cosmological hydrodynamic simulations of galaxy formation and evolution. This first project forms the basis for IllustrisTNG (Pillepich et al., 2018; Pillepich et al., 2019; Nelson et al., 2018, 2019; Nelson et al., 2019; Springel et al., 2018; Marinacci et al., 2018; Naiman et al., 2018). In the latter, the scientific goals are broader and new physics is introduced. Indeed, IllustrisTNG includes larger cosmological volumes (up to 300 Mpc instead of 100 Mpc), and higher-resolution runs with a mass resolution for baryonic matter up to 8.5 \(\times\) 10\({}^{4}\)M\({}_{\odot}\) instead of 1.6 \(\times\) 10\({}^{6}\)M\({}_{\odot}\). IllustrisTNG further includes physics of magnetic fields and dual-mode black hole feedback (Weinberger et al., 2017; Pillepich et al., 2018), which are both not included in Illustris.
Many fundamental physical processes acting on a wide range of spatial and temporal scales must be included in cosmological simulations to predict the formation and evolution of galaxies, in terms of their stellar and gas chemical composition, star formation history, morphology, interaction with the environment through inflows and outflows, etc. The spatial and mass resolution achieved by TNG50 is among the highest in cosmological hydrodynamic simulations. However, subgrid physics still plays a role because many astrophysical phenomena occur on scales below the resolution limits of the simulations, such as, for example, gas cooling, star formation, evolution and chemical enrichment, super massive black holes accretion and feedback, and magnetic fields (see the discussion in Pillepich et al., 2018, and reference therein).
IllustrisTNG simulates three physical box sizes, with cubic volumes of approximately 50 (TNG50), 100 (TNG100) and 300 (TNG300) Mpc side lengths and different resolutions, all assuming Planck cosmology from Ade et al. (2016)1. This is the cosmological framework also assumed in the analysis presented in this paper.
Footnote 1: i.e. \(\Lambda\)CDM cosmology background, with matter density parameter \(\Omega_{\rm m}=0.31\); dark energy density parameter \(\Omega_{\rm s}=0.69\); Hubble constant \(H_{0}=100h\)km/sMpc, with \(h=0.68\); matter power spectrum amplitude of \(\sigma_{8}=0.82\) and spectral index \(n_{s}=0.97\).
Since the goal of this project is to obtain a simulated sample of galaxies closely resembling the MaNGA catalogues in terms of galaxy selection criteria and observational characteristics, TNG50
is most appropriate. TNG50 matches the spatial resolution of the MaNGA datacubes with a pixel size of 0.5", i.e. a spatial sampling raging from \(\approx 100\) pc at \(z\approx 0.01\) to \(\approx 1.5\) kpc at \(z\approx 0.15\) (see SS2.1).
In Paper I we discuss in detail how we generate MaNGA data cubes from the TNG50 simulations (see SS3.1). The construction of the iMaNGA galaxy catalogue, in particular reproducing the flat distribution in AB i-band magnitude and mass, is described in detail in Paper II (see SS3.2).
### MaStar: SDSS-based Stellar Population Models
To model light emitted by the simulated galaxies, we adopt the Maraston et al. (2020) stellar population models, which are based on the \(\sim 60,000\) stellar spectra collected by the MaNGA stellar library MaStar (Yan et al., 2019; Abdurro'uf et al., 2021) and energetics and synthesis methods as in Maraston (2005); Maraston & Stromback (2011).
With its \(\sim 60,000\) stellar spectra, MaStar is the largest stellar library ever assembled to date. The spectra were collected with the MaNGA instrument described in SS2.1. Therefore, since the observational set-up is the same, these stellar spectra have the same wavelength range, spectral resolution and flux calibration as the MaNGA datacubes.
In this project, we adopt an extended version of the Maraston et al. (2020) models. This updated version covers a wider range of stellar ages, with the youngest populations considered being \(\sim 3\) Myr. Below this value, we adopt MappingsIII star-forming region models (Groves et al., 2008). In the MaStar models, 42 age and 9 metallicity values are provided. The metallicity range goes from \(-2.25\) to \(0.35\) dex. The Maraston et al. (2020) models also include 8 different values for the low-mass IMF slope, ranging between 0.3 and 3.8, with the Salpeter (1955)'s slope being 2.35. For more details, see Maraston et al. (2020) and Hill et al. (2021). In this project, we adopt these models, assuming the Kroupa (2002) IMF, both in the construction of the mock MaNGA datacubes and when running FIREFLY to recover the stellar populations' properties.
Adopting the MaStar models to light up the simulated galaxies, the iMaNGA datacubes have the same spectral properties as MaNGA spectra. Furthermore, by also using these models in the FIREFLY full spectral fitting procedure, we ensure the exclusion of any bias that would be caused by the adoption of different spectral models. With this approach, we aim to minimise the differences between the MaNGA and iMaNGA catalogues, so that those remaining are intrinsic to TNG50. In Paper II, we demonstrate how with this approach we recover the intrinsic information in TNG50 when running the full-spectral fitting analysis at the 1-\(\sigma\) level for the entire catalogue.
### FireFLY MaNGA VAC
N22VAC present the FIREFLY MaNGA VAC (Value Added Catalog)2. In this MaNGA VAC, the FIREFLY algorithm is adopted to obtain the stellar population properties for all 10,010 galaxies observed by MaNGA (SS2.1). The catalogue provides spatially resolved properties such as stellar age, stellar chemical composition, star formation rates, dust attenuation, and stellar and remnant masses. The VAC also provides some global properties, for example, central values, median values within a given aperture, and so on. Also, stellar age and metallicity gradients are made available. For the stellar population properties, both locally and globally defined, both mass-weighted and light-weighted quantities are provided.
Footnote 2: [https://www.sdss4.org/dr16/manga/manga-data/manga-FIREFLY-value-added-catalog/](https://www.sdss4.org/dr16/manga/manga-data/manga-FIREFLY-value-added-catalog/)
The results are obtained by running the full spectral fitting code FIREFLY using both the MaStar (Maraston et al., 2020) and the MILES (Sanchez-Blazquez et al., 2006; Maraston & Strombick, 2011) stellar population models. N22VAC discuss the differences between the results obtained by FIREFLY employing these different stellar population models, and also a comparison with other analyses of the MaNGA catalogue, such as the results obtained employing Pire3D (Sanchez et al., 2016; Sachez et al., 2022). In the present work we use the FIREFLY MaNGA VAC in its MaStar version when comparing iMaNGA with MaNGA.
## 3 Methodology
The methodology of the present work is introduced in our two previous papers of this series, which we will summarise in SS3.1 and SS3.2. In the present paper, we expand the catalogue presented in Paper II, adding TNG50 galaxies that fall into the MaNGA Secondary sample selection boundaries, adding 500 objects to the iMaNGA catalogue as described in SS3.3. Finally, we introduce our methodology to determine T-morphology and inclination for the iMaNGA galaxies in SS3.4 and SS3.4.2, respectively.
### PaperI: mock MaNGA galaxies
Here we briefly summarise our methodology to generate and analyse mock MaNGA galaxies from the TNG50 simulations. This process is adopted for all the galaxies in our current sample.
When selecting a simulated TNG50 galaxy for post-processing, the light is modelled as discussed in SS2.3. Emulating real MaNGA observations, the light is collected by a synthetic IFU instrument, with a pixel size of 0.5". At this stage, we adopt an FoV of 150" per side. In this way, we can collect all the light emitted by the simulated galaxies. The kinematics are incorporated in the synthetic spectra, based on the stellar and gas kinematics provided by TNG50. Sec. 3.2 of Paper I contains a detailed description of this set-up. The iMaStar code developed to perform these steps is publicly available3. To have a random viewing angle over the entire sample we consider the z-axis of the cosmological volume as our line-of-sight (LoS), since the galaxies are randomly distributed within the simulation. The (random) inclination of galaxies in the iMaNGA catalogue is discussed in detail in SS3.4.2.
Footnote 3: [https://github.com/lonanni/iMaNGA](https://github.com/lonanni/iMaNGA).
To include the effects of dust in the synthetic data cubes, we run radiative transfer simulations with SKIRT (Baes et al., 2011; Baes & Camps, 2015). These calculations are carried out twice at low spectral resolution for each galaxy, with and without dust included, in order to reconstruct the attenuation curve in each spaxel as the ratio between these two output datacubes as presented in Sec. 3.2.2 of Paper I. With this approach we avoid including Poisson noise in the spectra and significantly reduce computing time.
From the synthetic datacubes, we generate SDSS-like images (see Sec. 3.3 in Paper I). The Sersic 2D fitting code statmorph (Rodriguez-Gomez et al., 2019) is then used to derive galaxy morphology and the effective radius \(\rm R_{eff}\), which is essential for the implementation of proper MaNGA FoVs into our simulated datacubes.
In MaNGA, 5 different hexagonal-fiber-bundle configurations are used to collect light within 1.5 \(\rm R_{eff}\) (in the MaNGA Primary sample)
or within \(2.5\) R\({}_{\rm eff}\) (in the MaNGA Secondary sample). These FoVs are adopted in MaNGA to observe a galaxy with a given R\({}_{\rm eff}\). We further consider the MaNGA effective PSF and mimic the typical noise in MaNGA observations, which is spatially and wavelength-dependent. These steps are discussed in detail in Sec. 3.4 of Paper I. Thanks to this approach, the iMaNGA datacubes have the same characteristics as MaNGA datacubes in terms of spatial sampling, spatial resolution, spectral resolution, flux calibration, and noise.
Once we have the mock MaNGA datacubes, we follow the steps of the MaNGA DAP (Westfall et al., 2019) to analyse MaNGA galaxies. We employ the Voronoi algorithm by Cappellari & Copin (2003) with target S/N\({}_{g}>10\), and then run the full-spectral-fitting code rPXF (Cappellari, 2017) over the spectra in order to obtain the stellar kinematics, the gas kinematics, as well as the gas emission lines.
The full spectral fitting code FIREFLY (Wilkinson et al., 2017) is then used to derive the stellar population properties age and metallicity, as well as stellar mass, reddening and SFH. This is equivalent to what was done for observed MaNGA galaxies (see, for example Goddard et al., 2016; Goddard et al., 2017; Lian et al., 2018; Oyarzin et al., 2019, N21 and N22VAC).
### PaperII: the iMaNGA sample
Here we summarise the second Paper of this series. The focus of this paper is on the construction of the iMaNGA sample.
To build a MaNGA-Primary sample of galaxies in TNG50, we selected all galaxies in the magnitude and redshift range of the MaNGA survey. This initial sample includes all the galaxies in TNG50 within the MaNGA redshift range, i.e. \(0.01\leq z\leq 0.15\) (see SS2.1). We rejected objects with fewer than 10,000 stellar particles, in order to ensure that galaxies are sufficiently resolved (see, for example Schulz et al., 2020). As discussed in Paper II, there are 48,248 galaxies in TNG50 which satisfy these selection criteria.
The MaNGA-Primary sample is characterised by a smooth distribution in cosmological sampling. This is not the case in this initial sample, since TNG50 provides galaxies in redshift snapshots, hence with a discrete redshift distribution. Therefore, we randomly assign a redshift \(z_{\rm random}\) to each galaxy, \(z_{\rm random}\) being between the redshift of the galaxy's snapshot and the redshift of the next lower redshift snapshot. In this way, the redshift distribution is continuous. At this stage, we apply the MaNGA-Primary sample selection boundaries (see Fig. 3 in Paper II), obtaining the 'parent sample'. To obtain a flat distribution in i-band AB magnitude, we randomly extract galaxies from this sample with higher probability when characterised by least probable magnitude values. This step leads to a final sample of \(\sim 1,000\) galaxies. We refer the reader to the discussion in Sec. 4 in Paper II for more detail.
### Extending the iMaNGA sample
Next, we extend the iMaNGA sample selection to mimic the MaNGA Secondary Sample. As for the Primary MaNGA sample, for the latter, the selection criteria are uniquely based on galaxies' i-band AB magnitude and redshift, but selecting galaxies at higher redshift in order to increase the effective field of view of the IFU instrument.
Imposing the MaNGA Secondary sample selection criteria in TNG50 yields 889 galaxies in the 'parent sample'. From this 'parent sample' we generate the final sample with a flat i-band magnitude distribution following the procedure as described in Paper II. This final selection yields \(\sim 500\) galaxies, which exactly matches the ratio of 2:1 between the MaNGA Primary and Secondary Samples.
In Fig. 1, we present the distribution of the galaxies in the 'initial sample', 'parent sample', and iMaNGA Secondary sample, in terms of stellar mass (upper panel), half-mass-stellar radius (i.e. a proxy for the galaxy size; central panel), and the i-band magnitude (lower panel), mimicking Figs. 5-6 in Paper II for the iMaNGA Primary sample. It can be appreciated how approximately flat distributions in i-band magnitude and stellar mass are achieved.
We post-processed and analysed all the TNG50 galaxies in the iMaNGA Secondary sample, following the method presented in Paper I, and as done in Paper II for the iMaNGA Primary sample. Hereafter, with 'iMaNGA sample', we will refer to the combination of the iMaNGA Primary sample and Secondary sample, characterised by \(1,511\) galaxies.
Figure 1: The distribution of the TNG50 galaxies from the initial sample (hatched black histograms), the parent sample (grey histograms), and the iMaNGA Secondary sample (blue empty histograms) in stellar mass (upper panel), half-mass-stellar radius (central panel) and i-band AB magnitude (bottom panel).
All the unique mock MaNGA synthetic datacubes generated for this project are now publicly available on the IllustrisTNG website4.
Footnote 4: [https://www.tng-project.org/data/docs/specifications/](https://www.tng-project.org/data/docs/specifications/).
### Galaxy Morphology & Inclination
In this Section, we discuss how we analyse and define the galaxy morphology, the stellar surface mass density, and galaxy inclination in the iMaNGA sample, following the method in N21 for the MaNGA catalogue.
#### 3.4.1 Galaxy morphology
We use statmorph (see SS3.1) to derive galaxy morphologies and to calculate Sersic indices and Petrosian radii (Sersic, 1963; Sersic, 1968; Petrosian, 1976). To determine the inclination as described in the following section, ellipticity and T-morphology are required. To this end, we adopt the Petrosian 'radius' ellipticity. In N21 T-morphology is based on Dominguez Sanchez et al. (2022). For our iMaNGA sample, we visually inspected all galaxies, dividing them into 4 categories: ETG, S0, LTG and irregulars/merging (see also Fig. 8 in Paper II for examples of these different morphologies in the iMaNGA sample). These correspond to T values: -3, 0, 3, 10, respectively. See Appendix B for a comparison between the T-values and the Sersi index.
#### 3.4.2 Galaxy inclination
As briefly discussed in SS3.1, the simulated galaxies are observed by the mock IFU instrument with an LoS fixed to the z-axis of the cosmological volume. Therefore, observations of galaxies in the iMaNGA sample are characterised by random viewing angles. Here we discuss how the inclination of the galaxies is computed for the iMaNGA sample.
We calculate the inclination between the assumed LoS and the galaxy-spin axis provided by TNG50. The total spin of the galaxies is computed in TNG50 as the mass-weighted sum of the relative coordinate times the relative velocity of all member particles/cells. This definition of the inclination is, therefore, kinematics-dependent and also 'theoretical', being based on the intrinsic information in TNG50, as opposed to observational information. To compute this 'theoretical' inclination, we measure the inclination between the viewing angle, that is the z-axis of the cosmological volume, and the spin of each galaxy, i.e. \(J=[J_{x},J_{y},J_{z}]\). We will refer to this inclination as 'kinematic-dependent inclination' or 'theoretical inclination'.
In N21, instead, the inclination is computed from the galaxy's morphology, assuming
\[cos(i)=\sqrt{\frac{(q^{2}-q_{0}^{2})}{(1-q_{0}^{2})}}, \tag{1}\]
where \(i\) is the inclination, \(q_{0}\) is the intrinsic thickness of the galaxy, and \(q\) is the observed axial ratio of the projected spheroid (Hubble, 1926). For MaNGA galaxies, \(q\) is obtained from the elliptical Petrosian analysis (Wake et al., 2017). Assuming that the intrinsic axial ratios only vary with morphology and that galaxies seen face-on are perfectly circular, N21 find the relation
\[-\log q_{0}=0.316+0.049T, \tag{2}\]
where T is the T-value for the T-morphology (see Dominguez Sanchez et al., 2022). For more information, see N21 and the references therein.
To mimic this approach in the present work, we assume Eqs. 1-2, the T-morphology of the iMaNGA galaxies, and the Petrosian axial ratio as defined above. We will refer to this inclination as'morphology-dependent inclination'.
Fig. 2 shows galaxy inclinations for the following three morphological types: ellipticals (E), lenticular (SO), and spirals (LTG). The MaNGA and iMaNGA samples are shown as empty and filled histograms, respectively. For the iMaNGA sample, we show both the kinematics-dependent (hatched histograms) and the morphology-dependent (solid histograms) inclinations. It can be noticed that the theoretical inclinations and the morphology-dependent inclinations
Figure 2: The distribution of galaxies’ inclination in the iMaNGA and MaNGA galaxy catalogue, divided into 3 morphological types (i.e. E, SO, and LTG, in the first three panels) and for the entire sample (last panel). The inclinations for the MaNGA sample are computed as in N21 (empty histograms). For the iMaNGA sample, the inclinations are computed following the same method as in N21, i.e. based on the T-morphology (solid histograms), and from the kinematics in TNG50 (hatched histograms), as explained within this Section.
do show an overall agreement, but differences appear, in particular at higher inclinations for early-type galaxies.
The differences between 'theoretical' and'morphology-dependant' inclinations are not surprising. The latter, which is based on axis ratios, can only be approximate, as galaxies are not infinitely thin or perfectly round (see Ryden, 2004, and the discussion in N21). Therefore, N21 account for an intrinsic thickness; we refer the reader to the discussion therein.
The distributions of the morphological inclinations in MaNGA and iMaNGA agree well overall. However, we can see a higher galaxy density of edge-on galaxies in the iMaNGA sample compared to the MaNGA sample. This discrepancy is likely caused by the fact that we have not made any cuts based on inclination, galaxy size, or the presence of dust in iMaNGA, while such observational selection effects are expected to play a role in MaNGA. For instance, edge-on galaxies are more rarely observed in MaNGA because of obscuration effects (see the discussion in Wake et al., 2017; Abril-Melgarejo et al., 2021, and N21).
Fig. 3 reports the difference between the theoretical inclinations and the morphology-dependent inclinations for the iMaNGA sample. Overall, the inclinations based on kinematics appear slightly higher than those based on morphology. To emulate the observationally determined inclination in N21, we will adopt the morphology-dependent inclination for both iMaNGA and MaNGA in the analysis.
#### 3.4.3 Stellar surface mass density
The stellar surface mass density \(\Sigma_{*}\) is calculated by dividing the total stellar mass by the surface area. In the FIREFLY VAC (see N22VAC) the stellar masses are provided by FIREFLY in each Voronoi tassel. In N21, the area of the Voronoi tassels is corrected for projection effects. Therefore, the area is \(A=A_{\rm obs}sec(i)\), where \(A_{\rm obs}\) is the observed area. Consequently, the surface mass density corrected for projection effects is \(\Sigma_{*}=\Sigma_{*,\rm obs}cos(i)\). This is the quantity that we will show throughout the paper for both the MaNGA and the iMaNGA samples.
Fig. 4 illustrates the distribution of stellar surface mass density (left panels) and stellar mass (right panels), in both the iMaNGA and MaNGA samples, divided into morphological types (E, S0 and LTG, in the first three rows) and for the entire sample (last row). It can be noticed how the distributions closely resemble each other, with the iMaNGA sample characterised by lower stellar masses in each morphological bin and, therefore, slightly lower stellar surface mass density. The lack of massive spheroidal galaxies has already been noticed and discussed in Paper II (see discussion and references therein). For both catalogues, to enhance the comparison, we consider the total stellar mass as obtained by running FIREFLY (see Appendix A for a discussion of other definitions for the total stellar mass in these samples).
## 4 Results
In this Section, we present the results of the direct comparison between the iMaNGA and the MaNGA sample. We focus on the stellar population properties age and metallicity, in the three-dimensional parameter space of stellar mass, stellar surface mass density, and galaxy radius. For the MaNGA sample, we use the data provided by the FIREFLY VAC in its MaStar run (see SS2.4). The stellar population properties for the iMaNGA sample are calculated using the full spectral fitting code FIREFLY with the same settings as for the MaStar-based FIREFLY VAC (see SS3.1) to ensure consistency. We show light-weighted stellar age and stellar metallicity, if not noted differently. Figures from N21 are replicated, however, it is also important to note that N21 report findings obtained using the MILES SSP models (Sanchez-Blazquez et al., 2006) in FIREFLY. Furthermore, there are general differences in the way data are analysed in this work compared to N21. The data analysis followed here is explained in full within the Sections. As in N21, we exclude galaxies that present signs of mergers, with an inclination angle \(>\) 80 degrees, with fewer than 30 Voronoi tassels, and reject Voronoi tassels with SNR\(<\)8.
### Stellar population scaling relations
Fig. 5 shows the global stellar mass-age and mass-metallicity relations in early-type galaxies (ETGs) and late-type galaxies (LTGs) in the iMaNGA and MaNGA sample, mimicking Fig. 4 of N21. Here, we report both light-weighted (on the left) and mass-weighted (on the right) quantities. All galaxies in the iMaNGA sample are shown as small dots in the background. The median of stellar age and metallicity within a 3" diameter aperture is measured in bins of mass of 0.25 \(\log{\rm M_{*}}\) dex. The error bars illustrate the standard deviation in each mass bin, to better illustrate the scatter of the distributions of galaxies in the samples.
The observed positive mass-age relations of the MaNGA sample for both ETGs and LTGs are well matched by the iMaNGA sample, with most massive galaxies being older. The difference between galaxy types is generally reproduced in iMaNGA with late-type galaxies having younger stellar population ages. However, the difference is predicted to be slightly smaller by the simulations. The only significant discrepancy can be seen for light-weighted ages of lower-mass late-type galaxies with the simulated galaxies being up to 2 Gyrs older than the corresponding observed ones.
The observed global stellar mass-metallicity relations, referred to as MZR in literature, are generally well reproduced in iMaNGA with galaxy metallicity increasing with galaxy mass for both ETGs and LTGs. The simulations further reproduce well the fact that metallicities are slightly lower in LTGs than in ETGs, even though the observed larger discrepancy at low masses is less pronounced in iMaNGA. We notice that the simulations slightly underestimate the metallicities of the most massive galaxies by about 0.1 dex for ETGs and 0.2 dex for LTGs.
Both nebular and stellar MZR have been investigated many times in the literature for observed galaxies (for example, Tremonti et al., 2004;
Figure 3: The distribution of the differences between the galaxy inclinations considering the morphology-dependent and the kinematics-dependent determinations of galaxies’ inclinations.
Gallazzi et al., 2005). In recent years, the global MZR has also been studied in simulations. For example, Torrey et al. (2019) investigated the gas-phase MZR in TNG100, finding good agreement with the observation looking at galaxies with a total stellar mass \(>10^{9}M_{\odot}\) and redshift between 0 and 2. Similarly, De Rossi et al. (2017) quantified the global gas and stellar MZR in the EAGLE simulations. These relations agree well with the observational findings up to redshift 3. Furthermore, Cook et al. (2016) produced the stellar MZR for simulated galaxies in Illustris, in particular, focussing on ETGs with stellar mass \(>10^{10}M_{\odot}\) finding good agreement with observations with the shape, but presenting overall lower metallicity, as found in this analysis. In these works, data in the simulations are utilised without forward modelling of the simulated galaxies to bring them into the observational plane. These trends are also investigated in Nelson et al. (2018), where galaxies in TNG100 and TNG300 at redshift 0.1 are selected and compared with observations at the same redshift (Gallazzi et al., 2005, in particular). For the stellar MZR in particular, SDSS-like spectra were generated for the comparison, finding an offset in the metallicity trends compared to observations of up to 0.5 dex at lower stellar mass. Therefore, while in this work excellent agreement for the galaxy colours was found between observations and simulations, the comparison between stellar ages
Figure 4: The distribution of galaxies’ stellar surface mass density \(\Sigma_{*}\) (left panels) and total stellar mass (right panels) in the iMaNGA and MaNGA catalogues. The distributions are divided into 3 morphological types (i.e. E, SO, and S, in the first three panels) and the entire sample (bottom panels). The stellar surface mass density \(\Sigma_{*}\) and stellar mass, for both samples, are computed for each galaxy by running the full-spectral-fitting code FIREFLY on each Voronoi tassel. These data from the MaNGA sample are retrieved from the FIREFLY VAC, see N22VAC. In the bottom right panel, the vertical dotted lines represent the extremes of the bins in stellar mass that are used in the following analysis; see Table 1.
and metallicity showed some tensions, concluding that this more direct comparison is somehow more difficult because of the necessity to bring the simulated galaxies into the observational plane in a rigorous manner.
### Radial profiles
In this section, we investigate the radial profiles of stellar metallicity, age, and stellar surface mass density. Figs. 6-7 show the radial profiles of age and metallicity in different mass-morphology bins, respectively. Fig. 8 shows the equivalent for the stellar surface mass density. The shadows in each panel are the iMaNGA galaxy density distribution, calculated with a Gaussian kernel density estimator. Median values in bins of 0.1 R\({}_{\textrm{eff}}\) are shown for both iMaNGA and MaNGA (see labels). The error bars represent the standard error on the median, defined as:
\[\sigma_{\textrm{err}}=\frac{\pi}{2}\frac{\sigma}{\sqrt{N}}, \tag{3}\]
where \(\sigma\) is the standard deviation in the bin and \(N\) is the number of data that populate the bin. Only light-weighted quantities are used. The gradients reported in the upper left corner of each panel are calculated as discussed in Sec. 4.1.2 in Paper I. Gradients are computed considering all data up to 1.5R\({}_{\textrm{eff}}\) and the linear regression lines are reported for both catalogues up to 2.5R\({}_{\textrm{eff}}\).
In Table 1 we list the characteristics of the mass-morphology bins used throughout the analysis presented in this paper. In particular, we report the number of galaxies and their average stellar mass for each mass-morphology bin.
#### 4.2.1 Stellar metallicity
The radial profiles in stellar metallicity are shown in Fig. 6. We can see that, overall, the iMaNGA gradients are steeper than the corresponding MaNGA ones. As also noted in Fig. 5, the MaNGA sample is also populated by more metal-rich stellar populations overall. It is striking that overall the metallicity distribution between iMaNGA and MaNGA agree reasonably well for elliptical galaxies, except for the lowest-mass bin. The picture is different for lenticular and spiral galaxies: while the central values are in reasonable agreement (see also Fig. 5), simulations predict significantly steeper radial metallicity gradients in most mass bins. This implies that metallicities around the effective radius and beyond are considerably underestimated in the simulations by up to 0.2 dex.
For both catalogues we can observe that along the columns, i.e. at fixed stellar mass, the metallicity increases going from spirals to ellipticals; looking along the rows, i.e. at fixed morphology, the metallicity increases going towards higher stellar mass.
An important result found with MaNGA data is that the generally negative metallicity gradients significantly steepen with increasing galaxy mass with flat and potentially positive gradients in the lowest-mass galaxies at around \(\log\textrm{M}_{*}\sim 9\textrm{M}_{\odot}\)(see also Goddard et al., 2016; Goddard et al., 2017, and N21). In other words, in MaNGA, there is a break in the negative trend of the [Z/H] with the stellar mass for low-mass galaxies. In particular, as noted in Goddard et al. (2017), the dependence on the mass is stronger for spiral galaxies.
Looking at elliptical galaxies in iMaNGA, as also observed in Cook et al. (2016), the trend with the stellar mass is absent: indeed, the bin at the lowest stellar mass is dominated by the strongest negative radial trends. Looking instead at lenticulars and spirals, what is seen in the literature is observed: moving toward higher stellar masses, the gradients are steeper, and this trend is stronger for spiral galaxies.
We will further discuss the relation between the metallicity gradients and the galaxy stellar mass in SS4.4.
Figure 5: The global stellar mass-age (top panels) and -metallicity relation (bottom panels) separated by morphology, for the MaNGA and MaNGA samples. The left panels show light-weighted quantities, and the right panels show mass-weighted ones. Age and metallicity are averaged within a central area of 3” diameter. The scatter points display each galaxy in the iMaNGA sample and the line plot shows the median age and metallicity across mass bins of 0.25 dex width, for the iMaNGA (solid diamonds) and MaNGA (empty circles) galaxies. The E+S0 galaxies are represented in red, while the LTGs are represented in blue. The error bars illustrate the standard deviation.
#### 4.2.2 Stellar age
Fig. 7 shows the age radial profiles. As for Fig. 5, iMaNGA is characterised by older stellar populations overall compared to MaNGA. While age gradients in MaNGA are only negative or flat, iMaNGA presents negative or flat gradients, with the exception of a positive gradient for lenticular galaxies with stellar mass between 8.5 and 9.5 \(M_{\odot}\). Cook et al. (2016) also observe not only negative gradients for the stellar age in Illustris for ETGs.
The comparison for age provides quite a different picture than the comparison for metallicity. While simulations and observations agree quite well for metallicity in elliptical galaxies, as discussed above, there is a stark discrepancy for age. Indeed, except in the highest mass bin, the simulations predict higher light-weighted stellar ages at all radii in elliptical galaxies. The discrepancy is as high as 4 Gyr for the lowest mass ellipticals. However, the age gradients are consistent: both samples present flat age gradients for low- and intermediate-mass galaxies, and negative gradients at \(\geq 10M_{\odot}\).
A similar pattern can be seen for lenticular galaxies, although slightly less pronounced. The largest discrepancy between simulation and observation can again be seen in spiral galaxies. iMaNGA galaxies exhibit ages systematically higher by about 2.5 Gyr for all galaxy masses, at all radii. The discrepancy tends to worsen to larger radii, with the simulations predicting slightly flatter age gradients than what is observed in MaNGA. This result mirrors the metallicity profiles in spiral galaxies, where we see steeper gradients in the simulations compared to observations as discussed above.
The gradients here discussed consider light-weighted quantities. Mass-weighted results are presented in appendix C. The overall results remain unchanged for mass-weighted quantities. In appendix C we also discuss our ability to recover the 'intrinsic' TNG50 information, and the 'intrinsic' gradients, in the mass-morphology plane, demonstrating in this way how the'recovered' gradients are compatible with the 'intrinsic' ones, i.e. the differences between iMaNGA and MaNGA are not generated by the forward-modelling approach adopted but are intrinsic to TNG50.
#### 4.2.3 Stellar surface mass density
Fig. 8 shows the radial profile of the stellar surface mass density, \(\Sigma_{*}\). \(\Sigma_{*}\) is computed for both samples considering the stellar mass as outputted by FIREFLY and corrected for the projection effect considering the morphology-dependent determination of the inclination of the galaxies (see SS3.4.2). The gradients for the stellar surface mass
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \cline{3-6} \multicolumn{1}{c}{} & & \multicolumn{1}{c|}{8.50 \(\leq\) log \(M_{*}/M_{\odot}<\) 9.50} & 9.50 \(\leq\) log \(M_{*}/M_{\odot}<\) 10.0 & 10.0 \(\leq\) log \(M_{*}/M_{\odot}<\) 10.50 & 10.50 \(\leq\) log \(M_{*}/M_{\odot}<\) 11.50 \\ \hline \multirow{2}{*}{E} & \(|\textbf{iMaNGA}|\) & \(N_{\textbf{gal}}=7\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.14 & \(N_{\textbf{gal}}=17\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.76 & \(N_{\textbf{gal}}=24\aas@@fstack{\circ}<M_{*}\Rightarrow\)10.29 & \(N_{\textbf{gal}}=26\aas@@fstack{\circ}<M_{*}\Rightarrow\) 10.78 \\ \cline{2-6} & MaNGA & \(N_{\textbf{gal}}=44\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.24 & \(N_{\textbf{gal}}=11\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.98 & \(N_{\textbf{gal}}=253\aas@@fstack{\circ}<M_{*}\Rightarrow\) 10.28 & \(N_{\textbf{gal}}=1947\aas@@fstack{\circ}<M_{*}\Rightarrow\) 10.8 \\ \hline \multirow{2}{*}{S0} & \(|\textbf{iMaNGA}|\) & \(N_{\textbf{gal}}=122\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.06 & \(N_{\textbf{gal}}=40\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.75 & \(N_{\textbf{gal}}=36\aas@@fstack{\circ}<M_{*}\Rightarrow\)10.26 & \(N_{\textbf{gal}}=39\aas@@fstack{\circ}<M_{*}\Rightarrow\)10.89 \\ \cline{2-6} & MaNGA & \(N_{\textbf{gal}}=64\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.22 & \(N_{\textbf{gal}}=167\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.77 & \(N_{\textbf{gal}}=279\aas@@fstack{\circ}<M_{*}\Rightarrow\) 10.27 & \(N_{\textbf{gal}}=363\aas@@fstack{\circ}<M_{*}\Rightarrow\) =10.86 \\ \hline \multirow{2}{*}{LTG} & \(|\textbf{iMaNGA}|\) & \(N_{\textbf{gal}}=122\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.22 & \(N_{\textbf{gal}}=172\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.78 & \(N_{\textbf{gal}}=180\aas@@fstack{\circ}<M_{*}\Rightarrow\)10.24 & \(N_{\textbf{gal}}=169\aas@@fstack{\circ}<M_{*}\Rightarrow\)10.81 \\ \cline{2-6} & MaNGA & \(N_{\textbf{gal}}=1293\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.18 & \(N_{\textbf{gal}}=1128\aas@@fstack{\circ}<M_{*}\Rightarrow\)9.74 & \(N_{\textbf{gal}}=1171\aas@@fstack{\circ}<M_{*}\Rightarrow\)30.24 & \(N_{\textbf{gal}}=1074\aas@@fstack{\circ}<M_{*}\Rightarrow\)30.28 \\ \hline \end{tabular}
\end{table}
Table 1: The number of galaxies and the mean stellar mass in each bin in morphology and total stellar mass for both the MaNGA and iMaNGA galaxy catalogue. These bins constitute the mass-morphology plane adopted throughout the paper.
Figure 6: Radial metallicity profiles, dividing the iMaNGA and MaNGA sample in stellar mass (columns) and morphology (rows) bins; see Table 1. [Z/H] is recovered with FIREFLY for both samples in the same manner. In each panel, we show the median [Z/H] in 0.1 R\({}_{\textrm{eff}}\) width bins for iMaNGA (pink diamonds) and MaNGA (orange circles), considering all spaxels up to 2.5 R\({}_{\textrm{eff}}\). The error bars represent the standard error on the median, see Eq. 3. Linear regressions are presented up to 2.5 R\({}_{\textrm{eff}}\), computed on data up to 1.5 R\({}_{\textrm{eff}}\) (solid violet line for iMaNGA, orange dashed lines for MaNGA). Gradients are reported in the top-left corner of each panel for both catalogues. In the background of each panel, we show the distribution of the galaxies in the iMaNGA sample, calculated with a Gaussian kernel density estimator.
density are in reasonable agreement for spiral galaxies, both in terms of shape and normalisation. Instead, iMaNGA predicts steeper \(\Sigma_{*}\) gradients in elliptical and lenticular galaxies. The mass-dependence observed in MaNGA for these galaxy types, a steepening of the negative \(\Sigma_{*}\) gradient with increasing mass, is not entirely recovered by the simulations.
This result is partially consistent with the findings of Cannarozzo et al. (2022) that compare ETGs drawn from TNG100 with MaNGA observations and find agreement between the \(\Sigma_{*}\) profiles of the simulated and observational data.
As noted in N21, since both the stellar metallicity and the stellar surface mass density present negative radial trends, a relation within these two quantities must exist locally. We have found these two quantities to be characterised exclusively by negative trends in
Figure 8: \(\Sigma_{*}\)-radius trends in the MaNGA and iMaNGA sample, in the morphology-stellar mass plane (see Table 1). \(\Sigma_{*}\) is computed from the stellar mass recovered with FIREFLY and considering the inclination from the T-morphology for both samples, using the FIREFLY VAC dataset (see N22VAC) for the MaNGA galaxies. In each panel, we show the median \(\Sigma_{*}\) in 0.1 R\({}_{\rm eff}\) width bins, for iMaNGA (pink diamonds) and MaNGA (orange circles) galaxies, up to 2.5 \(R_{\rm eff}\). The error bars represent the standard error on the median, see Eq. 3. The linear regressions to the data up to 1.5 R\({}_{\rm eff}\) are presented (solid violet line for iMaNGA, orange dotted lines for MaNGA). The gradients are reported on the top left corner of each panel for both iMaNGA and the MaNGA galaxies. In the background of each panel, we show the distribution of the galaxies in the iMaNGA sample, calculated with a Gaussian kernel density estimator.
Figure 7: As Fig. 6, this time considering the stellar age.
the iMaNGA catalogue. In MaNGA instead, flatter radial trends for the metallicity are found for low- and intermediate-mass galaxies. Following N21, we will therefore now investigate the interplay between these two quantities to shed light on the local drivers of stellar metallicity.
### Dependence on surface mass density and radius
N21 investigate the spatially-resolved stellar surface mass density-metallicity relation for MaNGA galaxies. Here, we explore this relation in the iMaNGA sample following the analysis steps of N21 and directly compare with MaNGA.
#### 4.3.1 Metallicity as a function of surface mass density
Fig. 9 shows the \(\Sigma_{*}\)-metallicity relation for both iMaNGA and MaNGA galaxies. All spaxels within 3\(R_{\rm eff}\) with SNR\(>8\) are included. The contours indicate the 20, 40, 60, 80 percentiles for the iMaNGA sample (dashed white lines) and for the MaNGA sample (solid orange lines). We further show the median stellar population metallicity in 0.1 dex bins in \(\log\Sigma_{*}\) for both samples, as well as the linear regression lines (see legend). The corresponding equations are in the upper left corner. All galaxies in both samples are considered, combining all morphological types and all stellar masses (see Table 1).
In both MaNGA and iMaNGA we find a clear positive correlation between \(\Sigma_{*}\) and [Z/H]. Hence, surface mass density is identified as a significant driver of stellar metallicity in MaNGA, and this is well reproduced in the simulations. The slopes of these relationships are comparable, the observed relation being slightly steeper. Most strikingly, however, there is a clear offset between these two relationships. The simulations systematically predict lower stellar metallicities by almost a factor 2 ( 0.25 dex) across all surface mass densities.
#### 4.3.2 Metallicity as a function of surface mass density in the morphology-mass plane
In Fig. 10 we show the \(\Sigma_{*}\)-[Z/H] relation split by galaxy mass and morphology for both iMaNGA and MaNGA. The grid shows the global mass-morphology plane (columns for the stellar mass, rows for the galaxy morphology as described in Table table 1). In particular, \(\Sigma_{*}\) on the x-axis and the stellar metallicity [Z/H] on the y-axis are computed for both samples on the basis of the FIREFLY MaStar run (see N22VAC).
The positive correlation between surface mass density and metallicity is again apparent for both simulations and observations. As already shown in Fig. 9, we can further see that the metallicities in iMaNGA tend to be lower than in MaNGA across the full range of surface mass density. However, Fig. 10 provides us with more information on secondary dependencies on the mass and morphology of galaxies.
Most interestingly, the discrepancy between simulations and observations is strongest for late-type galaxies (up to \(\sim 0.25\) dex), a trend we have already noticed from the radial metallicity profiles presented in Fig. 6. MaNGA and iMaNGA instead agree well for elliptical galaxies, and lenticular galaxies seem to sit in between these two extremes.
At a further level of detail, we notice that metallicities are consistent at the highest surface mass densities for all galaxy morphologies. Furthermore, both samples present stronger positive relations between \(\Sigma_{*}\) and [Z/H] at higher mass (see the Pearson correlation coefficient reported in the figure). However, the positive correlation tends to be stronger in the simulations than in the observations, especially in the lowest stellar mass bin and in spiral galaxies at any mass. There is no particular dependence on the galaxy mass, except that the \(\Sigma_{*}\)-[Z/H] relation of lower-mass galaxies appears to turn around in MaNGA. Indeed, in MaNGA, metallicity increases again toward the lowest \(\Sigma_{*}\) values for galaxies with \(M_{*}<10^{10}M_{\odot}\) and this effect is not displayed by iMaNGA. In other words, this break of the relation at the low stellar mass bins (\(M_{*}<10^{10}M_{\odot}\)) is absent in iMaNGA. This rise is difficult to interpret and may be driven by radial effects rather than surface mass density - see the discussion in N21 on additional radial-dependent drivers of metallicity in low-mass galaxies in MaNGA. The next section will shed more light on this question, where both parameters are discussed simultaneously.
#### 4.3.3 The radius-surface mass density plane
From the analysis presented so far, we know that metallicity correlates with both surface mass density and galactocentric radius. However, the surface mass density also correlates with the galactocentric radius. Hence, the true drivers of local metallicity trends within galaxies can only be identified by analysing both parameters simultaneously.
Interestingly, N21 find in MaNGA that metallicity predominantly depends on the stellar surface mass density locally, with a strong positive correlation between \(\Sigma_{*}\) and [Z/H] at any fixed radius. At fixed surface mass density, instead, no radial dependence is found in massive galaxies, while an interesting secondary dependence is detected in galaxies with stellar mass \(\leq 10.80M_{\odot}\): metallicity _increases_ with increasing radius. The implication of this result is that the negative correlation found previously between metallicity and
Figure 9: Local \(\Sigma_{*}\)-[Z/H] relation for MaNGA and iMaNGA galaxies, considering together all morphology and mass bins in Table 1. We present the density plot of all spaxels in the iMaNGA sample up to 3 \(\rm R_{eff}\), smoothing the data with a Gaussian kernel. \(\Sigma_{*}\) is corrected for the projection effect by assuming the “morphology-dependent inclinations” for both catalogues. The contour lines enclose 20, 40, 60, and 80 per cent of the data for the MaNGA sample (orange lines) and the iMaNGA sample (white lines). The median of [Z/H] in 0.1 dex width bins in \(\log\Sigma_{*}\) is represented by violet diamonds for iMaNGA and with orange circles for MaNGA; the error bars (reported in the same colours) represent the standard error on the median. The linear regressions are represented (violet for the iMaNGA sample, and orange for the MaNGA galaxies). The gradients are reported in the top left corner of the panels.
radius is actually driven by the correlation between radius and surface mass density. In the following, we repeat this particular analysis with iMaNGA to test whether the TNG50 simulations reproduce this pattern observed with MaNGA.
Fig. 11 shows the radial \(\Sigma_{*}\) profiles of the iMaNGA sample, colour-coded by median stellar metallicity. The figure mimics Fig. 10 in N21. Again, we split by galaxy mass (columns) and morphology (rows). As in N21, the LOESS algorithm by (Cappellari et al., 2013) is adopted to better illustrate the underlying trends (see N21 for more details). For each mass-morphology bin, as in N21, we also report the minimum and maximum stellar metallicity [Z/H].
As in N21 for MaNGA galaxies, here we find that, at any given stellar mass and morphology, the metallicity increases with the surface mass density at almost any galactocentric distance. This is expected for the iMaNGA sample given the results presented in Figs. 6 and 8, and this result is consistent with what is observed in MaNGA.
As already mentioned, N21 find a constant metallicity or an _increase in metallicity with increasing radius_ at a fixed stellar surface mass density for almost any morphology and any total stellar mass. Specifically, this trend is seen for low- and intermediate-mass galaxies, and it breaks for high-mass galaxies, in particular at the low \(\Sigma_{*}\) regime. This is not seen in the iMaNGA data, as demonstrated by Fig. 11. At fixed stellar surface mass density at any point in the mass-morphology plane, _metallicity decreases (or remains constant) with increasing radius_. This anti-correlation is strongest in spiral galaxies and is in stark contrast to what is observed with MaNGA.
The conclusion in N21 is that metallicity is globally driven by galaxy mass and morphology, and locally by surface mass density. This is also what we see in the iMaNGA sample. Indeed, in both samples, we globally observe higher metallicity for more massive galaxies and for ETGs, and a local correlation between the surface mass density and the stellar metallicity. However, in both MaNGA and iMaNGA, there is evidence for galaxy radius being an additional, secondary local driver of metallicity. Interestingly, observations and simulations show opposite trends, though, with metallicity increasing with radius in MaNGA and decreasing with radius in iMaNGA at fixed surface mass density.
Furthermore, for low- and intermediate-mass galaxies in MaNGA, the [Z/H] radial profiles are flatter or even positive, while the \(\Sigma_{*}\) radial profiles have steep negative slopes over the entire morphology-mass plane (see Figs.6 and 8 and the discussion thereby).
### Dependence on galaxy mass and environment
Goddard et al. (2016) Goddard et al. (2017) investigate the correlations between radial metallicity gradients, galaxy mass and environmental density. Here, we repeat the analysis for both the MaNGA catalogue and the iMaNGA sample. Total stellar mass is adopted from FIREFLY, the metallicity gradients are based on light-weighted metallicities measured within 1.5 _R_e.
For the iMaNGA sample, we consider the environment as defined in Paper I (see Sec.3.1), and the gradients calculated in SS4.2. For MaNGA, we compute the gradients in the same manner, using the information provided by the FIREFLY VAC. To associate an environment to MaNGA galaxies we make use of the Galaxy Environment for MaNGA VAC5. The GEMA VAC provides galaxy environmental densities based on the N-th nearest neighbour method for 3287 MaNGA galaxies. We use this sub-sample of MaNGA galaxies for the following analysis. The division between ETGs and LTGs in this sample is as in SS4.1, so using the definition adopted by N21.
Footnote 5: GEMA VAC [https://www.sdss4.org/dr15/data_access/value-added-catalogs/](https://www.sdss4.org/dr15/data_access/value-added-catalogs/)
Fig. 12 shows metallicity gradient as a function of galaxy mass for different environmental densities. Early-type galaxies are shown in
Figure 10: \(\Sigma_{*}\)-[Z/H] trends in the MaNGA and iMaNGA sample, in the morphology-stellar mass plane (see Table 1). \(\Sigma_{*}\) is computed as discussed in §3.4.2, considering the correction for the morphological-dependent inclination for the iMaNGA sample. In each panel, we show the median stellar metallicity in 0.1 \(\log\Sigma_{*}\) dex width bins, for iMaNGA (pink diamonds) and MaNGA (orange circles) galaxies, considering all the spaxels up to 3 \(R_{\rm eff}\). In the background, we report the iMaNGA galaxy density distribution, calculated with the Gaussian kernel density estimator. The error bars represent the standard error on the median, see Eq. 3. In each panel, we report the Pearson coefficient in the upper-left corner for both the iMaNGA and MaNGA galaxies.
the top row, late-type galaxies in the bottom row. The iMaNGA and MaNGA samples are shown by the left-hand and right-hand panels, respectively.
#### 4.4.1 Galaxy mass
The figure demonstrates that the stellar metallicity gradients in early-type galaxies are systematically lower in the iMaNGA sample compared to the MaNGA sample (as already noted in Fig. 6). Most interestingly, the MaNGA data show a significant negative correlation between galaxy mass and metallicity gradient. Metallicity gradients in early-type galaxies are positive in low-mass galaxies and progressively steepen with increasing galaxy mass leading to the well-known negative gradients in intermediate- and high-mass galaxies. This pattern is not recovered by the simulations. The early-type galaxies in iMaNGA show no correlation between metallicity gradient and galaxy mass, and metallicity gradients are negative at all masses (see the Pearson coefficients reported in the figure).
This behaviour is a further manifestation of the role of surface mass density as principle local driver of stellar metallicity. The radial \(\Sigma_{*}\) gradients presented in Fig. 11 steepen with increasing galaxy mass in MaNGA, but remain constant in iMaNGA. This discrepancy leads to the different mass dependencies of the metallicity gradients in MaNGA and iMaNGA.
The picture is somewhat different for late-type galaxies (bottom row in the figure). The metallicity gradients are again slightly steeper in iMaNGA than in MaNGA, but the steepening of the gradient with increasing galaxy mass observed in MaNGA is recovered by the simulations. The trend is, however, stronger in the observations. This is consistent with Fig. 11, in which we show that a steepening of the \(\Sigma_{*}\) gradients with increasing mass is seen in both MaNGA and iMaNGA. Finally, it is worth noting that the stellar metallicity gradients are negative across all galaxy masses in iMaNGA, but positive in low-mass galaxies in MaNGA.
#### 4.4.2 Environmental density
In Fig. 12 we colour code the galaxies in the stellar mass-gradient plane according to their environment and report the gradient distributions divided into the 4 different environments. The labels show the median of the gradient distributions \(\mu\). We can see that there is no sign of any significant dependency on the environment in both MaNGA and iMaNGA for both morphological types.
This is in line with the study by Goddard et al. (2016) and Goddard et al. (2017), albeit based on a smaller sample of MaNGA galaxies. It is interesting that the observed lack of environmental dependence is replicated by the iMaNGA sample. However, the volume of the TNG50 simulations is relatively small, and it will be important to
Figure 11: \(\Sigma_{*}\)-radius relation, colour-coded by median [Z/H] for all the spaxels in the iMaNGA sample within \(3R_{\rm eff}\). The grid shows the global mass–morphology plane (columns for the stellar mass, rows for the morphology as in previous Figures and Table 1). The number of galaxies in each mass-morphology panel is reported. We use individual colour bar limits in each panel to highlight subtle trends. The minimum and maximum of each colour bar are shown in the lower right-hand corner of the corresponding panel. The data are smoothed using the LOESS algorithm.
repeat this test with future simulations at the resolution of TNG50 but for larger cosmological volumes.
## 5 Discussion and Conclusion
We conduct a statistically and methodologically significant comparison between the MaNGA survey and the cosmological simulations TNG50. To this end we employ a forward-modelling approach to generate a mock MaNGA sample from IllustrisTNG50, called iMaNGA, the characteristics of which are as close as possible to the observed MaNGA catalogue. In the first paper of the iMaNGA project (Paper I), we introduce our method to generate mock SDSS-IV/MaNGA integral-field spectroscopic galaxy observations from TNG50. In Paper II we present the construction of the iMaNGA sample (see SS3.3 and SS3.2), which we extend in the present work to include the selection criteria of the MaNGA Secondary Sample yielding a final catalogue of 1,500 mock MaNGA galaxies.
Following the sample selection from TNG50 we apply post-processing methods to transfer the theoretical galaxies from the simulated space into the MaNGA observational plane (see Paper I, summarised in SS3.2). The resulting MaNGA-like datacubes are then analysed through full spectral fitting with the codes pPXF and Firefly adopting MaStar stellar population models following the approach of the MaNGA VAC by N22.
The key of our analysis is that observational biases plaguing the interpretation of MaNGA data are emulated in the theoretical iMaNGA sample. Focusing on stellar population properties, we carry out the same analysis of the MaNGA and the iMaNGA samples. The scientific analysis discussed here follows earlier studies of MaNGA galaxies by our group presented in G17 and N21. In particular with the present work we investigate the interplay between galaxy morphology, stellar mass, stellar metallicity, stellar surface mass density, galactocentric distance, and environmental density. In the following, we discuss the main findings of this paper.
### Stellar population scaling relations
Looking at the global mass-metallicity and -age relations in iMaNGA (SS4.1), we show that galaxies in TNG50 recover the global trends observed in MaNGA. Indeed, ETGs are generally populated by older and more metal-rich stellar populations compared to LTGs in both samples. Both stellar age and metallicity increase with stellar mass, with iMaNGA and MaNGA following similar trends. The only significant discrepancy can be seen for light-weighted ages of lower-mass late-type galaxies with the simulations overestimating ages by about 2 Gyr. Furthermore, simulations slightly underestimate the metallicities of the most massive galaxies by about 0.1 dex. Moreover, in iMaNGA the difference between the trends followed by ETGs and LTGs is more subtle.
### Radial profiles
In SS4.2 we present the radial profiles for stellar age, stellar metallicity, and stellar surface mass density. We further calculate the gradients in the mass-morphology plane, as defined in Table 1, for all spaxels in the samples up to \(1.5R_{\rm eff}\).
Overall, stellar metallicities are lower and metallicity gradients are steeper in iMaNGA compared to observations. We note that iMaNGA and MaNGA agree well for elliptical galaxies (except for the lowest-mass bin), with iMaNGA reproducing well the metallicity distribution in this type of galaxy. The picture is different for lenticular and spiral galaxies, for which the simulations predict significantly steeper radial metallicity gradients at any mass bin. As already found in N21, the metallicity profiles at low- and intermediate-masses
Figure 12: Light-weighted stellar population metallicity gradients as a function of the galaxy stellar mass, colour-coded by different local environmental densities, (see Paper I), and gradients distributions (histograms), for the iMaNGA (in violets, left column) and the MaNGA (in reds, right column) samples. The galaxies are divided into early-type (i.e. E+S0, in the upper row), and late-type galaxies (i.e. LTG in the bottom row). We report the linear regressions, the gradients and the Pearson coefficient \(\rho\), considering all galaxies. For the distributions of the gradients (colour-coded by the environmental density), the median \(\mu\) is reported in the legend. We also report the distributions for the entire sample, as well as the median \(\mu\) (in grey).
(\(M_{*}\leq 10^{10}M_{\odot}\)) become flat or even positive in MaNGA, particularly in spiral galaxies. This effect is not recovered by iMaNGA, characterised by only negative gradients.
The simulations predict higher light-weighted stellar ages at all radii. The discrepancy is as high as 4 Gyr for the lowest-mass ellipticals. However, the age gradients are consistent, except we find positive age gradients in low-mass lenticular iMaNGA galaxies, which is not seen in MaNGA. The largest discrepancy between iMaNGA and MaNGA can again be seen in spiral galaxies, mirroring the metallicity profiles in spiral galaxies, where we see significantly steeper age gradients in iMaNGA compared to MaNGA.
We also present radial profiles of the stellar mass density (see SS4.2.3) for both iMaNGA and MaNGA. Both samples show a decrease of \(\Sigma_{*}\) going from the centre to the outskirts of the galaxies, at any stellar mass and at any morphology. The iMaNGA sample recovers the observed \(\Sigma_{*}\) radial profiles fairly well in massive galaxies. However, the profiles are notably steeper than the observed ones in intermediate- and low-mass galaxies - see the discussion in SS4.2.3. This suggests low-mass objects in iMaNGA to be more compact compared to MaNGA. Interestingly, this mismatch has been identified in Paper II, where an offset in the galaxy angular sizes between the two catalogues is shown. In particular, iMaNGA is characterised by smaller galaxy angular sizes compared to MaNGA in low-mass galaxies, while the angular sizes of massive objects match (see Fig. 10 in Paper II).
### Metallicity in the radius-surface mass density plane
We also investigate the local relation between \(\Sigma_{*}\) and stellar metallicity (see SS4.3). Although both MaNGA and iMaNGA have a positive trend between \(\Sigma_{*}\) and [Z/H], the trend is steeper in MaNGA. Furthermore, the simulations systematically predict lower stellar metallicities by almost a factor of 2 ( 0.25 dex) across all surface mass densities.
Analysing the [Z/H]-\(\Sigma_{*}\) trends in the mass-morphology plane (Fig.10), we find that in iMaNGA we always have a linear positive increase between these two quantities, while in MaNGA, for low-mass galaxies, there is no clear trend. Indeed, in MaNGA, the stellar metallicity increases again toward the lowest \(\Sigma_{*}\) values for galaxies with stellar mass \(\leq 10^{10}M_{\odot}\) and this effect is not displayed by iMaNGA. Also, in both samples, the correlation is stronger at higher mass and going from LTGs to ellipticals, and it is overall stronger in iMaNGA (as shown by the Pearson coefficient reported in the figure).
N21 find a constant metallicity or even an _increase in metallicity with increasing radius_ at a fixed stellar surface mass density for almost any morphology and any total stellar mass. This is not seen in the iMaNGA data. At fixed stellar surface mass density at any point in the mass-morphology plane, _metallicity decreases (or remains constant) with increasing radius_. These results indicate the presence of a strong local correlation between the surface mass density and the stellar metallicity in both observations and simulations. In both MaNGA and iMaNGA, there is evidence for galaxy radius being an additional, secondary local driver of metallicity. Interestingly, however, observations and simulations show opposite trends, with metallicity increasing with radius in MaNGA and decreasing with radius in iMaNGA at fixed surface mass density.
### Metallicity gradient as a function of mass and environment
Using the iMaNGA sample we also repeat the analysis presented in Goddard et al. (2016) and GD17 where the interplay between galaxy stellar mass, galaxy environmental density and metallicity gradients is investigated. In agreement with GD17, we find a significant negative correlation between metallicity gradient and the stellar mass in MaNGA, this correlation being stronger for LTGs than for ETGs. In other words metallicity gradient get steeper with increasing galaxy mass. This correlation is reasonably well reproduced by the simulations for LTGs, but not for ETGs. No correlation is found between metallicity gradient and stellar mass for early-type galaxies in iMaNGA. We discuss that this discrepancy is mostly caused by the lack of a mass-dependence of the gradient in surface mass density in iMaNGA.
Interestingly, both MaNGA and iMaNGA show no significant dependence of metallicity gradients on environmental density (see SS4.4).
### Drivers of stellar metallicity
To explain the observed trends in MaNGA, N21 proposes the presence of supplementary drivers of metallicity, which, acting together with the stellar mass, enrich the stellar composition in the outskirts of the galaxies, in particular for low- and intermediate-mass galaxies (\(M_{*}\leq 10^{10.8}M_{\odot}\)). This is not seen in the simulations.
We conclude that TNG50 includes the main drivers of stellar metallicity fairly well in massive elliptical galaxies, while the interplay between stellar surface mass density, stellar metallicity and galactic distance is not fully captured for lenticular and spiral galaxies, as well as low-mass ellipticals.
An analysis of the merger history, gas metallicity, ex-situ and in-situ stellar populations, SMBH activity at any mass in the TNG50 galaxies here adopted might shed light on the way the local metallicity trends are built in the simulations to understand why this discrepancy, noted in particular at low and intermediate mass (\(\leq 10^{10}M_{\odot}\)), arises. We can speculate that the sub-grid models, such as SN and stellar feedback, have a higher impact on lower mass galaxies, and such models might not be able to fully capture the galaxy properties observed by the MaNGA survey.
Since the simulations show steeper metallicity gradients than observed, it might be important to note how galactic winds can re-distribute the metals within the galaxies. The galactic winds in the sub-grid models are dependent on the definition of many properties and parameters, such as wind energy, velocity, mass loading, metal loading, and/or recoupling. Changing any or more of them can significantly alter the way galactic winds act on the simulated galaxies. In particular, in the Auriga simulations (Grand et al., 2017), changing the wind metal loading factor has produced flatter metallicity gradients (Grand et al., 2019).
Further exploration and discussion of the effects of sub-grid physics in TNG50 simulations will be needed to fully address the discrepancies identified in this paper between the theoretical iMaNGA sample and MaNGA observations.
## Acknowledgements
LN is supported by an STFC studentship. STFC is acknowledged for support through the Consolidated Grant Cosmology and Astrophysics at Portsmouth, ST/S000550/1. Numerical computations were done on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEPnet and the University of Portsmouth. JN acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 694343). Funding for the Sloan
Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. The primary TNG simulations were realised with compute time granted by the Gauss Centre for Supercomputing (GCS): TNG50 under GCS Large-Scale Project GCS-DWAR (2016; PIs Nelson/Pilleich), and TNG100 and TNG300 under GCS-ILLU (2014; PI Springel) on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS).
## Data Availability
The iMaNGA catalogue is available through the following website: [https://www.tng-project.org/data/docs/specifications/](https://www.tng-project.org/data/docs/specifications/). All data for the analysis in this paper will be hosted on the same website as iMaNGA-VAC. Momentarily, the iMaNGA-VAC is hosted here: [https://drive.google.com/drive/folders/1efo5Kp01459IOM7Hjs85DJMLGjbeEksS?usp=sharing](https://drive.google.com/drive/folders/1efo5Kp01459IOM7Hjs85DJMLGjbeEksS?usp=sharing). Finally, the iMaStar code can be found here: [https://github.com/lonanni/iMaNGA](https://github.com/lonanni/iMaNGA). The codes for the analysis presented in this paper will be hosted on the same page once the paper is accepted for publication.
MaNGA data are part of SDSS-IV, publicly available at (Abdurro'uf et al., 2022). The FIREFLY code is available at: [https://www.icg.port.ac.uk/FIREFLY](https://www.icg.port.ac.uk/FIREFLY) and the MaStar population models at [https://www.icg.port.ac.uk/mastar](https://www.icg.port.ac.uk/mastar). Illustris and IllustrisTNG data are publicly available at [https://www.illustris-project.org/data](https://www.illustris-project.org/data) (Nelson et al., 2019).
|
2309.14531 | Pixel-Grounded Prototypical Part Networks | Prototypical part neural networks (ProtoPartNNs), namely PROTOPNET and its
derivatives, are an intrinsically interpretable approach to machine learning.
Their prototype learning scheme enables intuitive explanations of the form,
this (prototype) looks like that (testing image patch). But, does this actually
look like that? In this work, we delve into why object part localization and
associated heat maps in past work are misleading. Rather than localizing to
object parts, existing ProtoPartNNs localize to the entire image, contrary to
generated explanatory visualizations. We argue that detraction from these
underlying issues is due to the alluring nature of visualizations and an
over-reliance on intuition. To alleviate these issues, we devise new receptive
field-based architectural constraints for meaningful localization and a
principled pixel space mapping for ProtoPartNNs. To improve interpretability,
we propose additional architectural improvements, including a simplified
classification head. We also make additional corrections to PROTOPNET and its
derivatives, such as the use of a validation set, rather than a test set, to
evaluate generalization during training. Our approach, PIXPNET (Pixel-grounded
Prototypical part Network), is the only ProtoPartNN that truly learns and
localizes to prototypical object parts. We demonstrate that PIXPNET achieves
quantifiably improved interpretability without sacrificing accuracy. | Zachariah Carmichael, Suhas Lohit, Anoop Cherian, Michael Jones, Walter Scheirer | 2023-09-25T21:09:49Z | http://arxiv.org/abs/2309.14531v1 | # Pixel-Grounded Prototypical Part Networks
###### Abstract
Prototypical part neural networks (ProtoPartNNs), namely ProtoPnet and its derivatives, are an intrinsically interpretable approach to machine learning. Their prototype learning scheme enables intuitive explanations of the form, this (prototype) looks like that (testing image patch). But, does this actually look like that? In this work, we delve into why object part localization and associated heat maps in past work are misleading. Rather than localizing to object parts, existing ProtoPartNNs localize to the entire image, contrary to generated explanatory visualizations. We argue that detraction from these underlying issues is due to the alluring nature of visualizations and an over-reliance on intuition. To alleviate these issues, we devise new receptive field-based architectural constraints for meaningful localization and a principled pixel space mapping for ProtoPartNNs. To improve interpretability, we propose additional architectural improvements, including a simplified classification head. We also make additional corrections to ProtoPnet and its derivatives, such as the use of a validation set, rather than a test set, to evaluate generalization during training. Our approach, PixPnet (Pixel-grounded Prototypical part Network), is the **only** ProtoPartNN that truly learns and localizes to prototypical object parts. We demonstrate that PixPnet achieves quantifiably improved interpretability without sacrificing accuracy.
## 1 Introduction
Prototypical part neural networks (ProtoPartNNs) are an attempt to remedy the inscrutability and fundamental lack of trustworthiness characteristic of canonical deep neural networks [13]. By learning prototypes of object parts, ProtoPartNNs make human-interpretable predictions with justifications of the form: _this_ (training image patch) looks like _that_ (testing image patch). Since black-box AI systems often obfuscate their deficiencies [31, 50, 81], ProtoPartNNs represent a shift in the direction of transparency. With unprecedented interest in AI from decision-makers in high-stakes industries -, medicine, finance, and law [50, 54, 66, 82] - the demand for explainable AI systems is greater than ever. Further motivation for transparency is driven by real-world consequences of deployed black boxes [8, 60, 53] and mounting regulatory ordinance [23, 49, 22, 84].
ProtoPartNNs approach explainability from an intrinsically interpretable lens and offer many benefits over post hoc explanation. Whereas post hoc explainers estimate an explanation, ProtoPartNN explanations are part of the actual prediction process - explanations along the lines of "_this_ looks like _that_" follow naturally from the symbolic form of the model itself. This implicit explanation is characteristic of models widely considered to be human-comprehensible [63]. Moreover, ProtoPartNNs enable concept-level debugging, human-in-the-loop learning, and implicit localization [13, 55, 58]. Being independent of the explained model, post hoc explainers have been found to be unfaithful, inconsistent, and unreliable [10, 38, 47, 79] (see Section 2 for expanded discussion).
When misunderstood or used inappropriately, explain
Figure 1: The two primary issues identified with prototype visualization: _here_ (this embedded patch) does not correspond to _there_ (this image patch), and _this_ (prototype) does not correspond to _just that_ (test image patch). In the extreme case, _this_ can actually correspond to _the entire image_ (_i.e._, when the receptive field is 100%).
able AI (XAI) methods can have unintended consequences [39, 47]. This harm arises from unverified hypotheses, whether it is that explanations represent phenomena faithful to the predictor or meaningful properties of the predictor. So, why do we see such hypotheses proliferating throughout both academia and industry [39, 48]? The problem is very human - there is often an over-reliance on intuition that may lead to illusory progress or deceptive conclusions. Whether it is dependence on alluring visualization or behavioral extrapolation from cherry-picked examples, XAI methods often are left insufficiently scrutinized and subject to "researcher degrees of freedom" [48, 72].
Recent evidence indicates that ProtoPartNNs may suffer from these same issues: ProtoPNet and its variants exhibit irrelevant prototypes, a human-machine semantic similarity gap, and exorbitant explanation size [34, 43, 78]. Unfortunately, in our study, we confirm that this is the case - there are several facets of existing ProtoPartNN explanations that do _not_ result from the implicit form of the model: object part localization, pixel space grounding, and heat map visualizations. Instead, these are founded on unverified assumptions and an over-reliance on intuition, often justified _a posteriori_ by attractive visuals. We demonstrate that, colloquially, _this_ does not actually look like _that_, and _here_ may not actually correspond to _there_ - see Figure 1 for illustration. These issues with ProtoPartNNs are not limited to just ProtoPNet, but to all of its derivatives.
This work aims to elevate the interpretability of ProtoPartNNs by rectifying these facets. In doing so, all aspects of ProtoPartNN explanations are embedded in the symbolic form of the model. Our contributions are as follows:
* We identify that existing ProtoPartNNs based on ProtoPNet do not localize faithfully nor actually localize to object parts, but rather the full image in most cases.
* We propose a novel pixel space mapping based on the receptive fields of an architecture (we guarantee that _here_ corresponds to _there_).
* We propose architectural constraints that we efficiently discover through a transfer task to enable true object part localization (_this_ looks like _that_).
* We devise a novel functional algorithm for the receptive field calculation of any architecture.
* On several image classification tasks, our approach, PixPNet, achieves competitive accuracy with other ProtoPartNNs _while maintaining a higher degree of interpretability_, as substantiated by functionally grounded XAI metrics, and being the _only ProtoPartNN that truly localizes to object parts_.
## 2 Background
In this section, we give a brief background of explainable AI methods, the ProtoPNet formulation, and an overview of ProtoPNet extensions.
Explainable AI MethodsExplainable AI (XAI) solutions can be classified as post hoc, intrinsically interpretable, or a hybrid of the two [71]. Whereas intrinsically interpretable methods are both the explanator and predictor, post hoc methods act as an explanator for an independent predictor. Unfortunately, post hoc explainers are known to be inconsistent, unfaithful, and possibly even intractable [7, 10, 15, 47, 26]. Furthermore, they are deceivable [17, 18, 79, 3] and have been shown to not affect, or even reduce, end-user task performance [37, 38]. While this is the case, post hoc explanations have been shown to possibly increase user trust in AI systems [12], improve end-user performance for some explanation types and tasks [37], and explain black boxes in trustless auditing schemes [11]. However, for high-stakes domains, post hoc explanation is frequently argued to be especially inappropriate [66].
For these numerous reasons, our work concerns intrinsically interpretable machine learning solutions (see [71] for a methodological overview). In particular, we are interested in _prototypical part neural networks_ (ProtoPartNNs) [13].
ProtoPNet ArchitectureHere, we go over the ProtoPNet architecture [13], a type of ProtoPartNN. As much of the formalism overlaps with our approach, Figure 2 can be referred to for visualization of the architecture. Let \(\textbf{D}{=}\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{N},y_{N})\}\) be the data set where each sample \(\mathbf{x}_{i}{\in}\mathbb{R}^{3\times H\times W}\) is an image with a height of \(H\) and a width of \(W\), and each label \(y_{i}{\in}\{1,\ldots,C\}\) represents one of \(C\) classes.
A ProtoPNet comprises a neural network backbone responsible for embedding an image. The first component of the backbone is the core \(f_{\text{core}}\), which could be a ResNet[30], VGG[73], or DenseNet[35] as in [13]. Proceeding, there are the add-on layers \(f_{\text{add}}\) that are responsible for changing the number of channels in the output of \(f_{\text{core}}\). In ProtoPNet, \(f_{\text{add}}\) comprises two \(1\times 1\) convolutional layers with ReLU and sigmoid activation functions for the first and second layers, respectively. The full feature embedding function is denoted by \(f=f_{\text{add}}\circ f_{\text{core}}\). This function gives us our _embedded patches_\(f(\mathbf{x}_{i})=\mathbf{Z}_{i}\in\mathbb{R}^{D\times H_{x}\times W_{x}}\) which have \(D\) channels, a height of \(H_{z}\), and a width of \(W_{z}\).
In ProtoPNet, we are interested in finding the most similar embedded patch \(\mathbf{z}\) for each prototype. Each prototype can be understood as the embedding of some prototypical part of an object, such as the head of a blue jay as in Figure 2. Each embedded patch can be thought of in the same way - ultimately, a well-trained network will find that the most similar embedded patch and prototype will both be, _e.g_., the head of a blue jay (_this_ prototype looks like _that_ embedded patch). This is accomplished using the prototype layer, \(g\). We use the notation \(g_{\mathbf{p}_{j}}\) to denote the unit that computes the most similar patch \(\mathbf{z}{\in}\texttt{patches}(\mathbf{Z}_{i})\) to prototype \(\mathbf{p}_{j}\). The function \(\texttt{patches}(\mathbf{Z}_{i})\) yields a set of \(D\times H_{p}\times W_{p}\) embedded patches in a sliding window manner (\(H_{p}{=}W_{p}{=}1\)
in ProtoPNet). First, the pairwise distances between patches\((\mathbf{Z}_{i})\) and prototypes \(\mathbf{P}{=}\{\mathbf{p}_{j}\}_{j=1}^{P}\) are computed using a distance function \(\varphi\) where \(\mathbf{p}_{j}{\in}\mathbb{R}^{D\times H_{p}\times W_{p}}\), \(H_{p}\) is the prototype kernel height, \(W_{p}\) is the prototype kernel width, and \(P\) is the total number of prototypes. Each prototype is class-specific and we denote the set of prototypes belonging to class \(y_{i}\) as \(\mathbf{P}_{y_{i}}{\subseteq}\mathbf{P}\). Subsequently, a min-pooling operation is performed to obtain the closest embedded patch for each prototype - each prototype (_this_) is "assigned" a single embedded patch (_that_). Finally, the distances are converted into similarity scores using a similarity function \(v\). Putting this process altogether for unit \(g_{\mathbf{p}_{j}}\), we have
\[g_{\mathbf{p}_{j}}(\mathbf{Z}_{i})=v\Big{(}\min_{\mathbf{z}\in\texttt{patches}(\mathbf{Z}_{i})} \varphi(\mathbf{z},\mathbf{p}_{j})\Big{)}. \tag{1}\]
We denote the vector of all similarity scores for a sample as \(\mathbf{s}_{i}=g(\mathbf{Z}_{i})\in\mathbb{R}^{P}\).
The architecture ends with a readout layer \(h\) that produces the logits as \(\hat{\mathbf{y}}_{i}=h(\mathbf{s}_{i})\). In ProtoPnet, \(h\) is a fully-connected layer with positive weights to same-class prototype units and negative weights to non-class prototype units. Each logit can be interpreted as the sum of similarity scores weighed by their importance to the class of the logit. Note that this readout layer is not reflected in Figure 2. The full ProtoPnet output for a sample is given by \((h\circ g\circ f)(\mathbf{x}_{i})\).
**ProtoPartNN Desiderata and ProtoPNet Variants** Many extensions of ProtoPnet have been proposed, some of which make alterations that fundamentally affect the interpretability of the architecture. To differentiate these extensions, we propose a set of desiderata for ProtoPartNNs:
1. _Prototypes must correspond directly to image patches_. This can be accomplished via prototype replacement, which grounds prototypes in human-interpretable pixel space (see Section 4 for details).
2. _Prototypes must localize to object parts_.
3. _Case-based reasoning must be describable by linear or simple tree models_.
Architectures that satisfy all three desiderata are considered to be _3-way ProtoPartNNs_ - satisfying fewer diminishes the interpretability of the algorithm.
The idea of sharing prototypes between classes has been explored in ProtoPshare[68] (prototype merge-pruning) and ProtoPool[67] (differential prototype assignment). In ProtoTree[59], the classification head is replaced by a differentiable tree, also with shared prototypes. An alternative embedding space is explored in TesNet[87] based on Grassmann manifolds. A ProtoPartNN-specific knowledge distillation approach is proposed in Proto2Proto[40] by enforcing that student prototypes and embeddings should be close to those of the teacher. Deformable ProtoPnet[20] extends the ProtoPnet architecture with deformable prototypes. ST-ProtoPnet[86] learns support prototypes that lie near the classification boundary and trivial prototypes that are far from the classification boundary.
In an attempt to improve ProtoPnet visualizations, an extension of layer-wise relevance propagation [2], Proto-typical Relevance Propagation (PRP), is proposed to create more model-aware explanations [28]. PRP is quantitatively more effective in debugging erroneous prototypes and assigning pixel relevance than the original approach.
**ProtoPartNN-Like Methods** The following papers are inspired by ProtoPNet but cannot be considered to be the same class of model. This is due to not fulfilling the proposed ProtoPartNN desiderata #1 (prototypes must correspond directly to image patches) and/or #3 (case-based reasoning must be describable by linear or simple tree models).
ViT-Net[42] combines a vision transformer (ViT) with a neural tree decoder that learns prototypes. In another transformer-based approach, ProtoPFormer[88] exploits the inherent architectural features (local and global branches) of ViTs. Semi-ProtoPNet[80] fixes the readout weights as Np-ProtoPNet[77] does and is used for power distribution network analysis. In SDFA-SA-ProtoPNet[36], a shallow-deep feature alignment (SDFA) module aligns the similarity structures between deep and shallow layers. In addition, a score aggregation (SA) mod
Figure 2: (a) Our proposed architecture, PixPNet. (b) An example of an explanation with PixPNet for a Groove-billed Ani. The following are important deviations from ProtoPNet: the backbone \(f\) receptive field is constrained, the readout layer \(h\) is simplified, both prototypes and embedded patches truly localize to object parts, and the pixel space mapping is corrected (see Figure 3).
ule aggregates similarity scores to avoid learning inter-class information. Unfortunately, each of these networks omits prototype replacement with the typical justification being that doing so improves task accuracy. In addition, ViT-Net has additional layers after \(g\) that break the mapping back to pixel space and complicate its case-based reasoning.
## 3 The Problem with Existing ProtoPartNNs
Despite the many extensions of ProtoPnet, there are still fundamental issues with object part localization, pixel space grounding, and heat map visualizations, which preclude _any existing ProtoPartNN from satisfying all three desiderata_ - all ProtoPartNNs violate desideratum #2: prototypes must localize to object parts. The underlying issues with existing ProtoPartNNs arise from 1) their pixel space mapping being reliant on spatial correlation between embedded patches and the input space, which is dubious; 2) their pixel space mapping being receptive field-invariant, arbitrarily localizing to some area in the input. Rather, intrinsically interpretable models should produce explanations _implicit in the symbolic form of the model itself_[63, 71].
As a refresher, the original visualization process involves three steps. First, a single similarity map \(\mathbf{S}_{ij}\)=\(\pi_{\mathbf{p}_{j}}(\mathbf{Z}_{i})\in\mathbb{R}^{H_{z}/H_{p}\times W_{z}/W_{p}}\) is selected for visualization where \(\pi_{\mathbf{p}_{j}}\) gives the similarity map for prototype \(\mathbf{p}_{j}\). Each element of \(\mathbf{S}_{ij}\) is given by \(v\left(\varphi(\mathbf{z},\mathbf{p}_{j})\right)\) where \(\mathbf{z}\)\(\in\)\(\mathtt{patches}(\mathbf{Z}_{i})\). Subsequently, this map is upsampled from \(H_{z}/H_{p}\)\(\times\)\(W_{z}/W_{p}\) to \(H\)\(\times\)\(W\) using bicubic interpolation, producing a heat map \(\mathbf{M}_{ij}\)\(\in\)\(\mathbb{R}^{H\times W}\). To localize within the image, the smallest bounding box is drawn around the largest 5% of heat map elements - this box is of variable size. While no justification is provided for this approach in the original paper [13], we believe that the intuition is that the embedded patches \(\mathbf{Z}_{i}\) maintain spatial correlation with the input. Finally, \(\mathbf{M}_{ij}\) and the bounding box can be superimposed on the input image for visualization. From here on out, we will refer to this as the _original pixel space mapping_, which is visualized in Figure 2(a). It should also be noted that while this pixel space mapping is crucial in establishing interpretability, it is left undiscussed in the vast majority of ProtoPnet extensions. Immediately, we can see several issues with this approach.
_Here_ **Does Not Correspond to _There_** The original pixel space mapping is based on naive upsampling, which is invariant to architectural details. The approach will always assume that all similarity scores can be mapped to pixel space with a single linear transformation - an embedded patch at position \(\langle t_{x},t_{y}\rangle\) is effectively localized to position \(\langle t_{x}\)\(\cdot\)\(W\)\(W_{p}/W_{z}\), \(t_{y}\)\(\cdot\)\(H\)\(\cdot\)\(H_{p}/H_{z}\rangle\) in pixel space. This assumption of spatial correlation from high to low layers is easy to invalidate. For instance, even a simple latent transpose eradicates this correlation. _The similarity scores of **embedded patches do not determine where the architecture "looked" in the image**. Rather, the architecture determines where the similarity scores correspond to in the image**._ Figure 1 demonstrates this discrepancy. Very recently, evidence in [69, 33] strongly corroborates our arguments about poor localization. We correct this pixel space mapping according to the receptive fields of the underlying neural architecture. The original approach also only provides a way to localize a prototype rather than any embedded patch - our method enables us to do so. Our approach is described in detail in Section 4 and we validate its correctness over the original approach in Section 5.
_This_ **Does not Correspond to _Just That_** ProtoPnet and its derivatives all elect to localize to a small region of the input by drawing a bounding box around the largest 5% of values of heat map \(\mathbf{M}_{ij}\) as shown in Figure 2(a). While this produces alluring visualizations, most of the architectures evaluated in all prior approaches have a mean receptive field of 100% at the embedding layer1. _A **mean receptive field of 100% means that every element of the embedding layer output is a complex function of every pixel in the input space. Is it fair to say that only \(\sim\)5% of the input contributed to some part of a decision?_** Attribution within the input space spanned by a receptive field is unverifiable from both the feature-selectivity and feature-additivity points of view [48, 9]. This issue is visualized in Figure 1 for an architecture with a mean receptive field under 100%. Moreover, while selecting the top 5% of \(\mathbf{M}_{ij}\) may localize in accordance with its (faulty) intuition, it can actually localize to wildly inaccurate parts of the image (_e.g_., if multiple top values in \(\mathbf{S}_{ij}\) are all close), breaking the intuition of the (unfaithful) pixel space mapping. We go on to discuss our solution to this problem in Section 4.
Footnote 1: The lowest mean receptive field of an evaluated architecture is from VGG19 (\(\sim\)70%) [13].
**The Allure of Visualization** The original pixel space mapping appears to satisfy human intuitions. However, it is not based on well-justified aspects of explainability. Beyond the assumption of spatial correlation and naive localization, bicubic interpolation artificially increases the resolution of maps (see Figure 2(a)), which leads non-experts to believe that per-pixel attributions are estimated. In our proposed approach, these explanation aspects follow naturally from the symbolic interpretation of the model itself.
## 4 Fixing ProtoPartNNs
As discussed in Section 3, the underlying issues with ProtoPartNNs arise from 1) the original pixel space mapping being reliant on spatial correlation between embedded patches and the input space, which is dubious; 2) the original pixel space mapping being receptive field-invariant, arbitrarily localizing to some area in the input. Our proposed architecture, PixPnet (Pixel-grounded Prototypical
part Network), is largely based on ProtoPNet but mitigates these issues through symbolic interpretation of its architecture - see Figure 2 for an overview. In this section, we first describe a new algorithm for the calculation of receptive fields, describe our proposed fixes for prototype visualization and localization, and proceed with additional ProtoPartNN corrections and improvements. With the proposed improvements, PixPnet is the _only ProtoPartNN that truly localizes to object parts_, satisfying all three desiderata.
**Receptive Field Calculation Algorithm** Before delving into our proposed remedies, we describe our approach to computing receptive fields precisely for any architecture. Our proposed algorithm, FunctionalRF, takes a neural network as input and outputs the _exact_ receptive field of every neuron in the neural network. Recall that a neuron is a function of a _subset_ of pixels defined by its receptive field. FunctionalRF represents receptive fields as hypercubes (multidimensional tensor slices). For instance, the slices for a 2D convolution with a \(5\times 5\) kernel, stride of 1, and \(c_{\text{in}}\) channels at output position \(3,3\) would be \(\{\{[\![1,c_{\text{in}}]\!],[1,5],[\![1,5]\!]\}\}\) where \([\![a,b]\!]\) denotes the slice between \(a\) and \(b\). We can compute the _mean receptive field_ of a layer as the average number of pixels within the receptive field of each hypercube element of a layer output. The algorithm does not rely on approximate methods nor architectural alignment assumptions like other approaches [1, 52]. The full algorithmic details are provided in Appendix C.
**Corrected Pixel Space Mapping Algorithm**_From Embedding Space to Pixel Space_For each prototype \(\mathbf{p}_{j}\), we have some \(\mathbf{z}\in\mathbf{Z}_{i}\) that is most similar. We are interested in knowing where \(\mathbf{z}\) localizes to in an image \(\mathbf{x}_{i}\). With FunctionalRF applied to the backbone, we have the precise pixel space region that \(\mathbf{z}\) is a function of - _this exactly corresponds to that_. This can also be done for any \(\mathbf{p}_{j}\) after prototype replacement. Additionally, this process can actually be used to visualize any \(\mathbf{z}\in\mathbf{Z}_{i}\), unlike the procedure specified in the original pixel space mapping [13]. See Figure 2(b) for intuition as to how this process works.
_Producing a Pixel Space Heat Map_In order to compute a pixel space heat map, we propose an algorithm based on FunctionalRF rather than naively upsampling an embedding space similarity map \(\mathbf{S}_{ij}\). Our approach uses the same idea as going from embedding space to pixel space. Each pixel space heat map \(\mathbf{M}_{ij}\in\mathbb{R}^{H\times W}\) is initialized to all zeros (\(\mathbf{0}^{H\times W}\)), and corresponds to a sample \(\mathbf{x}_{i}\) and a prototype \(\mathbf{p}_{j}\). Let \(\mathbf{M}_{ij}^{S}\) be the region of \(\mathbf{M}_{ij}\) defined by the receptive field of similarity score \(S\in\mathbf{S}_{ij}\). For each \(S\), the pixel space heat map is updated as \(\mathbf{M}_{ij}^{S}\leftarrow\max(\mathbf{M}_{ij}^{S},S)\) where \(\max(\cdot)\) is an element-wise maximum that appropriately handles the case of overlapping receptive fields. We take maxima instead of averaging values due to Eq. (1). Again, see Figure 2(b) for a visualization of this procedure. Further algorithmic details are provided in Appendix D.
**Improved Localization & the "Goldilocks" Zone** To reiterate, the region localized by a ProtoPartNN is controlled by the receptive field of the embedding layers of \(f\). A fundamental goal of ProtoPartNNs is to identify and learn prototypical object parts. We propose to achieve this by constraining the receptive field of \(f\) to a range that yields object parts that are both meaningful and interpretable to humans.
It is well known that the receptive field of a neural network correlates with performance [1, 52] to an extent - too small or large a receptive field can harm performance due to bias-variance trade-offs [45]. We hypothesize that there is a "Goldilocks" zone where the desired receptive field localizes to intelligible object parts without diminishing task performance. To corroborate this, we evaluate various backbone architectures at intermediate layers on ImageNette [25], a subset of ImageNet [16]. The evaluation aims to produce architectures suitable for the backbone of PixPNet according to the criteria outlined prior. We propose this approach as performance on subsets of ImageNet has been shown to be reflective of performance on the full dataset [19], and ImageNet performance strongly correlates with performance on other vision datasets [44]. We detail the full experiment setup in Appendix F. The Pareto front
Figure 3: Visualization of the original and proposed pixel space mapping approaches.
of mean receptive field and accuracy for the evaluated architectures is shown in Figure 4. This front informs our backbone selection as detailed in Section 5.
Simplified Classification HeadWhile the original fully-connected classification head \(h\) is human-interpretable, it has several weaknesses - its explanation size limits its comprehension [43, 78] and it requires an additional training stage, adding up to 100 additional epochs in ProtoPNet2. We quantify explanation size in terms of _positive reasoning_ and _negative reasoning_ about the prediction of a class. For positive reasoning, the number of elements in an explanation with the original fully-connected layer is \(2P/C\): one similarity score per class-specific prototype and a positive weight coefficient. However, considering both positive and negative reasoning involves \(2P\) total explanation elements.
Footnote 2: In the original ProtoPNet implementation, as well as subsequent extensions, the last layer is optimized 5 times, each for 20 epochs [13].
To address these limitations, we propose to replace the linear layer with a class-wise summation. This operation simply produces the logit of each class as the sum of class-specific similarity scores as \(\hat{y}_{ic}=\sum_{j:\mathbf{p}_{j}\in\mathbf{P}_{c}}s_{ij}\) where \(\hat{y}_{ic}\) is the logit for class \(c\) and \(s_{ij}\) is the similarity score for prototype \(\mathbf{p}_{j}\). The layer is visualized in Figure 2. Our new parameter-free readout layer removes the additional training stage and comprises only \(P/C\) explanation elements for _both_ positive and negative reasoning. Substituting our layer in the original ProtoPNet configuration for the CUB-200-2011 dataset [13, 85] reduces the number of explanation elements for a class prediction from 4,000 down to _just 10_.
Other ImprovementsWe also make a few smaller contributions. In prototype replacement, we remove duplicate prototypes (by image or sample) to encourage diversity. If duplicates are found, the next most-similar embedded patch is used in replacement instead. We also reformulate the similarity function \(v\) to have lower numerical error (see Appendix G for details) as \(v(d){=}\log(\frac{1}{d+\varepsilon}{+}1)\) where \(\varepsilon\) mitigates division by zero and the distance \(d{=}\varphi(\mathbf{z},\mathbf{p}_{j})\). While ProtoPNet uses \(\varphi(\mathbf{z},\mathbf{p}_{j}){=}\|\mathbf{z}-\mathbf{p}_{j}\|_{2}^{2}\), we elect to use \(\varphi(\mathbf{z},\mathbf{p}_{j}){=}1-\frac{\mathbf{z}\cdot\mathbf{p}}{\|\mathbf{z}\|_{2}}\) (cosine distance), which has a desirable normalizing factor. This distance is also used in [5, 20, 41, 87]. In implementation, the distances are computed using generalized convolution [13, 57, 29].
TrainingOur multi-stage training procedure is similar to that of ProtoPNet. The first stage optimizes the full network, except for the readout layer, by minimizing Eq. (2) via stochastic gradient descent
\[\frac{1}{N}{\sum_{i=1}^{N}}\mathcal{L}_{\text{xent}}(\hat{\mathbf{y}}_{i},y_{i}){ +}\lambda_{\text{cls}}\mathcal{L}_{\text{cls}}(\mathbf{P},\mathbf{Z}_{i}){+}\lambda_{ \text{sep}}\mathcal{L}_{\text{sep}}(\mathbf{P},\mathbf{Z}_{i}) \tag{2}\]
where \(\mathcal{L}_{\text{xent}}\) is the categorical cross-entropy loss function, \(\lambda_{\text{cls}}\) and \(\lambda_{\text{sep}}\) are auxiliary loss weights, and the auxiliary loss functions, \(\mathcal{L}_{\text{cls}}\) and \(\mathcal{L}_{\text{sep}}\), are defined as
\[\mathcal{L}_{\text{cls}}(\mathbf{P},\mathbf{Z}_{i}) =\frac{1}{N}\sum_{i=1}^{N}\min_{\begin{subarray}{c}\mathbf{p}_{j}\in \mathbf{P}_{y_{i}}\\ \mathbf{z}\in\text{patches}(\mathbf{Z}_{i})\end{subarray}}\varphi(\mathbf{z},\mathbf{p}_{j}) \tag{3}\] \[\mathcal{L}_{\text{sep}}(\mathbf{P},\mathbf{Z}_{i}) =-\frac{1}{N}\sum_{i=1}^{N}\min_{\begin{subarray}{c}\mathbf{p}_{j} \notin\mathbf{P}_{y_{i}}\\ \mathbf{z}\in\text{patches}(\mathbf{Z}_{i})\end{subarray}}\varphi(\mathbf{z},\mathbf{p}_{j}). \tag{4}\]
The goal of \(\mathcal{L}_{\text{cls}}\) is to ensure that at least one embedded patch of every training image is similar to at least one prototype belonging to the class of the image. In contrast, the goal of \(\mathcal{L}_{\text{sep}}\) is to ensure that the embedded patches of every training image are dissimilar from prototypes not belonging to the class of the image.
Subsequently, the prototypes are replaced, which is arguably the most important stage of training as it grounds prototypes in human-comprehensible pixel space. The process involves replacing each prototype \(\mathbf{p}_{j}\) with an embedded patch \(\mathbf{z}\) of a training sample of the same class - the most similar embedded patch replaces the prototype. In the literature, _prototype replacement_ is also referred to as prototype "pushing" or "projection." We stick with "replacement" for the sake of clarity. Formally, this update can be written as \(\mathbf{p}_{j}\leftarrow\operatorname*{arg\,min}_{\mathbf{z}\in\text{patches}(\mathbf{Z}_{i })}\varphi(\mathbf{z},\mathbf{p}_{j}),\text{ s.t. }\mathbf{p}_{j}\in\mathbf{P}_{y_{i}}\). Without this update, the human interpretation of prototypes is unclear as prototypes are not grounded in pixel space.
In ProtoPNet and its variants, a third stage optimizes the linear readout layer. However, we do not employ this stage as our readout layer is parameter-free. The multi-stage optimization process can be repeated until convergence.
## 5 Experiments & Discussion
To validate our proposed approach, PixPnet, we evaluate both its accuracy and interpretability on CUB-200-2011 [85]. We also show evaluation results on Stanford Cars [46] in Appendix B. We draw comparisons against other ProtoPartNNs with a variety of measures. We elect to not crop images in CUB-200-2011 by their bounding
Figure 4: The Pareto front of architectures trained on ImageNet [25] and evaluated at various intermediate layers. This front details the accuracy-localization size trade-offs and informs backbone selection of PixPNet as in Section 5.
box annotations to demonstrate the localization capability of PixPnet. Hyperparameters, software, hardware, and other reproducibility details are specified in Appendix E.
Lastly, upon inspection of the original code base4, we discovered that the test set accuracy is used to influence training of ProtoPnet. In fact, neither ProtoPnet nor its extensions for image classification that are mentioned in Section 2 employ a validation set in provided implementations. See Appendix H for further details.
Footnote 4: A “BBox” value of “?” means that preprocessing details and code are unavailable.
Footnote 5: [https://github.com/cfchen-duke/ProtoPNet](https://github.com/cfchen-duke/ProtoPNet)
In our implementation, we employ a proper validation set and tune hyperparameters only according to accuracy on this split.
AccuracyThe experimental results in Table 1 show that PixPnet obtains competitive accuracy with other approaches regardless of whether images are cropped by bird bounding box annotations - _while we trade off network depth for interpretability, we outperform ProtoPnet and several of its derivatives_. This is quite favorable as PixPnet is the only method that truly localizes to object parts.
InterpretabilityWe evaluate the interpretability of our approach with several functionally grounded metrics [21]. See Figure 1(b) for an example of a PixPnet explanation.
Relevance Ordering Test (ROT)The ROT is a quantitative measure of how well a pixel space mapping attributes individual pixels according to prototype similarity scores [28]. First, a pixel space heat map \(\mathbf{M}_{ij}\) is produced for a single sample \(\mathbf{x}_{i}\) and prototype \(\mathbf{p}_{j}\). Starting from a completely random image, pixels are added back to the random image one at a time in descending order according to \(\mathbf{M}_{ij}\). As each pixel is added back, the similarity score for \(\mathbf{p}_{j}\) is evaluated. This procedure is averaged over each class-specific prototype over 50 random samples. The faster that the original similarity score is recovered, the better the pixel space mapping is. Assuming a faithful pixel space mapping, a network with a mean receptive field of, _e.g._, 25%, will recover the original similarity score after 25% of the pixels are added back in the worst-case scenario.
We also introduce two aggregate measures of the ROT. First is the area under the similarity curve (**AUSC**) which is normalized by the difference between the original similarity score and the baseline value (similarity score for a completely random image)6. Second is the percentage of pixels added back to recover the original similarity score: pixel percentage to recovery (%2R).
Footnote 6: AUSC\(>1\) is possible as the maximum possible similarity is unknown.
We compare our pixel space mapping to the original upsampling approach and PRP[28]. However, the PRP implementation only supports ResNet architectures7, so it is not included in all experiments. The results in Table 2 demonstrate that our pixel space mapping best identifies the most
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l l l l} \hline \hline BBox & D1 & D2 & D3 & Model & \(f\) & \begin{tabular}{l} Expl. \\ Size \(+\) \\ \end{tabular} & \(P\) & MRF & Acc. & \(\pm\) & \(S_{\text{con}}\) & \(S_{\text{stu}}\) & \begin{tabular}{l} Code \\ Avail. \\ \end{tabular} &
\begin{tabular}{l} Val. \\ Set \\ \end{tabular} \\ \hline \multirow{4}{*}{\(\mathbf{\times}\)} & \multirow{4}{*}{\(\mathbf{\times}\)} & \multirow{4}{*}{\(\mathbf{\times}\)} & \multirow{4}{*}{\(\mathbf{\times}\)} & PixPnet (Ours) & ResNekt@layer3 & **10** & **10** & 2000 & 100 & **81.76** & 0.2 & 56.4 & **64.7** & ✓ & ✓ \\ & & & & PixPnet (Ours) & VGG19\& maxpool5 & **10** & **10** & 2000 & 70.4 & 80.10 & 0.1 & 47.6 & 64.2 & ✓ & ✓ \\ & & & & PixPnet (Ours) & VGG16\& maxpool5 & **10** & **10** & 2000 & 52.5 & 79.75 & 0.2 & **69.5** & 51.6 & ✓ & ✓ \\ & & & & PixPnet (Ours) & VGG13\& maxpool4 & **10** & **10** & 2000 & 9.69 & 75.32 & 0.2 & 66.9 & 45.0 & ✓ & ✓ \\ \cline{2-16} & ✓ & ✗ & ✓ & ST-ProtoPnet[86] & DenseNet161 & 20 & 4000 & 2000 & 100 & 80.60 & – & – & – & ✗ &? \\ \hline \multirow{4}{*}{\(\mathbf{\times}\)} & \multirow{4}{*}{\(\mathbf{\times}\)} & \multirow{4}{*}{\(\mathbf{\times}\)} & ST-ProtoPnet[86] & DenseNet161 & 20 & 4000 & 2000 & 100 & 86.10 & 0.2 & – & – & ✗ &? \\ & & & & & TseNet[87] & DenseNet121 & 20 & 4000 & 2000 & 100 & 84.80 & 0.2 & 63.1 & 66.1 & ✓ & ✗ \\ & & & & ProtoPool[67] & ResNet152 & 20 & 404 & 202 & 100 & 81.50 & 0.1 & 35.7 & 58.4 & ✓ & ✗ \\ & & & & & ProtoPNet[13] & DenseNet121 & 20 & 4000 & 2000 & 100 & 80.20 & 0.2 & 24.9 & 58.9 & ✓ & ✗ \\ & & & & & Proto2Proto[40] & ResNet34 & 20 & 4000 & 2000 & 100 & 79.89 & – & – & – & ✓ & ✗ \\ & & & & ProtoPshare[68] & DenseNet161 & 1200 & 1200 & 600 & 100 & 76.45 & – & – & – & ✓ & ✗ \\ & & & & ProtoTree[36, 59] & DenseNet121 & 18 & 404 & 202 & 100 & 73.20 & – & 21.5 & 24.4 & ✓ & ✗ \\ \hline \hline \multirow{4}{*}{\(\mathbf{\times}\)} & ✗ & ✗ & ✓ & ProtoPformer[88] & DeiT-S & 40 & 8000 & 4000 & 100 & **84.85** & – & – & – & ✓ & ✗ \\ & & ✗ & ✗ & ✗ & ViT-Net[42, 88] & CaiT-XXS-24 & **8** & 30 & 15 & 100 & 84.51 & – & – & – & ✓ & ✗ \\ \hline ✓ & ✗ & ✗ & ✗ & ViT-Net[42] & SwinT-B & 10 & 62 & 31 & 100 & 91.60 & – & – & – & ✓ & ✗ \\ \hline? & ✗ & ✗ & ✓ & SDFA-SA[36] & DenseNet161 & 20 & 20 & 2000 & 100 & 86.80 & – & 73.2 & 73.5 & ✗ &? \\ \hline \hline \end{tabular}
\end{table}
Table 1: ProtoPartNN results on CUB-200-2011 with ImageNet used for pre-training. Columns D1, D2, and D3 correspond to the three desiderata established in Section 2. Our approach, PixPnet, is the only method that is a _3-way ProtoPartNN_, satisfying all three desiderata. “BBox” indicates whether a method crops each image using a bounding box annotation3. The best results of ProtoPartNNs with and without such annotations are **bold** and underlined, respectively. The table is split based on whether the method meets at least two desiderata. The \(S_{\text{con}}\) and \(S_{\text{stu}}\) scores for other methods are taken from [36] and the top reported accuracy score is taken for each method.
important pixels in an image. Naturally, the mean receptive field correlates with both ROT scores.
Explanation SizeRecall from Section 4 that the explanation size is the number of elements in an explanation,, similarity scores and weight coefficients. This number differs when considering positive or negative reasoning. Due to the original classification head being fully-connected, most ProtoPartNNs have large explanation sizes when considering both positive and negative reasoning, as shown in Table 1. In contrast, our explanation size comprises just 10 elements when reasoning about a decision. Our proposed classification head helps to prevent overwhelming users with information, which has been shown to be the case with other ProtoPartNNs [43].
ConsistencyThe consistency metric [36] quantifies how consistently each prototype localizes to the same human-annotated ground truth part. It evaluates both semantic similarity quality and the pixel space mapping to a degree. For a sample with label, the pixel space mapping is computed for each prototype. Let be a binary vector indicating which of object parts are contained within the region localized by the pixel space mapping. Let be a binary vector indicating which of the object parts are actually visible in. A single object part is associated with by taking the maximum frequency of an object part present in the pixel space mapping region across all applicable images. A prototype is said to be consistent if this frequency is at least,,
where are samples of the same class allocated to denotes element-wise division, and is the indicator function. To compare with results reported in [36], we change the receptive field size in our pixel space mapping to equal this, as well as set. A notable weakness of the evaluation approach is that it uses a fixed pixel region independent of the architecture. While the approach is not perfect, it allows for reproducible and comparative interpretability evaluation between ProtoPartNN variants.
Results are shown in Tables 1 and 2 for CUB-200-2011, which provides human-annotated object part annotations. We outperform ProtoPnet and many of its variants, as well as the original pixel space mapping (Table 2).
StabilityThe stability metric [36] measures how robust object part association is when noise is added to an image. Simply, some noise is added to each sample and the object part associations are compared as
Following [36], we set. Results in Tables 1 and 2 support the robustness of PixPnet compared to other ProtoPartNNs and the original pixel space mapping. There is a marginal decrease in stability as the receptive field lessens.
## 6 Limitations and Future Work
The receptive field constraint is a design choice and is inherently application-specific, subject to data characteristics and interpretability requirements. Future work should investigate multi-scale receptive fields and automated receptive field design techniques. Nevertheless, we _trade off network depth for significant gains in interpretability with very little penalty in accuracy_. Prior studies have shown that ProtoPartNNs have a semantic similarity gap with humans, prototypes can be redundant or indistinct, and limited utility in improving human performance [33, 34, 78, 43]. Moreover, the consistency and stability evaluation metrics are imperfect. Although we improve upon interpretability over other networks, human studies are needed to understand other facets of interpretability, such as trustworthiness, acceptance, and utility [71]. In the future, architectural improvements should be made,, the enriched embedding space of TesNet, prototype diversity constraints [83, 70, 86], and human-in-the-loop training [55].
\begin{table}
\begin{tabular}{l c c c c c c} Backbone & MRF & Acc. & PSM &, & & & & \\ & & & & & & & \\ VGG11 & & & & & & \\ @maxpool & & & & & & \\ & & & & & & \\ VGG13 & & & & & & \\ @maxpool & & & & & & \\ & & & & & \\ & & & & & & \\ & & & & & \\ & & & & & & \\ & & & & & \\ & & & & & & \\ & & & & & \\ & & & & & & \\ & & & & & \\ & & & & & \\ \end{tabular}
\end{table}
Table 2: Evaluation of pixel space mapping (PSM) methods with functionally-grounded interpretability metrics. Methods are compared on PixPNet with “Goldilocks” zone and ResNet backbones on CUB-200-2011 (no BBox cropping). Our PSM outperforms both the original and PRP PSMs across _all_ backbones. |
2309.04722 | TECVis: A Visual Analytics Tool to Compare People's Emotion Feelings | Twitter is one of the popular social media platforms where people share news
or reactions towards an event or topic using short text messages called
"tweets". Emotion analysis in these tweets can play a vital role in
understanding peoples' feelings towards the underlying event or topic. In this
work, we present our visual analytics tool, called TECVis, that focuses on
providing comparison views of peoples' emotion feelings in tweets towards an
event or topic. The comparison is done based on geolocations or timestamps.
TECVis provides several interaction and filtering options for navigation and
better exploration of underlying tweet data for emotion feelings comparison. | Ilya Nemtsov, MST Jasmine Jahan, Chuting Yan, Shah Rukh Humayoun | 2023-09-09T08:52:20Z | http://arxiv.org/abs/2309.04722v1 | # TECVis: A Visual Analytics Tool to Compare People's Emotion Feelings
###### Abstract
Twitter is one of the popular social media platforms where people share news or reactions towards an event or topic using short text messages called "tweets". Emotion analysis in these tweets can play a vital role in understanding peoples' feelings towards the underlying event or topic. In this work, we present our visual analytics tool, called TECVis, that focuses on providing comparison views of peoples' emotion feelings in tweets towards an event or topic. The comparison is done based on geolocations or timestamps. TECVis provides several interaction and filtering options for navigation and better exploration of underlying tweet data for emotion feelings comparison.
IEEE 2023 11th International Conference on Intelligent Robots and Systems (ICRI), Vol. 1, No. 1, No.
We categorize emotion feelings into positive feelings category (i.e.: _anticipation_, _trust_, _surprise_, and _joy_) and negative feelings category (i.e.: _anger_, _fear_, _sadness_, and _disgust_). We noticed that the NRC Emotion Lexicon library may provide score in both kind of feelings, as sometimes it is difficult to extract the exact feelings based on the short text in tweets. Therefore, we decided to associate a tweet either to positive feelings category or negative feelings category based on which category has higher aggregated value. Furthermore, for a better mean value we consider a feeling towards a tweet only if it has a value above the 0.1 threshold.
On the left side of each geolocation, TECVis uses a horizontal bar to represent the associated tweets' count to this geolocation. Each bar also shows the associated sentiment polarity distribution (i.e., red for negative, blue for neutral, and green for positive). Each polarity color length represents the count of associated tweets. Mouse hover a particular bar provides further details through a tooltip. The user can switch to the timestamp view, where y-axis is then used for comparison based on timestamp (e.g., days, weeks, or months).
For a side-by-side comparison of two geolocations or timestamps (users can click to select these on the main view), TECVis shows a new pop-up view with a Tornado chart (see Fig. 2. In this Tornado chart, each side shows emotion feeling scores of one selected geolocation or timestamp. For providing the differences between the scores, TECVis highlights the difference value using a darker color in the higher score side. This gives a quick indication of not only the high score value of an emotion feeling from one geolocation/timestamp but also the level of difference between them.
TECVis provides several interaction, filtering, and navigation options: Users can navigate from geolocation comparison to timestamp comparison and vice versa. For example, when users select a particular geolocation then TECVis updates the current view with showing the underlying geolocation tweet data with comparison perspective of timestamps, where users can see the comparison by days, weeks, or months. This option also works from the main timestamp comparison view to geolocation comparison view. TECVis also provides the facility to filter the data based on selected geolocations (see right-side panel in Fig. 1) or timestamps. Users can also filter the data based on a particular emotion feeling or the emotion feeling score range using a score range bar (see right-side panel in Fig. 1).
## 4 Concluding Remarks
Visual comparison and exploration of peoples' feelings, based on geolocations or timestamps, towards an event or topic using the emotions and sentiment polarities in tweets could be useful for understanding better the political and social norms of different geolocations towards the same event or topic. In the future, we plan to conduct detailed user study to evaluate the tool from the common usability aspects as well as explorative user study to find out how analysts can explore and compare political or social norms of different geolocations using the peoples' emotion feelings in tweets using different datasets. We also intend to open source the tool so researchers and analysts can use their datasets for exploring different events or topics.
|
2309.12140 | Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features | The rapid development of 3D object detection systems for self-driving cars
has significantly improved accuracy. However, these systems struggle to
generalize across diverse driving environments, which can lead to
safety-critical failures in detecting traffic participants. To address this, we
propose a method that utilizes unlabeled repeated traversals of multiple
locations to adapt object detectors to new driving environments. By
incorporating statistics computed from repeated LiDAR scans, we guide the
adaptation process effectively. Our approach enhances LiDAR-based detection
models using spatial quantized historical features and introduces a lightweight
regression head to leverage the statistics for feature regularization.
Additionally, we leverage the statistics for a novel self-training process to
stabilize the training. The framework is detector model-agnostic and
experiments on real-world datasets demonstrate significant improvements,
achieving up to a 20-point performance gain, especially in detecting
pedestrians and distant objects. Code is available at
https://github.com/zhangtravis/Hist-DA. | Travis Zhang, Katie Luo, Cheng Perng Phoo, Yurong You, Wei-Lun Chao, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger | 2023-09-21T15:00:31Z | http://arxiv.org/abs/2309.12140v1 | # Unsupervised Domain Adaptation for Self-Driving from
###### Abstract
The rapid development of 3D object detection systems for self-driving cars has significantly improved accuracy. However, these systems struggle to generalize across diverse driving environments, which can lead to safety-critical failures in detecting traffic participants. To address this, we propose a method that utilizes unlabeled repeated traversals of multiple locations to adapt object detectors to new driving environments. By incorporating statistics computed from repeated LiDAR scans, we guide the adaptation process effectively. Our approach enhances LiDAR-based detection models using spatial quantized historical features and introduces a lightweight regression head to leverage the statistics for feature regularization. Additionally, we leverage the statistics for a novel self-training process to stabilize the training. The framework is detector model-agnostic and experiments on real-world datasets demonstrate significant improvements, achieving up to a 20-point performance gain, especially in detecting pedestrians and distant objects. Code is available at [https://github.com/zhangtravis/Hist-DA](https://github.com/zhangtravis/Hist-DA).
## 1 Introduction
Self-driving cars need to detect objects like cars and pedestrians and localize them in 3D to drive safely. 3D object detection systems have advanced rapidly in accuracy, but still fail to generalize across the extremely diverse domains where vehicles are deployed: A perception system trained in sunny California may never have seen snow-covered cars, and may fail to detect these cars with disastrous consequences. Unfortunately, we cannot afford to separately annotate training data for every location a car might be driven in. We therefore need ways of adapting 3D perception systems to new driving environments without labeled training data. This is the problem of _unsupervised domain adaptation_, where the object detector must be adapted to a new _target_ domain where only unlabeled data is available. In this work, we explore 3D object detection from LiDAR data and how to best adapt it to a set of diverse, real-world scenarios.
Different from prior works in unsupervised domain adaptation, we follow [13] to include the assumption that unlabeled repeated traversals of the same locations are available to the adaptation algorithm. As discussed in the prior works, such assumptions are highly realistic: for example, roads and intersections are usually visited many times by many vehicles. Prior work has shown that the additional information from repeated traversals helps 3D detection in the same domain [11, 12], and helps the perception models adapt to a new domain [13].
However, it is not readily obvious how to best utilize the repeated traversals. Rode-DA [13] uses P2-score, which is a statistic computed from repeated LiDAR scans characterizing the persistence of different areas of the 3D scene, to correct the false positive detections in self-training and better supervise the model. We argue that this method has not fully exploited the information from the P2-score and it can be used in a more principled way to guide the adaptation process.
Our key insight is that the P2-score is a perfect signal to _regularize_ the feature in the detection training. Our full method is based on Hindsight [11]. Hindsight enhances LiDAR-based 3D object detection models with the spatial quantized historical (SQuASH) features computed from repeated past traversals. The authors show that Hindsight can greatly improve the detection performance when tested within similar areas. However, the SQuASH features do not guarantee to be invariant across different domains, resulting in limited performance gain. To prevent the SQuASH features overfit to the training domain, we propose to add the P2-score prediction task as an auxiliary task while training the SQuASH featurizer. Observing LiDAR points within each voxel sharing similar P2-score, we apply an extra light-weighted regression head after the SQuASH feature, and train the head with simple P2 regression task. The regres
sion head is only used in training and does not introduce latency overhead during testing.
Pairing with the typical self-training technique in domain adapation, we validate our method on two large, real-world datasets: Ithaca365 [2] and Lyft [4], as well as a suite of representative object detectors. Our method, which we term Historically Guided Domain Adaptation (Hist-DA), can achieve up to 20 points in improvement, most notably in difficult cases such as detecting pedestrians and far away objects. Furthermore, our method requires very little tuning to achieve strong performance for 3D object detection. Concretely, our contributions are as follows:
* Our methodology identifies a strong source of information with a high learning signal to improve self-supervised adaptation.
* We designed a model-agnostic adaptation framework to leverage repeated traversals effectively.
* We empirically validated our approach on two real-world datasets and show through ablation studies that Hist-DA is robust and generalizable.
## 2 Related Works
**Past Traversals in Autonomous Driving** Human drivers often drives through the same locations repeatedly, thus it is natural to assume that the (unlabeled) data collected for training perception systems for self-driving vehicle contains repeated traversals of different locations. Past works have leveraged this property to enhance the perception of autonomous vehicles. These include self-supervising 2D representation for visual odometry [1], uncovering mobile objects in LiDAR in an unsupervised manner [12], etc. The line of work that is directly related to ours is Hindsight [11] where the authors proposed to learn additional feature descriptors for each point in a LiDAR point cloud from the unlabeled past traversals for better downstream 3D object detection. Hindsight is simple and effective and would work with any downstream 3D detectors that consumes 3D LiDAR point cloud. In this work, we seek to adapt this family of detectors when deploying to a new domain where unlabeled past traversals are available.
**Unsupervised Domain Adaptation (UDA) for 3D Object Detection.** Adapting 3D object detectors to new domains where no labels are available is crucial to deployment of self-driving vehicles. The key to UDA is to understand the domain differences that the detector would encounter during deployment. SN discovers that car sizes could be a source of domain differences and propose to normalize the car sizes when training the detectors on the source domain [7]; SPG identifies point cloud density as one potential source of differences and propose to fill in point clouds during deployment [8]. Though these methods have shown remarkable progress in the problem, they all target specific domain differences, which is not feasible in all cases. One way to characterize domain differences is through the use of unlabeled data. Along this vein, ST3D [9] and ST3D++ [10] adopt conventional self-training approaches with improved filtering mechanism to stabilize adaptation whereas MLC-Net [6] achieves domain alignment via enforcing consistency between a source detector and its exponential moving average on the unlabeled data. Though these methods[6, 7, 9] are effective, they mostly assume that all the unlabeled data are i.i.d. which ignores other potential signals that could be inherent in the unlabeled data such as temporal signals [13] that are potentially useful in adaptation. In this work, we explore using unlabeled past traversals for domain adaptation. As shown in [13], these correlated data contains potent signals for aiding adaptation. However, crucially different from [13], we focus on adapting Hindsight -- a family of models that uses past traversals during inference time.
## 3 Historically Guided Domain Adaptation
We attempt to adapt a 3D object detector to a target domain using unlabeled data. Different from typical adaptation setup [7, 9], we assume the autonomous driving system has access to multiple traversals of the same driving scenes and accurate localization information, both in the source and target domain. In section 3.1, we will clearly lay out the adaptation setup. Then, we will discuss relevant background information to clarify our proposed methodology in section 3.2. Our key insight is to leverage P2-score information from repeated traversals to adapt the detector's point features from one domain to another --akin to feature-alignment works done in the 2D space-- as well as self-training to ensure stable predictions. We discuss the relevant adaptation strategies in section 3.3. Our overall method is shown in Figure 1.
### Unsupervised Domain Adaptation with Repeated Traversals
Our goal is to adapt a LiDAR-based detector using repeated traversals of unlabeled point clouds \(\{\mathbf{P}_{i}^{t}\}\) and the associated global localization \(\{G_{i}^{t}\}\) from the target domain, for the \(i\)-th frame in traversal \(t\). We assume that the source domain also has access to multiple repeated traversals, as well as the bounding box labels \(\mathbf{b}_{c}\) associated with training point clouds \(\mathbf{P}_{c}\), for the \(c\)-th training frame. To characterize these historical traversals, we combine the point clouds for a single traversal to create a dense point cloud in the same way as [11]. Specifically, for a single traversal in a domain that consists of a sequence of point clouds, we transform each point cloud into a fixed global coordinate system. Then, for a location \(l\) in a single frame \(i\) within traversal \(t\) every \(m\) meters along the road, the point clouds
from a range \([-H_{m},H_{m}]\) are combined to produce a dense point cloud \(\mathbf{D}_{l}^{t}=\bigcup_{G_{i}^{t}\in[l-H_{m},l+H_{m}]}\{\mathbf{P}_{t}^{i}\}\) (with a slight abuse of notation as we use \(G_{i}^{t}\) additionally for the location \(i\) was captured).
### Background
**Persistency Prior Score from multiple traversals.** Our goal is to exploit the inherent information from the unlabeled repeated traversals for adaptation. One source of information we can retrieve is the Persistency Prior (P2) score, that was introduced in [12]. To recap, P2-score uses entropy-based measures to quantify how persistent a single LiDAR point cloud is across multiple traversals. It is calculated using the set of dense point clouds \(\{\mathbf{D}_{l}^{t}\}_{t=1}^{T}\), for \(T\geq 1\) traversals of a location. For a given 3D point \(\mathbf{q}\) around location \(l\), we first count the number of neighboring points around \(\mathbf{q}\) within a certain radius \(r\) in each \(\mathbf{D}_{l}^{t}\):
\[N_{t}(\mathbf{q})=\left|\{\mathbf{p}_{i};||\mathbf{p}_{i}-\mathbf{q}||_{2}<r,\mathbf{p}_{i}\in \mathbf{D}_{l}^{t}\}\right| \tag{1}\]
We can then normalize the neighbor count \(N_{t}(\mathbf{q})\) across traversals \(t\in\{1,...T\}\) into a categorical probability:
\[P(t;\mathbf{q})=\frac{N_{t}(\mathbf{q})}{\sum_{t^{\prime}=1}^{T}N_{t^{\prime}}(\mathbf{q})} \tag{2}\]
Using \(P(t;\mathbf{q})\), we can then compute the P2-score \(\tau(\mathbf{q})\) the same way as [12]:
\[\tau(\mathbf{q})=\begin{cases}0&\text{if }N_{t}(\mathbf{q})=0\ \forall t;\\ \frac{H(P(t;\mathbf{q}))}{\log(T)}&\text{otherwise}\end{cases} \tag{3}\]
where \(H\) is the information entropy. Intuitively, a higher P2-score corresponds to a more persistent background, while a lower P2-score corresponds to a mobile foreground object. This value is a statistic that can be calculated from the repeated traversals, and as a result, it's natural for us to use an architecture that leverages these data. One such candidate is Hindsight.
**Hindsight.** Hindsight [11] is an end-to-end featurizer intended to extract contextual information from repeated past traversals of the same location. The authors proposed an easy-to-query data structure used to endow the current point cloud with information from past traversals to improve 3D detection.
Given the dense point cloud \(\mathbf{D}_{l}^{t}\), Hindsight encodes it using a spatial featurizer that results in a spatially-quantized feature tensor \(\mathbf{Q}_{l}^{t}\). This can be applied to the \(T\) tensors, one for each traversal in location \(l\), which is then aggregated into a single tensor \(\mathbf{Q}_{l}^{g}\), deemed SQuaSH, using a per-voxel aggregation function \(f_{agg}\):
\[\mathbf{Q}_{l}^{g}=f_{agg}(\mathbf{Q}_{l}^{1},...,\mathbf{Q}_{l}^{T}) \tag{4}\]
Once deployed, if the self-driving car captures a new scan \(\mathbf{P}_{c}\) at a new location \(G_{c}\) and the SQuaSH feature at this location is \(\mathbf{Q}_{l_{c}}^{g}\), Hindsight endows \(\mathbf{P}_{c}\) by querying the features \(\mathbf{Q}_{l_{c}}^{g}\) around it. In the work [11], the SQuaSH featurizer is trained concurrently with the object detector, using the detection loss as a signal for gradient updates.
### Adaptation Strategy
Our adaptation strategy consists of P2 feature alignment training in the source domain and unsupervised self-training in the target domain.
P2 Feature Alignment TrainingThough the computation of P2-score is of high latency and thus it is hard to be applied online, it serves perfectly as an additional signal for offline adaptation algorithm. With P2-score, we construct a simple self-supervised learning task to adapt the SQuaSH features after deployment.
Figure 1: **Method diagram of the adaptation process.** The method is divided into source domain training, target domain unsupervised training, and finally deployment on the target domain. The repeated traversals from the source domain are colored in blue, and those from the target domain are colored in yellow. Best viewed in color.
Consider a point \(\mathbf{q}\) in a point cloud \(\mathbf{P}_{c}\), we can obtain 1) its corresponding SQuaSH feature \(\mathbf{Q}_{l}^{g}(\mathbf{q})\); 2) its corresponding P2-score \(\tau(\mathbf{q})\). Since the SQuaSH feature is computed from the same traversals, it should contain sufficient information to reproduce the corresponding P2-score. However, the trained model might suffer from the domain difference, and thus in the target domain, it might not be able to encode sufficient information from the past traversals, including those for the P2 score. We thus construct a P2 score prediction task for the SQuaSH feature to help the model align the relevant information it extracts in the source domain to invariant information encoded in P2-scores. For each SQuaSH feature \(\mathbf{Q}_{l}^{g}(\mathbf{q})\), we apply a simple MLP to predict the corresponding P2 score,
\[\hat{\tau}(\mathbf{q})=\mathrm{MLP}(\mathbf{Q}_{l}^{g}(\mathbf{q})). \tag{5}\]
We compute the L1 distance between the predicted P2 score and the corresponding P2 score as the alignment loss,
\[l_{\text{alignment}}=\|\hat{\tau}(\mathbf{q})-\tau(\mathbf{q})\|_{1}. \tag{6}\]
The final objective for the detector training under the source domain consists of the alignment loss \(l_{\text{alignment}}\), in addition to the regular detection loss for the detector we are adapting \(l_{\text{detection}}\), computed from the predicted bounding boxes \(\mathbf{b}\) and the labels \(\mathbf{b}_{c}\). Our methodology is detector agnostic, and we do not assume the base detector or the detection loss \(l_{\text{detection}}\).
Unsupervised Self-TrainingTo stabilize finetuning in the target domain, we apply self-training in the target domain. Similar to works [5, 13] that showed the effectiveness of self-training, we leverage refined _pseudo-labels_ that we generate for adaptation into the target domain.
Given an aligned detector from P2 feature alignment training, we can generate bounding boxes in the target domain. Given a point cloud \(\mathbf{P}_{c}\) in the target domain, we can obtain bounding boxes \(\hat{\mathbf{b}}\) from the detector. Similar to the source domain, we can compute the P2-score for each point cloud in the target domain, \(\tau(\mathbf{q})\), \(\mathbf{q}\in\mathbf{P}_{c}\). To assess the quality of a particular bounding box, we can apply a simple criteria that points within the bounding box cannot be too _persistent_, having P2-scores that are too high. In this work, we filter out bounding boxes that capture points with P2-scores with a 20\(th\)-percentile larger than 0.7:
\[\hat{\mathbf{b}}_{\text{final}}=\{b\in\hat{\mathbf{b}}|\text{P}_{20}(\{\tau(\mathbf{q}_{j} )\}_{j\in b})<0.7\}, \tag{7}\]
with a slight abuse in notation, we denote \(j\in b\) as the \(j\)-th point that is in bounding box \(b\). This gives us the final set of pseudo-labels, \(\hat{\mathbf{b}}_{\text{final}}\), and we compute the detection loss for the model on the pseudo-labels, and the final objective for the unsupervised training on the target domain is this pseudo-label detection loss, \(l_{\text{detection}}\) computed on \(\hat{\mathbf{b}}_{\text{final}}\) as the labels.
## 4 Experiments
**Datasets.** We experiment with two large-scale autonomous driving datasets: the Lyft Level 5 Perception dataset [4] and the Ithaca-365 dataset [2]. To the best of our knowledge,
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} \\ \cline{2-9} Method & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\ \hline No Adapt/ No HS & 42.19 & 12.66 & 0.95 & 18.54 & 40.74 & 18.32 & 0.42 & 21.18 \\ ST3D & 61.63 & 38.70 & 4.73 & 35.89 & 44.37 & 26.94 & 0.00 & 24.97 \\ Rote-DA & **62.85** & **41.88** & 15.07 & 41.32 & 48.76 & 32.61 & 1.21 & 30.59 \\ \hline No Adapt + HS & 41.88 & 29.31 & 16.40 & 30.29 & 51.16 & 26.41 & 5.80 & 29.99 \\ Hist-DA (Ours) & 58.44 & 40.03 & **25.26** & **42.82** & **60.72** & **48.58** & **21.42** & **48.48** \\ Oracle (in domain) & 73.38 & 56.19 & 39.08 & 57.10 & 55.39 & 37.42 & 14.86 & 40.37 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of adapting a detector from Lyft to Ithaca365. Metrics are reported on nuScenes mAP at 1m matching.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} \\ \cline{2-9} Method & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\ \hline No Adapt/ No HS & 59.0 & 40.9 & 25.8 & 45.4 & 16.7 & 8.2 & 0.2 & 6.7 \\ ST3D & **71.8** & **52.1** & 30.4 & **55.7** & – & – & – & – \\ Rote-DA & 54.3 & 31.9 & 14.9 & 35.7 & **29.6** & **34.4** & 4.1 & **22.0** \\ \hline Hist-DA (Ours) & 62.6 & 49.2 & **34.9** & 51.8 & 25.6 & 26.5 & **7.9** & 16.7 \\ Oracle (in domain) & 69.1 & 71.5 & 49.0 & 65.7 & 37.0 & 38.2 & 26.3 & 32.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of adapting a detector from Ithaca365 to Lyft. Metrics are reported at 0.7 IoU matching.
these are the only two publicly available autonomous driving datasets that have both bounding box annotations and multiple traversals with accurate 6-DoF localization. The Lytf dataset is collected in Palo Alto (California, US) and the Ithaca-365 dataset is collected in Ithaca (New York, US). We use the roof LiDAR (40/60-beam in Lyft; 128-beam in Ithaca-365), and the global 6-DoF localization with the calibration matrices directly from the raw data. We simulate adaptation scenarios in both ways: 1) train in Lyft and test in Ithaca-365; 2) train in Ithaca-365 and test in Lyft.
**Source 3D Object Detectors.** In the source domain, we train the default implementation of PointRCNN model with Hindsight [11] using both object detection and P2-score feature alignment training for 60 epochs. We modify the Hindsight model to predict P2-scores from the inputted dense point cloud \(\mathbf{S}_{l}^{\sharp}\). All models are trained with 4 GPUs (NVIDIA A6000).
It is worth noting that our methodology has the ability to be applied to other 3D object detectors as well, and leave this for future exploration.
**Evaluation Metrics.** On the Lyft dataset, we evaluate object detection performance in a bird's eye view (BEV) and use KITTI [3] metrics and conventions for 3D detection. We report average precision (AP) with the intersection over union (IoU) thresholds at 0.7 and 0.5 for Car and Pedestrians. Additionally, these evaluations are evaluated at various depth ranges. Due to space constraints, we report AP\({}_{BEV}\) at IoU=0.7 for Cars and Pedestrians. On the Ithaca365 dataset, the default match criterion is by the minimum distance to the ground-truth bounding boxes. We report the mean average precision (mAP) with match thresholds of 1-meter for Cars and Pedestrians. Since there are too few cyclists in the Ithaca-365 dataset to provide a reasonable performance estimate, we train and evaluate our models only on _Cars_ and _Pedestrians_.
**Adaptation Method Comparisons** We compare the proposed methodology against the following methods with publicly available code: ST3D [9] and Rote-DA [13].
### Domain Adaptation Performance Results
In Table 1 and Table 2, we show the results on adaptation from a detector trained in the Lyft dataset to the Ithaca365 dataset and vice versa. Based on the tables, we can see that our methodology, despite its simplicity, outperforms all baselines in almost all metrics in both adaptation directions. This goes to show not only that using multiple traversals serves as a strong learning signal for these models, but also that predicting P2-scores as a self-supervised learning task leads to a dramatic improvement.
Hist-DA works especially well in more challenging scenarios, specifically in the pedestrian scenario and with farther distances. Although it performs slightly worse for cars at close ranges, our methodology has a significantly stronger performance for pedestrians and for far away objects. The model even outperforms an in-domain detector in all distances for pedestrians by a substantial amount. Furthermore, due to the simple nature of the single round of self-training in Hist-DA, our method is significantly simpler to train than any of the baselines, which require many rounds of self-training. Consequentially, it is significantly simpler to tune and is faster to train. Observe that by adding in Hindsight features (+ HS), we are already able to observe performance gains over the model that doesn't leverage past traversal information. This shows that such historical features already improve adaption and are more robust across domains. By including our method, we are able to achieve the best performance by explicitly bootstrapping in the past traversal statistics in the form of P2-scores.
### Qualitative Results
We visualize our adaptation results in Figure 2, and compare the detections of Hist-DA (in yellow) to detections without adaptation (in blue). Observe that detection results using Hist-DA are qualitatively better than those without adaptation, both in the shape, as well as precision and recall. In particular, for smaller actors such as pedestrians and in actors that are further away. The feature alignment training allows for more robust features that generalizes across domains, and the unsupervised self-training allows for stronger adaptation into the new domain.
### Analysis
**Effect of different adaptation components.** We additionally ablate the different components and report our results in Table 3. Observe that adding in P2-score training is crucial to the generalizability of the features across domains. Additionally, adding in self-training ("Pseudo-Label") helps stabilize the model training in an unsupervised manner into the new domain. Although using both P2 Training and pseudo-labels and only using P2 Training have similar performances, we noticed that the number of traversals from the source to the target domain can affect the performance of including P2 Training in the target domain. This occurs because P2 scores are inherently derived from repeated traversals, and the number of traversals can affect its accuracy. To be more specific, we observed that going from Ithaca365, which had 20 traversals to Lyft, which had 5 traversals made the performance of both P2 and pseudo-labels worse than using pseudo-labels since the P2 scores derived from the target domain was introducing noise to cause the model's P2 backbone to decrease in accuracy.
**Effect of historical traversals.** We examine the effect of the additional information by including unlabeled, historical traversals. We report our findings in Table 4. Although directly evaluating on Ithaca365 using a PointRCNN model
trained on Lyft has noticeable performances, one can see that including the Hindsight model increases the performance in both cars and pedestrians, with some distances improving by almost two-fold. On the other hand, adapting a PointRCNN model without Hindsight leads to improvements specifically for pedestrians, but performs slightly worse than (-) adapt / HS in cars. Naturally, adding both would significantly improve the performance as shown in the table, with adapting improving the pedestrian performance and the hindsight model improving the car performance.
**Robustness of the framework.** We analyze the robustness of our method to localization error and number of past traversals used in computing the historical features of the model. Results for localization error are shown in Table 5; Hist-DA is robust to minor errors in noise. We additionally report results for robustness under number of past traversals in Table 6. Observe that performance gain in adaptation can be seen with even two past traversals of an area. Additionally, our method handles higher depths better than other methods as shown in Table 1 and Table 2, since P2-score as a self-supervision task acts as a prior over the point clouds and inherently removes static objects that normal object detectors might not catch at higher depths.
## 5 Discussion and Future Works
In this work, we propose our method, Hist-DA, for the task of domain adaptation in self-driving object detection. Our work is able to achieve strong performance by training well aligned features from past traversal statistics, and further leverage the statistics to stabilize model outputs in the test domain in an unsupervised manner. Our method is the first to approach domain adaptation for 3D object detection under a feature alignment perspective leveraging past traversal information. Furthermore, by bringing in an architecture specifically designed to leverage such information,
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{PRCNN Model} & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} \\ \cline{2-9} & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\ \hline baseline & 42.2 & 12.7 & 0.9 & 18.5 & 40.7 & 18.3 & 0.4 & 21.2 \\ (-) adapt, HS & 41.9 & 29.3 & 16.4 & 30.3 & 51.2 & 26.4 & 5.8 & 30.0 \\ adapt, (-) HS & 54.1 & 22.8 & 2.0 & 27.0 & 52.5 & 31.2 & 1.9 & 32.1 \\ \hline Ours & **58.4** & **40.0** & **25.3** & **42.8** & **60.7** & **48.6** & **21.4** & **48.5** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation results adapting a detector from Lyft to Ithaca365. Metrics reported on nuScenes mAP at 1m matching.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{Source} & \multicolumn{2}{c}{Target} & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} \\ \cline{2-9} P2 Training & P2 Training & Pseudo-Label & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\ \hline & & & 41.88 & 29.31 & 16.4 & 30.29 & 51.16 & 26.41 & 5.8 & 29.99 \\ ✓ & & & 50.57 & 34.57 & 19.04 & 36.39 & 55.9 & 38.83 & 12.28 & 39.88 \\ ✓ & ✓ & & 37.24 & 15.26 & 2.85 & 18.37 & 41.16 & 26.32 & 12.05 & 27.73 \\ ✓ & & ✓ & **58.44** & 40.03 & 25.26 & 42.82 & 60.72 & **48.58** & **21.42** & **48.48** \\ ✓ & ✓ & ✓ & 58.06 & **42.34** & **25.82** & **43.31** & **60.87** & 48.42 & 20.41 & 47.92 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation results adapting a detector from Lyft to Ithaca365. Metrics are reported on nuScenes mAP at 1m matching.
Figure 2: **Qualitative visualization of adaptation results.** We visualize one example scene (above: LiDAR, below: image, not used for adaptation) from the adaptation results from the Ithaca-365 \(\rightarrow\) Lyft and Lyft \(\rightarrow\) Ithaca-365 datasets. Ground-truth bounding boxes are shown in green, detection boxes of no adaptation and our method are shown in blue and yellow, respectively. Best viewed in color.
we show state-of-the-art performance on two large, real world datasets. Future directions include expanding this framework into other object detectors and exploring other feature alignment methods leveraging past traversals.
|
2310.00427 | Technical Report of 2023 ABO Fine-grained Semantic Segmentation
Competition | In this report, we describe the technical details of our submission to the
2023 ABO Fine-grained Semantic Segmentation Competition, by Team "Zeyu\_Dong"
(username:ZeyuDong). The task is to predicate the semantic labels for the
convex shape of five categories, which consist of high-quality, standardized 3D
models of real products available for purchase online. By using DGCNN as the
backbone to classify different structures of five classes, We carried out
numerous experiments and found learning rate stochastic gradient descent with
warm restarts and setting different rate of factors for various categories
contribute most to the performance of the model. The appropriate method helps
us rank 3rd place in the Dev phase of the 2023 ICCV 3DVeComm Workshop
Challenge. | Zeyu Dong | 2023-09-30T16:32:22Z | http://arxiv.org/abs/2310.00427v1 | # Technical Report of 2023 ABO Fine-grained Semantic Segmentation Competition
###### Abstract
In this report, we describe the technical details of our submission to the 2023 ABO Fine-grained Semantic Segmentation Competition, by Team "Zeyu_Dong" (username:ZeyuDong). The task is to predicate the semantic labels for the convex shape of five categories, which consist of high-quality, standardized 3D models of real products available for purchase online. By using DGCNN as the backbone to classify different structures of five classes, We carried out numerous experiments and found learning rate stochastic gradient descent with warm restarts and setting different rate of factors for various categories contribute most to the performance of the model. The appropriate method helps us rank 3rd place in the Dev phase of the 2023 ICCV 3DVeComm Workshop Challenge.
## I Introduction
In e-commerce, 3D image semantic segmentation is of great significance. E-commerce platforms can create more vibrant and realistic product presentations by merging 3D photos with semantic segmentation technologies. In order to better grasp a product's design and features, users can rotate, zoom, and browse products in an interactive way, which lessess issues with information asymmetry in online shopping. Additionally, 3D picture semantic segmentation enables customers to edit products and see a real-time preview of the personalising effects in a virtual environment. Different components and features of the product can be highlighted through semantic segmentation. Users can make more informed purchases with the aid of better product information and visualization. Users can lessen returns because products don't live up to expectations, eliminate discontent after purchase, and better comprehend the qualities of the product.
This project code link is: [https://github.com/ZeUDong/2023-ABO-Fine-grained-Semantic-Segmentation-Competition](https://github.com/ZeUDong/2023-ABO-Fine-grained-Semantic-Segmentation-Competition)
The competition link is: [https://eval.ai/web/challenges/challenge-page/2027/overview](https://eval.ai/web/challenges/challenge-page/2027/overview)
## II 2023 ABO Fine-grained Semantic Segmentation Competition
The 3D models used to train and test the model are part of the Amazon Berkeley Objects (ABO) Dataset, which features real objects that can be bought online and are of high quality. These models were expertly designed by artists, and they are made up of build-aware connected components that reflect different form aspects like texture, motion, function, interaction, and construction. The main goal of the workshop challenge is to name the connected components in the ABO dataset with fine-grained semantic labels. As seen in the figure below, the 3D models with build-aware connected components are represented as a collection of convex shapes [1, 2].
## III Proposed method
### _Dgcnn_
Dynamic Graph Convolutional Neural Network (DGCNN) is a deep learning model for point cloud processing and semantic segmentation, whose main concept is to handle point cloud data using graph convolutional networks (GCN) [3, 4]. The adjacency links between the points in the point cloud must be determined in order to construct a graph structure. This can be accomplished by measuring the separation or connectedness between each point and the points close by. Typically, a graph or adjacency matrix is built using this data, with each point being connected to those of its neighbors.
### _Sgdr_
We use Stochastic Gradient Descent with Warm Restarts (SGDR) as the Learning rate adjustment strategy, which is a strategy for scheduling cyclic learning rates that is intended to increase the stability and generalizability of model training.
Cosine annealing scheduling, which is the main component of SGDR, is used to modify the learning rate. The learning rate fluctuates during the course of the training process, executing periodic annealing in the form of a cosine function. This type of cosine annealing learning rate scheduling starts out with a high learning rate, then steadily drops until it eventually approaches zero. It helps the model converge more quickly in the initial stages of training and then carry out more precise learning in later stages.
The learning rate is reset to its initial value at the conclusion of each cosine annealing period, and training is then continued in a new epoch. This occasional restart aids in breaking out of local minima and encourages the model to continue exploring a larger parameter space while being trained. Every cycle is a multiple of the one before it, and they all get longer with time. To better balance the demands of quick convergence and fine-grained model adjustment, this feature enables the learning rate to have variable adjustment speeds at various training phases.
### _Training Pipeline_
We conducted distinct training sessions based on five categories and varying dropout levels for the model training portion. The results for each of the five categories are then optimized separately, increasing the accuracy of the overall
results. The learning rate adjustment strategy of all models is SGDR, according to the result of experiments the best dropout for the class chair is 0.6, and the best dropout for others is 0.4.
### _Dataset_
The main goal of the workshop challenge is to give connected components in the ABO dataset fine-grained semantic labels. The convex forms used to depict the 3D models with build-aware connected components include. The data is 3D images of different parts of five classes of objects, which contain chair, bed, lamp, storage furniture, and table. All the images are well processed and are split into train, test, and dev. What we need to do is create five distinct models that correlate to various object component classification categories. Then combine the results of these models and output them to the submission file.
## IV Experiment
### _Evaliuation metrics_
In this competition, the evaluation will be conducted from two aspects, accuracy and Intersection over Union (IoU). Accuracy measures the proportion of samples correctly classified by the model, accuracy = (number of correctly classified samples) / (total number of samples). IoU measures the degree of overlap between the area predicted by the model and the real area, IoU = (intersection area of prediction regions) / (union area of prediction regions).
### _Implementation Details_
According to the baseline offered [1], the optimizer is Adam, and the total number of epoch is 250. The learning rate decay is to multiply the factor, which is 0.8, every 25 epoch. The learning rate is 0.001, and the scene per batch train is 2. The loss of baseline is 0.156.
At first, we changed the number of epoch to 300 at first, the loss is 0.0621 and the LB is 0.77 to 0.81. Then, we use SGDR to adjust the learning rate, we just use a single cycle, the number of epoch is 250, the loss is 0.019, and the LB is 0.77 to 0.82. We did several experiments based on the consequence of the model using SGDR and changed the number of epoch and dropouts to improve the performance. Finally, the dropout of chair is 0.6 and the dropout of other classes is 0.4.
We used a 3090ti graphics card with 24G video memory to train the model. The training time for a single category was 3.5 hours, and the training time for five categories was 17.5 hours. The time to infer a single graph is 3.2 ms.
## V Conclusion
In this challenge, Five models were trained to correspond to various categories, and the best five were pooled. On the ABO dataset, we highlighted the significance of our suggested model, and we nearly met SOTA performance. Along with the above-mentioned efficient techniques, we also experimented with a number of novel techniques during the participation process, such as batch size reduction and changing Relu to LeakyRelu. These techniques, however, will not lead to better performance.
For future works, we would try to use other powerful backbones, such as a 3D transformer [5], to test whether the performance can be improved. We believe that by using the appropriate method as we did in this challenge the accuracy as well as the IoU would be enhanced and improved.
Fig. 1: Pipeline |
2309.16326 | Numerical schemes for a multi-species quantum BGK model | We consider a kinetic model of an N-species gas mixture modeled with quantum
Bhatnagar-Gross-Krook (BGK) collision operators. The collision operators
consist of a relaxation to a Maxwell distribution in the classical case, a
Fermi distribution for fermions and a Bose-Einstein distribution for bosons. In
this paper we present a numerical method for simulating this model, which uses
an Implicit-Explicit (IMEX) scheme to minimize a certain potential function.
This is motivated by theoretical considerations coming from entropy
minimization. We show that theoretical properties such as conservation of mass,
total momentum and total energy as well as positivity of the distribution
functions are preserved by the numerical method presented in this paper, and
illustrate its usefulness and effectiveness with numerical examples | Gi-Chan Bae, Marlies Pirner, Sandra Warnecke | 2023-09-28T10:39:55Z | http://arxiv.org/abs/2309.16326v1 | # Numerical schemes for a multi-species quantum BGK model
# Numerical schemes for a multi-species quantum BGK model
Gi-Chan Bae, Marlies Pirner, Sandra Warnecke
Scoul National UniversityUniversity of MuensterUniversity of Wuerzburg
**Abstract:** We consider a kinetic model of an N-species gas mixture modeled with quantum Bhatnagar-Gross-Krook (BGK) collision operators. The collision operators consist of a relaxation to a Maxwell distribution in the classical case, a Fermi distribution for fermions and a Bose-Einstein distribution for bosons. In this paper we present a numerical method for simulating this model, which uses an Implicit-Explicit (IMEX) scheme to minimize a certain potential function. This is motivated by theoretical considerations coming from entropy minimization. We show that theoretical properties such as conservation of mass, total momentum and total energy as well as positivity of the distribution functions are preserved by the numerical method presented in this paper, and illustrate its usefulness and effectiveness with numerical examples
## 1 Introduction
In a kinetic description, the state of a dilute gas or plasma is given by a distribution function that prescribes the density of particles at each point in position-momentum phase space. In a time-dependent setting, the evolution of this distribution function is due to a balance of particle advection and binary collisions. Perhaps the most well-known model for collisions is the Boltzmann collision operator, an integral operator that preserves collision invariants (mass, momentum and energy) and dissipates the mathematical entropy of the system. Unfortunately, the expense of evaluating this operator can be prohibitive. Indeed, its evaluation requires the calculation of a five-dimensional integral at every point in phase-space. Thus even with fast spectral methods [35, 36, 16, 15], the collision operator is typically the dominant part of a kinetic calculation. The quantum modification of the celebrated Boltzmann equation was made in [39, 40] to incorporate the quantum effect that cannot be neglected for light molecules (such as Helium) at low temperature. Quantum Boltzmann equation is now fruitfully employed not just for low temperature gases, but in various circumstances such as scattering problem in solid [3, 13] and electrons on energy band structure in semiconductor [27].
In the classical case, the Bhatnagar-Gross-Krook (BGK) operator is a widely used surrogate for the Boltzmann operator that models collisions by a simple relaxation mechanism. This simplification brings significant computational advantages while also maintaining the conservation and entropy dissipation properties of the Boltzmann operator. As in the classical case, the quantum BGK models are widely used in place of the quantum Boltzmann equation [3, 13, 27, 31, 37, 32].
In the quantum case, an extension of the single-species BGK model to the multi-species setting was recently developed in [7]. There a sufficient condition is proven that guarantees the existence of equilibrium coefficients so that the model shares the same conservation laws and H-theorem with the quantum Boltzmann equation. Unlike the classical BGK model for gas mixtures [20, 25, 19, 18, 38, 28, 21, 10, 1, 11], the equilibrium coefficients of the local equilibrium for quantum multi-species gases are defined through highly nonlinear relations that are not explicitly
solvable. So it was necessary to verify that such nonlinear relations uniquely determine the equilibrium coefficients, leading to the well-definedness of the model.
In this paper we present a numerical implementation of the quantum multi-species BGK model developed in [7]. The main obstacle of the quantum mixture BGK model is that the equilibrium parameters can not be written by explicit relation with mass, momentum, and energy. Here, we present a method that enables an implicit treatment of the quantum BGK operator following the recently developed method on the BGK model with velocity-dependent collision frequency [22, 23]. The implementation is a discrete velocity method that relies on standard spatial and temporal discretizations from the literature. The crucial step involves the formulation of a convex entropy minimization problem. In particular, the solver uses a numerical minimization procedure in order to determine the coefficients of the attractors. The numerical method was originally developed for a BGK model for gas mixtures with velocity dependent collision frequencies [22, 23]. In this paper, we will see that this method can be also adapted for quantum BGK models for gas mixtures since both models have structural similarities. Both models have the difficulty that the dependency of the attractors on the solution can not be given in an explicit way. The implementation is a discrete velocity method that relies on standard spatial and temporal discretizations from the literature. The key new ingredient is a solver which enables an implicit treatment of the BGK operator. The crucial step involves the formulation of a convex entropy minimization problem. In particular, the solver uses a numerical minimization procedure in order to determine the coefficients of the attractors. This construction guarantees conservation and entropy properties at the discrete level, up to numerical tolerances, even when using a discrete velocity mesh.
The remainder of this paper is organized as follows. In Section 2, 3 and 4, we recall the multi-species quantum BGK model from [7] and its main important properties. In Section 5.1, we present the first- and second-order implicit-explicit time discretizations that are used in the paper. We also introduce the optimization-based approach for the implicit evaluation of the BGK operator. In Section 5.2, we describe the space discretization. In Section 5.3, we verify some structure preserving properties of the semi-discrete scheme. In Section 5.4, we introduce the momentum discretization and summarize the numerical implementation of the optimization algorithm introduced in Section 5.1. In Section 6, we provide an array of numerical results that illustrate the properties of our scheme.
## 2 A consistent multi-species quantum BGK model
We consider two distribution functions \(f_{1}=f_{1}(x,p,t)\geq 0\) and \(f_{2}=f_{2}(x,p,t)\geq 0\) for species with masses \(m_{1}>0\) and \(m_{2}>0\), respectively, with the phase space variables (position and momentum) \(x\in\Omega\) and \(p\in\mathbb{R}^{3}\) and time \(t\geq 0\). To be as general as possible, we generalize the quantum mixture BGK model in [7] describing a mixture of bosons and fermions to the mixture of gases including the interaction of quantum-classical particles:
\[\partial_{t}f_{1}+\frac{p}{m_{1}}\cdot\nabla_{x}f_{1} =\nu_{11}n_{1}(\mathcal{K}_{11}-f_{1})+\nu_{12}n_{2}(\mathcal{K} _{12}-f_{1}), \tag{1}\] \[\partial_{t}f_{2}+\frac{p}{m_{2}}\cdot\nabla_{x}f_{2} =\nu_{22}n_{2}(\mathcal{K}_{22}-f_{2})+\nu_{21}n_{1}(\mathcal{K} _{21}-f_{2}),\]
where \(\mathcal{K}_{ij}\) is the local equilibrium describing the interactions of \(i\)th and \(j\)th component and \(\nu_{ij}n_{j}\) for \(i,j=1,2\) are the collision frequencies. More explicitly, there can be the following cases for fermion \(\tau=+1\), for boson \(\tau=-1\), and for classical particle \(\tau=0\):
\[\mathcal{K}_{11}=\frac{1}{e^{m_{1}a_{1}\left|\frac{p}{m_{1}}-b_{ 1}\right|^{2}+c_{1}}+\tau}, \quad\mathcal{K}_{12}=\frac{1}{e^{m_{1}a\left|\frac{p}{m_{1}}-b \right|^{2}+c_{12}}+\tau},\] \[\mathcal{K}_{22}=\frac{1}{e^{m_{2}a_{2}\left|\frac{p}{m_{2}}-b_{ 2}\right|^{2}+c_{2}}+\tau^{\prime}}, \quad\mathcal{K}_{21}=\frac{1}{e^{m_{2}a\left|\frac{p}{m_{2}}-b \right|^{2}+c_{21}}+\tau^{\prime}}. \tag{2}\]
We denote \(\mathcal{K}\) as Fermi-Dirac distribution, Bose-Einstein distribution and Maxwellian for the case \(\tau=+1,-1,0\), respectively. The equilibrium parameters \((a_{i},b_{i},c_{i})\) and \((a,b,c_{12},c_{21})\) will be determined later to satisfy the conservation laws and the entropy principle. Note that the model includes the following cases depending on the types of the particles
\[(\tau,\tau^{\prime})=\left\{\begin{array}{ll}(+1,+1)&\text{(fermion-fermion)}\\ (-1,-1)&\text{(boson-boson)}\\ (+1,-1)&\text{(fermion-boson)}\\ (+1,\ 0\ )&\text{(fermion-classical)}\\ (\ 0,-1\ )&\text{(classical-boson)}\\ (\ 0\,\ 0\ )&\text{(classical-classical)}\end{array}\right.\]
We define the number density of particles \(n_{i}\), momentum \(P_{i}\), energy \(E_{i}\) of each species as
\[n_{i}=\int_{\mathbb{R}^{3}}f_{i}dp,\quad P_{i}=\int_{\mathbb{R}^{3}}f_{i}p\ dp,\quad E_{i}=\int_{\mathbb{R}^{3}}f_{i}\frac{|p|^{2} }{2m_{i}}dp, \tag{3}\]
and \(m_{i}n_{i}=N_{i}\). The parameters \((a_{i},b_{i},c_{i})\) in \(\mathcal{K}_{ii}\) are chosen such that the local equilibrium satisfies the following identities:
\[\int_{\mathbb{R}^{3}}\mathcal{K}_{ii}dp=n_{i},\quad\int_{\mathbb{R}^{3}} \mathcal{K}_{ii}p\ dp=P_{i},\quad\int_{\mathbb{R}^{3}}\mathcal{K}_{ii}\frac{ |p|^{2}}{2m_{i}}dp=E_{i},\quad(i=1,2). \tag{4}\]
This ensures conservation of the number of particles, momentum and energy in interactions of a species with itself.
The equilibrium parameters \((a,b,c_{12},c_{21})\) of \(\mathcal{K}_{ij}\) for \((ij)=(12),(21)\) are determined to satisfy the following identities:
\[\begin{split}&\int_{\mathbb{R}^{3}}\mathcal{K}_{12}dp=n_{1}, \qquad\int_{\mathbb{R}^{3}}\mathcal{K}_{21}dp=n_{2},\\ &\nu_{12}n_{2}\left(\int_{\mathbb{R}^{3}}\mathcal{K}_{12}p\ dp-P_{1} \right)+\nu_{21}n_{1}\left(\int_{\mathbb{R}^{3}}\mathcal{K}_{21}p\ dp-P_{2} \right)=0,\\ &\nu_{12}n_{2}\left(\int_{\mathbb{R}^{3}}\mathcal{K}_{12}\frac{ |p|^{2}}{2m_{1}}dp-E_{1}\right)+\nu_{21}n_{1}\left(\int_{\mathbb{R}^{3}} \mathcal{K}_{21}\frac{|p|^{2}}{2m_{2}}dp-E_{2}\right)=0.\end{split} \tag{5}\]
This ensures conservation of the number of particles, momentum and energy in interactions of a species with the other one.
The existence of these parameters \((a_{i},b_{i},c_{i})\) and \((a,b,c_{12},c_{21})\) is proven in [7] for quantum-quantum mixture case with unit collision frequencies. We will show the proof for any choice of mixture between classical particle, fermion, and boson for more general collision frequency the correspondence with the classical case.
**Remark 1**.: _For the correspondence of classical case and quantum case, let us assume that the velocity distribution function \(\bar{f}(x,v,t)\) and the momentum distribution function \(f(x,p,t)\) satisfy the following relation for \((i=1,2)\):_
\[\bar{f}_{i}(x,v,t)=\bar{f}_{i}\Big{(}x,\frac{p}{m_{i}},t\Big{)}=m_{i}^{3}f_{i} (x,p,t).\]
_This relation connects the conservation laws of classical case and quantum case as follows:_
\[\int_{\mathbb{R}^{3}}\bar{f}_{i}(x,v,t)\left(\begin{array}{c}1\\ m_{i}v\\ \frac{m_{i}}{2}|v|^{2}\end{array}\right)dv=\int_{\mathbb{R}^{3}}\frac{1}{m_{i}^ {3}}\bar{f}_{i}\left(x,\frac{p}{m_{i}},t\right)\left(\begin{array}{c}1\\ p\\ \frac{|p|^{2}}{2m_{i}}\end{array}\right)dp=\int_{\mathbb{R}^{3}}f_{i}(x,p,t) \left(\begin{array}{c}1\\ p\\ \frac{|p|^{2}}{2m_{i}}\end{array}\right)dp.\]
_Thus the macroscopic fields of the classical one_
\[n_{i}(x,t) =\int_{\mathbb{R}^{3}}\bar{f}_{i}(x,v,t)dv,\quad m_{i}n_{i}=\rho_{i}\] \[U_{i}(x,t) =\frac{1}{n_{i}}\int_{\mathbb{R}^{3}}\bar{f}_{i}(x,v,t)vdv,\] \[T_{i}(x,t) =\frac{1}{3n_{i}}\int_{\mathbb{R}^{3}}\bar{f}_{i}(x,v,t)m_{i}|v-U_ {i}|^{2}dv,\]
_and the quantum one_
\[n_{i}(x,t) =\int_{\mathbb{R}^{3}}f_{i}(x,p,t)dp,\quad m_{i}n_{i}=N_{i}\] \[P_{i}(x,t) =\int_{\mathbb{R}^{3}}f_{i}(x,p,t)pdp,\] \[E_{i}(x,t) =\int_{\mathbb{R}^{3}}f_{i}(x,p,t)\frac{|p|^{2}}{2m_{i}}dp,\]
_have the following relation:_
\[\rho_{i}= m_{i}\int_{\mathbb{R}^{3}}\bar{f}dv=m_{i}\int_{\mathbb{R}^{3}}fdp=N_ {i},\] \[\rho_{i}U_{i}= m_{i}\int_{\mathbb{R}^{3}}\bar{f}vdv=\int_{\mathbb{R}^{3}}fpdp=P_ {i}, \tag{6}\] \[\frac{3}{2}n_{i}T_{i}+\frac{1}{2}\rho_{i}|U_{i}|^{2}=\int_{ \mathbb{R}^{3}}\bar{f}\frac{m_{i}}{2}|v|^{2}dv=\int_{\mathbb{R}^{3}}f\frac{|p |^{2}}{2m_{i}}dp=E_{i}.\]
Having introduced our notation, we can deep dive into the model [7].
## 3 Properties of the model
In this section, we present the main properties of the quantum multi-species BGK model (1).
### Conservation properties, H-Theorem and Boundedness
First of all, the model satisfies the conservation properties and an \(H\)-Theorem if the macroscopic fields have some boundedness conditions. This is already proved in [7] for the Fermion-Fermion, Fermion-Boson and Boson-Boson case with collision frequency \(\nu_{ij}n_{j}=1,i,j=1,2\). We generalize the result to include the Fermion-classical, Boson-classical and classical-classical case with \((x,t)\) depending general collision frequency \(\nu_{ij}n_{j}\).
One of the most characterizing features of quantum mechanics is that the number of particles cannot exceed the specific ratio of the energy. If the number of particles exceeds that critical point, then Bose-Einstein condensation occurs for bosons or saturated state occurs for fermions. Consequently, the local equilibrium of the one-species quantum BGK model is only defined below the specific ratio between number of particles and energy (see [12, 4, 29, 30]). Hence
we introduce the function \(j_{\tau}\) by
\[j_{\tau}(x)=\frac{\int\frac{1}{e^{|p|^{2}+x}+\tau}dp}{\left(\int\frac{|p|^{2}}{e^{ |p|^{2}+x}+\tau}dp\right)^{3/5}}.\]
In the following, we will describe how the equilibrium parameters \((a_{i},b_{i},c_{i})\) and \((c_{12},c_{21},a,b)\) in (2) will be determined in order to ensure the conservation properties (4) and (5). We start withe the equilibrium parameters in the one-species term. The equilibrium parameter \(c_{i}\) is uniquely determined by the following implicit relation,
\[j_{\tau}(c_{i})=\frac{n_{i}}{\left(2m_{i}E_{i}-|P_{i}|^{2}/n_{i}\right)^{\frac {3}{5}}},\]
if the right-hand-side is bounded as follows:
\[\frac{n_{i}}{\left(2m_{i}E_{i}-|P_{i}|^{2}/n_{i}\right)^{\frac{3}{5}}}\leq \begin{cases}j_{+1}(-\infty)&\text{for fermion},\\ j_{-1}(0)&\text{for boson}.\end{cases}\]
(with notation \(j_{+1}(-\infty)=\lim\limits_{x\rightarrow-\infty}j_{+1}(x)\). The reason is that the functions \(j_{+1}\) and \(j_{-1}\) are strictly decreasing on \((-\infty,\infty)\) and \([0,\infty)\), respectively).
In the classical case, we can explicitly compute \(j_{0}(x)=\left(\frac{2\pi}{3}\right)^{\frac{3}{5}}e^{-\frac{2}{5}x}\), which is strictly decreasing. Thus the maximum value of \(j_{0}(x)\) is equal to \(\lim_{x\rightarrow-\infty}j_{0}(x)=\infty\), and the macroscopic fields of the classical case has no restriction from above:
\[0\leq\frac{n_{i}}{\left(2m_{i}E_{i}-|P_{i}|^{2}/n_{i}\right)^{\frac{3}{5}}}<\infty.\]
We define \(l_{+1},l_{0}\) and \(l_{-1}\) to be such points where the respective functions \(j_{+1},j_{0}\) and \(j_{-1}\) are maximal and use the following notation
\[l:\{+1,-1,0\}\rightarrow[-\infty,\infty],\quad l_{\tau}=\begin{cases}l_{+1}=- \infty,\\ l_{-1}=0,\\ l_{0}=-\infty.\end{cases}\]
Then, if the specific ratio of macroscopic fields has the following bound
\[\frac{n_{i}}{\left(2m_{i}E_{i}-|P_{i}|^{2}/n_{i}\right)^{\frac{3}{5}}}\leq j _{\tau}(l_{\tau}),\]
then the equilibrium parameter \(c_{i}\) is uniquely determined by
\[c_{i}=j_{\tau}^{-1}\left(\frac{n_{i}}{\left(2m_{i}E_{i}-|P_{i}|^{2}/n_{i} \right)^{\frac{3}{5}}}\right).\]
Finally, we can define \(a_{i}\), and \(b_{i}\) by
\[a_{i}=\left(\int_{\mathbb{R}^{3}}\frac{1}{e^{|p|^{2}+c_{i}}+\tau}dp\right)^{ \frac{2}{5}}n_{i}^{-\frac{2}{5}},\qquad b_{i}=\frac{P_{i}}{n_{i}}, \tag{7}\]
for \((i=1,2).\) This choice ensures the conservation properties (4) and is proven in [4]. To summarize, we have the following theorem.
**Theorem 3.1.1**.: _Let the macroscopic fields satisfy_
\[\frac{n_{1}}{\left(2m_{1}E_{1}-|P_{1}|^{2}/n_{1}\right)^{\frac{3}{5}}}\leq j_{ \tau}(l(\tau)),\qquad\frac{n_{2}}{\left(2m_{2}E_{2}-|P_{2}|^{2}/n_{2}\right)^{ \frac{3}{5}}}\leq j_{\tau^{\prime}}(l(\tau^{\prime})).\]
_Then the parameters of \(\mathcal{K}_{11}\) and \(\mathcal{K}_{22}\) are uniquely determined as_
\[c_{1}=j_{\tau}^{-1}\left(\frac{n_{1}}{\left(2m_{1}E_{1}-|P_{1}|^{2}/n_{1} \right)^{\frac{3}{5}}}\right),\qquad c_{2}=j_{\tau^{\prime}}^{-1}\left(\frac{n _{2}}{\left(2m_{2}E_{2}-|P_{2}|^{2}/n_{1}\right)^{\frac{3}{5}}}\right).\]
_and_
\[a_{1}=\left(\int_{\mathbb{R}^{3}}\frac{1}{e^{|p|^{2}+c_{1}}+\tau}dp\right)^{ \frac{2}{3}}n_{1}^{-\frac{2}{3}},\qquad a_{2}=\left(\int_{\mathbb{R}^{3}} \frac{1}{e^{|p|^{2}+c_{2}}+\tau^{\prime}}dp\right)^{\frac{2}{3}}n_{2}^{-\frac {2}{3}},\]
_and_
\[b_{1}=\frac{P_{1}}{n_{1}},\quad b_{2}=\frac{P_{2}}{n_{2}}.\]
_Then with this choice of the equilibrium parameters of the one species quantum equilibrium \(\mathcal{K}_{11}\) and \(\mathcal{K}_{22}\) the conservation properties (4) are satisfied._
We note that the local Maxwellian for the classical case corresponds to the original definition in [9]:
\[\mathcal{M}_{ii}=\frac{N_{i}}{\pi^{\frac{3}{5}}}\left(\frac{3}{2}\frac{1}{ \frac{E_{i}}{N_{i}}-\left|\frac{P_{i}}{N_{i}}\right|^{2}}\right)^{\frac{2}{5} }e^{-\frac{3}{5}\frac{\left|\frac{P_{i}}{N_{i}}\right|^{2}}{\frac{E_{i}}{N_{i }}-\left|\frac{P_{i}}{N_{i}}\right|^{2}}}=\frac{n_{i}}{\sqrt{2\pi\frac{T_{i}}{ m_{i}}^{3}}}\exp\left(-\frac{|v-U_{i}|^{2}}{2\frac{T_{i}}{m_{i}}}\right),\]
where we applied the correspondence of the macroscopic fields (6). Then we can see that with such choice of \(a_{i}\), \(b_{i}\) and \(c_{i}\), the local equilibria \(\mathcal{K}_{11}=\mathcal{M}_{11}\) and \(\mathcal{K}_{22}=\mathcal{M}_{22}\) satisfy (4).
Before stating the theorem for the mixture interaction terms, we introduce some notations. We define
\[\eta_{\tau}(x)=\int_{\mathbb{R}^{3}}\frac{1}{e^{|p|^{2}+x}+\tau}dp.\]
Note that \(\eta_{\tau}^{-1}\) always exist since \(\eta_{\tau}\) is strictly decreasing. We define \(k_{\tau,\tau^{\prime}}\) by
\[k_{\tau,\tau^{\prime}}(x,y)=\frac{m_{1}^{3/2}\int_{\mathbb{R}^{3}}\frac{1}{e^{ |p|^{2}+x}+\tau}dp}{\left(\frac{\nu_{12}}{m_{2}}\left(\frac{m_{1}^{3/2}}{2N_{1 }}\int_{\mathbb{R}^{3}}\frac{|p|^{2}}{e^{|p|^{2}+x}+\tau}dp\right)+\frac{\nu_{ 21}}{m_{1}}\left(\frac{m_{2}^{3/2}}{2N_{2}}\int_{\mathbb{R}^{3}}\frac{|p|^{2} }{e^{|p|^{2}+y}+\tau^{\prime}}dp\right)\right)^{\frac{3}{5}}}.\]
Using \(\eta_{\tau}\) and \(k_{\tau}\), we define \(g_{\tau,\tau^{\prime}}\), which is defined as a composite function of \(k_{\tau}\) and \(\eta_{\tau}^{-1}\), as follows:
\[g_{\tau,\tau^{\prime}}(x)=k_{\tau,\tau^{\prime}}\big{(}x,y(x)\big{)}=\frac{m_ {1}^{3/2}\int_{\mathbb{R}^{3}}\frac{1}{e^{|p|^{2}+x}+\tau}dp}{\left(\frac{\nu_{ 12}}{m_{2}}\left(\frac{m_{1}^{3/2}}{2N_{1}}\int_{\mathbb{R}^{3}}\frac{|p|^{2} }{e^{|p|^{2}+x}+\tau}dp\right)+\frac{\nu_{21}}{m_{1}}\left(\frac{m_{2}^{3/2}}{2 N_{2}}\int_{\mathbb{R}^{3}}\frac{|p|^{2}}{e^{|p|^{2}+y(x)}+\tau^{\prime}}dp \right)\right)^{\frac{3}{5}}}, \tag{8}\]
where \(y(x)\) denotes
\[y(x)=\eta_{\tau^{\prime}}^{-1}\left(\frac{m_{1}^{3/2}n_{2}}{m_{2}^{3/2}n_{1}} \eta_{\tau}(x)\right).\]
**Theorem 3.1.2**.: _Assume \(\nu_{12}n_{2}=\nu_{21}n_{1}\). Assume further that_
\[\frac{n_{1}}{\left(2E_{1}+2E_{2}-\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2}}\right)^{ \frac{3}{5}}}\leq g_{\tau,\tau^{\prime}}\left(\max\left\{l(\tau),h_{\tau}^{-1} \left(\frac{m_{2}^{\frac{3}{2}}n_{1}}{m_{1}^{\frac{3}{2}}n_{2}}h_{\tau^{\prime} }(l(\tau^{\prime}))\right)\right\}\right).\]
_Then \(c_{12}\), \(c_{21}\) are defined as a unique solution of the following relations:_
\[\frac{m_{1}^{\frac{3}{2}}h_{\tau}(c_{12})}{m_{2}^{\frac{3}{2}}h_{\tau^{\prime} }(c_{21})}=\frac{n_{1}}{n_{2}},\quad k_{\tau,\tau^{\prime}}(c_{12},c_{21})= \frac{n_{1}}{\left(2E_{1}+2E_{2}-\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2}}\right)^ {\frac{3}{5}}}. \tag{9}\]
_With such \(c_{12}\) and \(c_{21}\), we define \(a\) and \(b\) by_
\[a=\left(\frac{m_{1}^{\frac{3}{2}}\int_{\mathbb{R}^{3}}\frac{|p|^{2}}{e^{|p|^{2 }+c_{12}+\tau}}dp+m_{2}^{\frac{3}{2}}\int_{\mathbb{R}^{3}}\frac{|p|^{2}}{e^{|p|^ {2}+c_{21}+\tau^{\prime}}}dp}{2E_{1}+2E_{2}-\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{ 2}}}\right)^{\frac{3}{5}},\quad b=\frac{P_{1}+P_{2}}{N_{1}+N_{2}}, \tag{10}\]
_Then with this choice of the equilibrium parameters of the mixture species quantum equilibrium \(\mathcal{K}_{12}\) and \(\mathcal{K}_{21}\) the conservation properties (5) are satisfied._
Another property of the model is that the distribution function in the fermion case remains bounded from above by \(1\) for all times \(t\geq 0\). The proof can be found in [7].
**Lemma 3.1.3**.: _Let \(f_{i}\) be a distribution function for fermions and \(f_{i}(x,p,0)<1\). Then we have \(f_{i}(x,p,t)<1\) for \(t\geq 0\)._
Now we consider the entropy principle. For the convenience of notation, we denote
\[H_{\tau}(f)=\begin{cases}\int_{\Omega}\int_{\mathbb{R}^{3}}f\ln fdpdx\quad \text{for}\quad\tau=0\\ \int_{\Omega}\int_{\mathbb{R}^{3}}f\ln f+\tau^{-1}(1-\tau f)\ln(1-\tau f)dpdx \quad\text{for}\quad\tau=\pm 1\end{cases}\]
(\(\tau=+1\) for fermion, \(\tau=-1\), for boson) By abusing the notation \(\lim_{\tau\to 0^{+}}\tau^{-1}(1-\tau f)\ln(1-\tau f)=0\), we also denote the integrand by \(h_{\tau}\), namely
\[h_{\tau}(z)=z\ln z+\tau^{-1}(1-\tau z)\ln(1-\tau z) \tag{11}\]
for \(z>0\) if \(\tau=0,-1\) and \(0<z<1\) if \(\tau=+1\). Then, we define the entropy for the gas mixture as follows:
\[H_{\tau,\tau^{\prime}}(f_{1},f_{2})=H_{\tau}(f_{1})+H_{\tau^{\prime}}(f_{2})\]
Th function \(H_{\tau,\tau^{\prime}}\) can be proven to be a non-increasing function for the model (1).
**Theorem 3.1.4**.: _Let \((f_{1},f_{2})\) be a solution of the equation (1). If the \(i\)-th component is a fermion, we give an additional assumption that \(f_{i}<1\). Then we have_
\[\frac{d}{dt}H_{\tau,\tau^{\prime}}(f_{1},f_{2})\leq 0.\]
_The equality is characterized by \(f_{1}\) and \(f_{2}\) being two Fermion distributions in the Fermion-Fermion case, two Boson distributions in the Boson-Boson case, two Maxwell distributions in the classical-classical case, a Fermi distribution and a Bose distribution in the Fermion-Boson case, a Fermi distribution and a Maxwell distribution in the Fermion-classical case and a Bose distribution and a Maxwell distribution in the Bose- classical case. In all cases, these equilibrium distributions have the same \(a\) and \(b\)._
The proof is a straightforward extension of the proof of Theorem 2.1(3) in [7].
### Macroscopic equations
From the quantum BGK model (1), one can derive the macroscopic equations for a mixture. We first denote the macroscopic fields of the inter-species local equilibrium \(\mathcal{K}_{12}\) and \(\mathcal{K}_{21}\):
\[\begin{split}& P_{12}=\int_{\mathbb{R}^{3}}\mathcal{K}_{12}\ p\ dp,\quad P_{21}=\int_{\mathbb{R}^{3}}\mathcal{K}_{21}\ p\ dp,\\ & E_{12}=\int_{\mathbb{R}^{3}}\mathcal{K}_{12}\frac{|p|^{2}}{2m_ {1}}dp,\quad E_{21}=\int_{\mathbb{R}^{3}}\mathcal{K}_{12}\frac{|p|^{2}}{2m_{2}} dp.\end{split} \tag{12}\]
**Theorem 3.2.1**.: _Assume \(\nu_{12}n_{1}=\nu_{21}n_{2}\). Let \((f_{1},f_{2})\) be a solution to (1), then we obtain the following formal conservation laws_
\[\begin{split}&\partial_{t}n_{1}+\frac{1}{m_{1}}\nabla_{x}\cdot P_{ 1}=0,\quad\partial_{t}n_{2}+\frac{1}{m_{2}}\nabla_{x}\cdot P_{2}=0,\\ &\partial_{t}P_{1}+\frac{1}{m_{1}}\nabla_{x}\cdot\int p\otimes pf _{1}dp=\nu_{12}n_{2}(P_{12}-P_{1})\\ &\partial_{t}P_{2}+\frac{1}{m_{2}}\nabla_{x}\cdot\int p\otimes pf _{1}dp=\nu_{21}n_{1}(P_{21}-P_{2})\\ &\partial_{t}E_{1}+\frac{1}{2m_{1}^{2}}\nabla_{x}\cdot\int|p|^{2 }pf_{1}dp=\nu_{12}n_{2}(E_{12}-E_{1}),\\ &\partial_{t}E_{2}+\frac{1}{2m_{2}^{2}}\nabla_{x}\cdot\int|p|^{2 }pf_{2}dp=\nu_{21}n_{1}(E_{21}-E_{2}),\end{split} \tag{13}\]
_where the exchange terms of momentum can be computed as_
\[P_{12}-P_{1}=-(P_{21}-P_{2})=\frac{N_{1}N_{2}}{N_{1}+N_{2}}\left(\frac{P_{2}}{ N_{2}}-\frac{P_{1}}{N_{1}}\right) \tag{14}\]
_Furthermore, we define the function \(\eta_{\tau}^{E}(c)=\int\frac{|p|^{2}}{e^{|p|^{2}+c}+\tau}dp,\) and obtain for the exchange of energy_
\[\begin{split}& E_{12}-E_{1}=-(E_{21}-E_{2})\\ &=\frac{1}{2}\frac{N_{1}|P_{1}+P_{2}|^{2}}{(N_{1}+N_{2})^{2}}+ \frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2}}}{m_{1}^{ 3/2}\eta_{\tau}^{E}(c_{12})+m_{2}^{3/2}\eta_{\tau}^{E}(c_{21})}m_{1}^{3/2} \eta_{\tau}^{E}(c_{12})-E_{1}\end{split} \tag{15}\]
Proof.: We multiply the first equation of (1) by \((1,p,\frac{|p|^{2}}{2m_{1}})\), and the second one by \((1,p,\frac{|p|^{2}}{2m_{2}}).\) Then we integrate them with respect to the momentum \(p\) to obtain (13) after a straight-forward computation on the left-hand side.
The exchange of momentum can be computed as follows. After computing the integral in the definition of \(P_{12}\) and \(P_{21}\) in (12), we observe that \(P_{12}=bN_{1}\) and \(P_{21}=bN_{2}\). Substituting the quantity of \(b\) in (10), we obtain
\[\frac{P_{12}}{N_{1}}-\frac{P_{1}}{N_{1}}=\frac{P_{1}+P_{2}}{N_{1}+N_{2}}-\frac {P_{1}}{N_{1}}=\frac{N_{2}}{N_{1}+N_{2}}\left(\frac{P_{2}}{N_{2}}-\frac{P_{1}} {N_{1}}\right),\]
and
\[\frac{P_{21}}{N_{2}}-\frac{P_{2}}{N_{2}}=\frac{P_{1}+P_{2}}{N_{1}+N_{2}}=\frac {N_{1}}{N_{1}+N_{2}}\left(\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}}\right).\]
Then, we can compute (similar as in section 2 in [4])
\[E_{12}-\frac{1}{2}\frac{|P_{12}|^{2}}{N_{1}}=\frac{1}{2}a^{-5/2}m_{1}^{3/2}\eta_{ \tau}^{E}(c_{12}) \tag{16}\]
We can replace \(a^{-5/2}\) with the formula (10):
\[E_{12}-\frac{1}{2}\frac{|P_{12}|^{2}}{N_{1}}=\frac{(E_{1}+E_{2})-\frac{1}{2} \frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2}}}{m_{1}^{3/2}\eta_{\tau}^{E}(c_{12})+m_{2 }^{3/2}\eta_{\tau^{\prime}}^{E}(c_{21})}m_{1}^{3/2}\eta_{\tau}^{E}(c_{12})\]
So we obtain
\[\begin{split}& E_{12}-E_{1}=-(E_{21}-E_{2})\\ &=\frac{1}{2}\frac{N_{1}|P_{1}+P_{2}|^{2}}{(N_{1}+N_{2})^{2}}+ \frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2}}}{m_{1}^{ 3/2}\eta_{\tau}^{E}(c_{12})+m_{2}^{3/2}\eta_{\tau^{\prime}}^{E}(c_{21})}m_{1} ^{3/2}\eta_{\tau}^{E}(c_{12})-E_{1}\end{split} \tag{17}\]
**Remark 2**.: _For later purposes, we remark that in the space-homogeneous case the system of equations reduces to_
\[\begin{split}&\partial_{t}n_{1}=0,\qquad\qquad\qquad\partial_{t}n_ {2}=0,\\ &\partial_{t}P_{1}=\nu_{12}n_{2}(P_{12}-P_{1}),\quad\partial_{t}P_ {2}=\nu_{21}n_{1}(P_{21}-P_{2}),\\ &\partial_{t}E_{1}=\nu_{12}n_{2}(E_{12}-E_{1}),\quad\partial_{t}E _{2}=\nu_{21}n_{1}(E_{21}-E_{2}).\end{split} \tag{18}\]
**Remark 3**.: _In the classical case (\(\tau=\tau^{\prime}=0\)) we get from the relationship (9)_
\[\frac{n_{1}}{n_{2}}=\frac{m_{1}^{3/2}\eta_{0}(c_{12})}{m_{2}^{3/2}\eta_{0}(c_{ 21})}=\frac{m_{1}^{3/2}\int_{\mathbb{R}^{3}}\frac{1}{e^{|\tau|^{2}+c_{12}}}dp} {m_{2}^{3/2}\int_{\mathbb{R}^{3}}\frac{1}{e^{|\tau|^{2}+c_{21}}}dp}=\frac{m_{ 1}^{3/2}}{m_{2}^{3/2}}\frac{e^{-c_{12}}}{e^{-c_{21}}}\]
_by computing the integrals explicitly. Using this, we can calculate_
\[\frac{m_{1}^{3/2}\eta_{0}^{E}(c_{12})}{m_{1}^{3/2}\eta_{0}^{E}(c_{12})+m_{2}^{ 3/2}\eta_{0}^{E}(c_{21})}=\frac{m_{1}^{3/2}e^{-c_{12}}}{m_{1}^{3/2}e^{-c_{12} }+m_{2}^{3/2}e^{-c_{21}}}=\frac{n_{1}}{n_{1}+n_{2}}\]
_and obtain_
\[E_{12}-E_{1} =\frac{n_{1}n_{2}}{n_{1}+n_{2}}\left(\frac{E_{2}}{n_{2}}-\frac{E _{1}}{n_{1}}+\frac{m_{1}-m_{2}}{(N_{1}+N_{2})^{2}}\frac{1}{2}|P_{1}+P_{2}|^{2 }\right)\] \[=\frac{n_{1}n_{2}}{n_{1}+n_{2}}\Bigg{(}\frac{E_{2}}{n_{2}}-\frac{ 1}{2}\frac{|P_{2}|^{2}}{n_{2}N_{2}}-\frac{E_{1}}{n_{1}}+\frac{|P_{1}|^{2}}{n_ {1}N_{1}}+m_{1}m_{2}\frac{n_{1}N_{1}+2n_{1}N_{2}+n_{2}N_{2}}{(N_{1}+N_{2})^{2} }\frac{1}{2}\frac{|P_{2}|^{2}}{N_{2}^{2}}\] \[-m_{1}m_{2}\frac{n_{1}N_{1}+2N_{1}n_{2}+n_{2}N_{2}}{(N_{1}+N_{2})^ {2}}\frac{1}{2}\frac{|P_{1}|^{2}}{N_{1}^{2}}+m_{1}m_{2}\frac{(m_{1}-m_{2})n_{1 }n_{2}}{(N_{1}+N_{2})^{2}}\frac{P_{1}}{N_{1}}\cdot\frac{P_{2}}{N_{2}}\Bigg{)}\] \[=\frac{n_{1}n_{2}}{n_{1}+n_{2}}\Bigg{(}\frac{E_{2}}{n_{2}}-\frac{ 1}{2}\frac{|P_{2}|^{2}}{n_{2}N_{2}}-\frac{E_{1}}{n_{1}}+\frac{|P_{1}|^{2}}{n_ {1}N_{1}}+m_{1}m_{2}\frac{n_{1}N_{1}+n_{2}N_{2}}{(N_{1}+N_{2})^{2}}\frac{1}{2 }\left(\frac{|P_{2}|^{2}}{N_{2}^{2}}-\frac{|P_{1}|^{2}}{N_{1}^{2}}\right)\] \[+\left(\frac{P_{2}}{N_{2}}-\frac{P_{1}}{N_{1}}\right)\cdot\left( \frac{P_{1}}{n_{1}}+\frac{P_{2}}{n_{2}}\right)\Bigg{)}.\]
### Decay rate for the velocities and kinetic temperatures in the space-homogeneous case
In equilibrium, both distribution functions \(f_{1}\) and \(f_{2}\) will finally share the same velocity and the same temperature. We can make those decay rates explicit in the space-homogeneous case.
**Theorem 3.3.1**.: _Assume \(\nu_{12}n_{1}=\nu_{21}n_{2}\) and \(\nu_{12},\nu_{21}\) independent of \(t\). In the space-homogeneous case of (1), we have the following convergence rate for the momentum_
\[\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}}=e^{-\frac{\nu_{12}n_{2}N_{2}+\nu_{21}n _{1}N_{1}}{N_{1}+N_{2}}t}\left(\frac{P_{1}(0)}{N_{1}}-\frac{P_{2}(0)}{N_{2}} \right). \tag{19}\]
Proof.: We start with calculating
\[\partial_{t}\left(\frac{P_{1}}{N_{1}}\right)=\frac{1}{N_{1}} \partial_{t}P_{1}=\nu_{12}n_{2}\frac{1}{N_{1}}(P_{12}-P_{1}).\]
The first equality uses that \(N_{1}=m_{1}n_{1}\) is constant in the space-homogeneous case (18). Then, we used the macroscopic equation for the time evolution of \(P_{1}\) from (18) Using the expression (14) for the exchange of momentum leads to
\[\partial_{t}\left(\frac{P_{1}}{N_{1}}\right)=\nu_{12}n_{2}\frac{N _{2}}{N_{1}+N_{2}}\left(\frac{P_{2}}{N_{2}}-\frac{P_{1}}{N_{1}}\right). \tag{20}\]
In a similar way, we can compute
\[\partial_{t}\left(\frac{P_{2}}{N_{2}}\right)=\nu_{21}n_{1}\frac{N _{1}}{N_{1}+N_{2}}\left(\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}}\right).\]
If we subtract the two equations we obtain
\[\partial_{t}\left(\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}}\right)=- \frac{\nu_{12}n_{2}N_{2}+\nu_{21}n_{1}N_{1}}{N_{1}+N_{2}}\left(\frac{P_{1}}{N _{1}}-\frac{P_{2}}{N_{2}}\right)\]
It follows the result
\[\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}}=e^{-\frac{\nu_{12}n_{2}N_{ 2}+\nu_{21}n_{1}N_{1}}{N_{1}+N_{2}}t}\left(\frac{P_{1}(0)}{N_{1}}-\frac{P_{2} (0)}{N_{2}}\right).\]
**Remark 4**.: _Using the relationship \(P_{j}=N_{j}b_{j}\) from (7), one can equivalently write_
\[b_{1}-b_{2}=e^{-\frac{\nu_{12}n_{1}N_{2}+\nu_{21}n_{1}N_{1}}{N_{1}+N_{2}}t}(b_ {1}(0)-b_{2}(0)).\]
We continue with the convergence rates of the quantities \(\frac{E_{1}}{n_{1}}-\frac{1}{2}\frac{|P_{1}|^{2}}{n_{1}N_{1}}\) and \(\frac{E_{2}}{n_{2}}-\frac{1}{2}\frac{|P_{1}|^{2}}{n_{2}N_{2}}\). In the classical case, this quantity corresponds to the temperature.
**Theorem 3.3.2**.: _Let \(\nu_{12}n_{1}=\nu_{21}n_{2}=:\tilde{\nu}\) and \(\tilde{\nu}\) be independent of \(t\). In the space-homogeneous case of (1), it is_
\[\begin{split}&\left(\frac{E_{1}}{n_{1}}-\frac{1}{2}\frac{|P_{1}|^{ 2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}}{n_{2}}-\frac{1}{2}\frac{|P_{2}|^{2}} {n_{2}N_{2}}\right)\\ &=e^{-\tilde{\nu}t}\left(\left(\frac{E_{1}(0)}{n_{1}}-\frac{1}{2} \frac{|P_{1}(0)|^{2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}(0)}{n_{2}}-\frac{1} {2}\frac{|P_{2}(0)|^{2}}{n_{2}N_{2}}\right)\right)+\frac{1}{2}m_{1}m_{2}\frac{ n_{2}N_{2}-n_{1}N_{1}}{(N_{1}+N_{2})^{2}}e^{-\tilde{\nu}t}(1-e^{-\tilde{\nu}t}) \left|\frac{P_{2}(0)}{N_{2}}-\frac{P_{1}(0)}{N_{1}}\right|^{2}\\ &+\left(E_{1}(0)+E_{2}(0)-\frac{1}{2}\frac{|P_{1}(0)+P_{2}(0)|^{2 }}{N_{1}+N_{2}}\right)e^{-\tilde{\nu}t}\int_{0}^{t}e^{\tilde{\nu}s}\left[ \frac{\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12}(s))}{n_{1}}-\frac{m_{2}^{3/2}H_{ \tau}^{E}(c_{12}(s))}{n_{2}}}{\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2} H_{\tau^{\prime}}^{E}(c_{21})}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{ \tau^{\prime}}^{E}(c_{21})}\right]ds.\end{split} \tag{21}\]
Proof.: We compute
\[\partial_{t}\left(\frac{E_{1}}{n_{1}}-\frac{1}{2}\frac{|P_{1}|^{2}}{n_{1}N_{1} }\right)=\frac{1}{n_{1}}\partial_{t}\left(E_{1}\right)-\frac{P_{1}}{n_{1}} \partial_{t}\left(\frac{P_{1}}{N_{1}}\right)=\tilde{\nu}\left(\frac{E_{12}}{n _{1}}-\frac{E_{1}}{n_{1}}-\frac{P_{1}}{n_{1}}\frac{N_{2}}{N_{1}+N_{2}}\left( \frac{P_{2}}{N_{2}}-\frac{P_{1}}{N_{1}}\right)\right). \tag{22}\]
using that \(n_{1}\) is constant in the space-homogeneous case, equation (18) and inserting (20).
Applying (17), it follows
\[\begin{split}&\partial_{t}\left(\frac{E_{1}}{n_{1}}-\frac{1}{2} \frac{|P_{1}|^{2}}{n_{1}N_{1}}\right)\\ &=\tilde{\nu}\left(\frac{1}{2}\frac{m_{1}|P_{1}+P_{2}|^{2}}{(N_{ 1}+N_{2})^{2}}+\frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1} +N_{2}}}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau^{\prime}}^{E}(c_{ 21})}\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12})}{n_{1}}-\frac{E_{1}}{n_{1}}-\frac{P _{1}}{n_{1}}\frac{N_{2}}{N_{1}+N_{2}}\left(\frac{P_{2}}{N_{2}}-\frac{P_{1}}{N _{1}}\right)\right).\end{split} \tag{23}\]
Analogously for species 2 we obtain
\[\begin{split}&\partial_{t}\left(\frac{E_{2}}{n_{2}}-\frac{1}{2} \frac{|P_{2}|^{2}}{n_{2}N_{2}}\right)\\ &=\tilde{\nu}\left(\frac{1}{2}\frac{m_{2}|P_{1}+P_{2}|^{2}}{(N_{ 1}+N_{2})^{2}}+\frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N _{2}}}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau^{\prime}}^{E}(c_{21 })}\frac{m_{2}^{3/2}H_{\tau}^{E}(c_{21})}{n_{2}}-\frac{E_{2}}{n_{2}}-\frac{P_{2 }}{n_{2}}\frac{N_{1}}{N_{1}+N_{2}}\left(\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}} \right)\right).\end{split}\]
Subtracting both, we get
\[\begin{split}&\partial_{t}\left(\left(\frac{E_{1}}{n_{1}}-\frac{1}{2} \frac{|P_{1}|^{2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}}{n_{2}}-\frac{|P_{2}|^{ 2}}{n_{2}N_{2}}\right)\right)\\ &=\tilde{\nu}(\frac{E_{2}}{n_{2}}-\frac{E_{1}}{n_{1}}+\frac{ \frac{1}{2}(m_{1}-m_{2})n_{1}N_{1}+N_{2}(N_{1}+N_{2})}{(N_{1}+N_{2})^{2}}\frac{|P _{1}|^{2}}{n_{1}N_{1}}+\frac{\frac{1}{2}(m_{1}-m_{2})n_{2}N_{2}-N_{1}(N_{1}+N_{2} )}{(N_{1}+N_{2})^{2}}\frac{|P_{2}|^{2}}{n_{2}N_{2}}\\ &+\frac{(n_{1}N_{1}-n_{2}N_{2})}{(N_{1}+N_{2})^{2}n_{1}n_{2}}P_{1} \cdot P_{2}+\frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2} }}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau^{\prime}}^{E}(c_{21})} \left[\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12})}{n_{1}}-\frac{m_{2}^{3/2}H_{\tau^{ \prime}}^{E}(c_{21})}{n_{2}}\right]).\end{split}\]
This can be written as
\[\partial_{t}\left(\left(\frac{E_{1}}{n_{1}}-\frac{1}{2}\frac{|P_{1}|^ {2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}}{n_{2}}-\frac{1}{2}\frac{|P_{2}|^{2}}{ n_{2}N_{2}}\right)\right)\] \[=\tilde{\nu}(-\left(\left(\frac{E_{1}}{n_{1}}-\frac{1}{2}\frac{|P_ {1}|^{2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}}{n_{2}}-\frac{1}{2}\frac{|P_{2} |^{2}}{N_{2}}\right)\right)+\frac{1}{2}\frac{m_{2}(n_{2}N_{2}-n_{1}N_{1})}{(N_ {1}+N_{2})^{2}}\frac{|P_{1}|^{2}}{n_{1}N_{1}}+\frac{1}{2}\frac{m_{1}(n_{2}N_{2 }-n_{1}N_{1})}{(N_{1}+N_{2})^{2}}\frac{|P_{2}|^{2}}{n_{2}N_{2}}\] \[+P_{1}\cdot P_{2}\frac{n_{1}N_{1}-n_{2}N_{2}}{(N_{1}+N_{2})^{2}n_ {1}n_{2}}+\frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_{2}} }{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau^{\prime}}^{E}(c_{21})} \left[\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12})}{n_{1}}-\frac{m_{2}^{3/2}H_{\tau^{ \prime}}^{E}(c_{21})}{n_{2}}\right])\] \[=\tilde{\nu}(-\left(\left(\frac{E_{1}}{n_{1}}-\frac{|P_{1}|^{2}}{ n_{1}N_{1}}\right)-\left(\frac{E_{2}}{n_{2}}-\frac{|P_{2}|^{2}}{n_{2}N_{2}} \right)\right)+\frac{1}{2}m_{1}m_{2}\frac{n_{2}N_{2}-n_{1}N_{1}}{(N_{1}+N_{2} )^{2}}\Bigg{|}\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{2}}\Bigg{|}^{2}\] \[+\frac{(E_{1}+E_{2})-\frac{1}{2}\frac{|P_{1}+P_{2}|^{2}}{N_{1}+N_ {2}}}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau^{\prime}}^{E}(c_{21} )}\left[\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12})}{n_{1}}-\frac{m_{2}^{3/2}H_{\tau }^{E}(c_{21})}{n_{2}}\right]).\]
Now, Duhamels formula gives
\[\left(\frac{E_{1}}{N_{1}}-\frac{1}{2}\frac{|P_{1}|^{2}}{m_{1}N_{1 }^{2}}\right)-\left(\frac{E_{2}}{N_{2}}-\frac{1}{2}\frac{|P_{2}|^{2}}{n_{2}N_ {2}}\right)\] \[=e^{-\tilde{\nu}t}\left(\left(\frac{E_{1}(0)}{n_{1}}-\frac{1}{2} \frac{|P_{1}(0)|^{2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}(0)}{n_{2}}-\frac{ 1}{2}\frac{|P_{2}(0)|^{2}}{n_{2}N_{2}}\right)\right)+\frac{1}{2}m_{1}m_{2} \frac{n_{2}N_{2}-n_{1}N_{1}}{(N_{1}+N_{2})^{2}}e^{-\tilde{\nu}t}\int_{0}^{t}e^ {\tilde{\nu}s}\Bigg{|}\frac{P_{2}(s)}{N_{2}}-\frac{P_{1}(s)}{N_{1}}\Bigg{|}^ {2}ds\] \[+\left(E_{1}(0)+E_{2}(0)-\frac{1}{2}\frac{|P_{1}(0)+P_{2}(0)|^{2} }{N_{1}+N_{2}}\right)e^{-\tilde{\nu}t}\int_{0}^{t}e^{\tilde{\nu}s}\left[\frac{ \frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12}(s))}{n_{1}}-\frac{m_{2}^{3/2}H_{\tau}^{E }(c_{21}(s))}{n_{2}}}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau^{ \prime}}^{E}(c_{21})}\right]ds\] \[=e^{-\tilde{\nu}t}\left(\left(\frac{E_{1}(0)}{n_{1}}-\frac{1}{2} \frac{|P_{1}(0)|^{2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}(0)}{n_{2}}-\frac{ 1}{2}\frac{|P_{2}(0)|^{2}}{n_{2}N_{2}}\right)\right)+\frac{1}{2}m_{1}m_{2} \frac{n_{2}N_{2}-n_{1}N_{1}}{(N_{1}+N_{2})^{2}}e^{-\tilde{\nu}t}\int_{0}^{t}e^ {\tilde{\nu}s}e^{-2\tilde{\nu}s}\Bigg{|}\frac{P_{2}(0)}{N_{2}}-\frac{P_{1}(0)} {N_{1}}\Bigg{|}^{2}ds\] \[+\left(E_{1}(0)+E_{2}(0)-\frac{1}{2}\frac{|P_{1}(0)+P_{2}(0)|^{2} }{N_{1}+N_{2}}\right)e^{-\tilde{\nu}t}\int_{0}^{t}e^{\tilde{\nu}s}\left[\frac{ \frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12}(s))}{n_{1}}-\frac{m_{2}^{3/2}H_{\tau}^{E }(c_{21}(s))}{n_{2}}}{m_{1}^{3/2}H_{\tau}^{E}(c_{12}(s))+m_{2}^{3/2}H_{\tau^{ \prime}}^{E}(c_{21}(s))}\right]ds.\]
We get the last but one equality by using theorem 3.3.1 and the last equality by computing the integral.
**Remark 5**.: _In the classical case we get, as in remark 3, the relationship_
\[\frac{n_{1}}{n_{2}}=\frac{m_{1}^{3/2}}{m_{2}^{3/2}}\frac{e^{-c_{12}}}{e^{-c_{21}}}.\]
_If we then compute the bracket_
\[\left[\frac{\frac{m_{1}^{3/2}H_{\tau}^{E}(c_{12}(s))}{n_{1}}-\frac{m_{2}^{3/2}H_{ \tau}^{E}(c_{21}(s))}{n_{2}}}{m_{1}^{3/2}H_{\tau}^{E}(c_{12})+m_{2}^{3/2}H_{\tau ^{\prime}}^{E}(c_{21})}\right]\]
_for \(\tau=\tau^{\prime}=0\), we obtain by an explicit computation of \(H_{\tau}^{E}\)_
\[\frac{\frac{m_{1}^{3/2}}{n_{1}}\frac{3}{2}e^{-c_{12}}\pi^{3/2}- \frac{m_{2}^{3/2}}{n_{2}}\frac{3}{2}e^{-c_{21}}\pi^{3/2}}{m_{1}^{3/2}\frac{3}{ 2}e^{-c_{12}}\pi^{3/2}+\frac{m_{2}^{3/2}}{2}\frac{3}{2}e^{-c_{21}}\pi^{3/2}}= \frac{\frac{m_{1}^{3/2}}{n_{1}}e^{-c_{12}}-\frac{m_{2}^{3/2}}{n_{2}}e^{-c_{21} }}{m_{1}^{3/2}e^{-c_{12}}+m_{2}^{3/2}e^{-c_{21}}}=0\]
_So in the classical-classical case, the last term of (21) vanishes, and we obtain the temperature convergence rate_
\[\left(\frac{E_{1}}{n_{1}}-\frac{1}{2}\frac{|P_{1}|^{2}}{n_{1}N_{1 }}\right)-\left(\frac{E_{2}}{n_{2}}-\frac{1}{2}\frac{|P_{2}|^{2}}{n_{2}N_{2}}\right)\] \[=e^{-\tilde{\rho}t}\left(\left(\frac{E_{1}(0)}{n_{1}}-\frac{1}{2} \frac{|P_{1}(0)|^{2}}{n_{1}N_{1}}\right)-\left(\frac{E_{2}(0)}{n_{2}}-\frac{1} {2}\frac{|P_{2}(0)|^{2}}{n_{2}N_{2}}\right)\right)+\frac{1}{2}m_{1}m_{2}\frac{ n_{2}N_{2}-n_{1}N_{1}}{(N_{1}+N_{2})^{2}}e^{-\tilde{\rho}t}(1-e^{-\tilde{\rho}t}) \left|\frac{P_{2}(0)}{N_{2}}-\frac{P_{1}(0)}{N_{1}}\right|^{2}.\]
## 4 Motivation of the structure of the local equilibria
In this section, we motivate the form of the local equilibria in (1) This will motivate later the choice of our numerical scheme. For this, it will be convenient to define the following notations. We denote
\[\begin{split}\mathcal{K}_{11}=(e^{-\alpha_{11}\cdot\mathbf{p}_{1} }+\tau)^{-1},&\mathcal{K}_{22}=(e^{-\alpha_{22}\cdot\mathbf{p}_ {2}}+\tau^{\prime})^{-1},\\ \mathcal{K}_{12}=(e^{-\alpha_{12}\cdot\mathbf{p}_{1}}+\tau)^{-1},& \mathcal{K}_{12}=(e^{-\alpha_{21}\cdot\mathbf{p}_{2}}+\tau^{\prime})^{-1} \end{split} \tag{24}\]
where
\[\mathbf{p}_{k}(p):=(1,p,\frac{|p|^{2}}{2m_{k}})^{\top},\quad k=1,2 \tag{25}\]
and the parameters \((a_{1},a_{2},a,b_{1},b_{2},b,c_{1},c_{2},c_{12},c_{21})\) can be mapped one-to-one to \(\alpha_{kj}=(\alpha_{k}^{0}j,\alpha_{kj}^{1},\alpha_{kj}^{2}),\ k,j=1,2\). Further, we recall the function \(h_{\tau}\) defined by (11) is convex, therefore it follows
\[h_{\tau}(z)\geq h_{\tau}(y)+\ln(y)(z-y), \tag{26}\]
for all \(y,z>0\) if \(\tau=0,-1\) and \(0<y,z<1\) if \(\tau=+1\).
### The one species local equilibria
We seek a solution of the entropy minimization problem
\[\min_{g\in\chi_{k}}\int h_{\tau}(g)dv,\quad k\in\{1,2\}, \tag{27}\]
where
\[\chi_{k}=\left\{g\ \Big{|}\ g\geq 0,\,(1+|p|^{2})g\in L^{1}(\mathbb{R}^{3}), \,\int\mathbf{p}_{k}(p)(g-f_{k})dp=0\right\}. \tag{28}\]
The choice of the set \(\chi_{k}\) ensures the conservation properties (4) for intra-species collisions. Indeed, by standard optimization theory, any critical point \((\mathcal{K}_{kk},\lambda^{kk})\) of the Lagrange functional \(L_{k}\colon\chi_{k}\times\mathbb{R}^{5}\to\mathbb{R}\), given by
\[L_{k}(g,\alpha)=\int h(g)dp-\alpha\cdot\int\mathbf{p}_{k}(p)(g-f_{k})dp, \tag{29}\]
satisfies the first-order optimality condition
\[\frac{\delta L_{k}}{\delta g}(\mathcal{K}_{kk},\lambda^{kk})=(\ln\frac{ \mathcal{K}_{kk}}{1-\tau_{k}\mathcal{K}_{kk}}-\lambda^{kk}\cdot\mathbf{p}_{k}( p))=0, \tag{30}\]
with \(\tau_{1}=\tau\) and \(\tau_{2}=\tau^{\prime}\). This implies then that
\[\mathcal{K}_{kk}=(\exp\left(\lambda^{kk}\cdot\mathbf{p}_{k}(p)\right)+\tau_{k })^{-1}. \tag{31}\]
Theorem 3.1.1 shows in a rigorous way that there exists a unique function of the form (31) that satisfies these constraints. Therefore, we can prove the following theorem.
**Theorem 4.1.1**.: _The local equilibrium \(\mathcal{K}_{kk}\) is the unique minimizer of (27)._
Proof.: According to (26)
\[h_{\tau_{k}}(g)\geq h_{\tau_{k}}(\mathcal{K}_{kk})+\lambda^{kk}\cdot\mathbf{ p}_{k}(g-\mathcal{K}_{kk}), \tag{32}\]
point-wise in \(p\). Thus it follows that for all \(g\in\chi_{k}\),
\[\int h_{\tau_{k}}(g)dp\geq\int h_{\tau_{k}}(\mathcal{K}_{kk})dp+\int\lambda^{ kk}\cdot\mathbf{p}_{k}(g-\mathcal{K}_{kk})dp=\int h_{\tau_{k}}(\mathcal{K}_{kk})dp \tag{33}\]
Hence \(\mathcal{K}_{kk}\) is a minimizer of (27), and uniqueness follows directly from the strict convexity of \(h_{\tau_{k}}\).
### The mixture local equilibria
For interactions between species, we seek a solution of the entropy minimization problem
\[\min_{g_{1},g_{2}\in\chi_{12}}\int h_{\tau}(g_{1})dp+\int h_{\tau^{\prime}}(g_ {2})dp, \tag{34}\]
where
\[\chi_{12}=\Bigg{\{}(g_{1},g_{2}) \Big{|} g_{1},g_{2}>0,\,(1+|p|^{2})g_{1},\,(1+|p|^{2})g_{2}\in L^{1}( \mathbb{R}^{3}), \tag{35}\] \[\int g_{1}dp=\int f_{1}dp,\quad\int g_{2}dp=\int f_{2}dp,\] \[\int\left(\frac{p}{\frac{|p|^{2}}{2m_{1}}}\right)(g_{1}-f_{1})dp +\int\left(\frac{p}{\frac{|p|^{2}}{2m_{2}}}\right)(g_{2}-f_{2})dp=0\Bigg{\}}.\]
Here, \(\chi_{12}\) is chosen such that the constraints (5) for inter-species collisions are satisfied. Similar to the case of intra-species collisions, we consider the Lagrange functional \(L\colon\chi\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}\)
\[L(g_{1},g_{2},\alpha_{0}^{1},\alpha_{0}^{2},\alpha_{1},\alpha_{2}) =\int h(g_{1})dp+\int h(g_{2})dp \tag{36}\] \[-\alpha_{0}^{1}\int(g_{1}-f_{1})dp-\alpha_{0}^{2}\int(g_{2}-f_{2 })dp\] \[-\alpha_{1}\cdot\left(\int p(g_{1}-f_{1})dp+\int p(g_{2}-f_{2})dp\right)\] \[-\alpha_{2}\left(\int\frac{|p|^{2}}{2m_{1}}(g_{1}-f_{1})dp+\int \frac{|p|^{2}}{2m_{2}}(g_{2}-f_{2})dp\right).\]
Any critical point \((\mathcal{K}_{12},\mathcal{K}_{21},\lambda_{0}^{1},\lambda_{0}^{2},\lambda_{1 },\lambda_{2})\) of \(L\) satisfies the first-order optimality conditions
\[\frac{\delta L}{\delta g_{1}}(\mathcal{K}_{12},\mathcal{K}_{21}, \lambda_{0}^{1},\lambda_{0}^{2},\lambda_{1},\lambda_{2}) =\ln\frac{\mathcal{K}_{12}}{1-\tau\mathcal{K}_{12}}-\lambda^{12} \cdot\mathbf{p}_{1}(p)=0, \tag{37}\] \[\frac{\delta L}{\delta g_{2}}(\mathcal{K}_{12},\mathcal{K}_{21}, \lambda_{0}^{1},\lambda_{0}^{2},\lambda_{1},\lambda_{2}) =\ln\frac{\mathcal{K}_{21}}{1-\tau\mathcal{K}_{21}}-\lambda^{21} \cdot\mathbf{p}_{2}(p)=0, \tag{38}\]
where \(\lambda^{12}=(\lambda_{0}^{1},\lambda_{1},\lambda_{2})\) and \(\lambda^{21}=(\lambda_{0}^{2},\lambda_{1},\lambda_{2})\). Therefore
\[\mathcal{K}_{12} =(\exp(\lambda^{12}\cdot\mathbf{p}_{1}(p))+\tau)^{-1}, \tag{39}\] \[\mathcal{K}_{21} =(\exp(\lambda^{21}\cdot\mathbf{p}_{2}(p))+\tau)^{-1}. \tag{40}\]
Since we only require conservation of the combined momentum and kinetic energy, there is only one Lagrange multiplier for the momentum constraint and one Lagrange multiplier for the energy constraint. Hence, \(\lambda_{1}^{12}=\lambda_{1}^{21}\) and \(\lambda_{2}^{12}=\lambda_{2}^{21}\) in (40). When we are in the classical case (\(\tau=0\)), this restriction is the same as the one used in [21], but more restrictive than the model in [28].
Theorem 3.1.2 shows the existence of functions of the form (40) which satisfy the constraints in (4) and (5). As in the single species case, it follows that these functions are unique minimizer of the corresponding minimization problem.
**Theorem 4.2.1**.: \((\mathcal{K}_{12},\mathcal{K}_{21})\) _as defined in (40) is the unique minimizer of (34)._
Proof.: According to (26)
\[h_{\tau}(g)\geq h_{\tau_{k}}(\mathcal{K}_{kj})+\lambda^{kj}\cdot\mathbf{p}_{k }(g-\mathcal{K}_{kj}), \tag{41}\]
point-wise in \(p\), for any measurable function \(g\) and \(k,j\in\{1,2\}\). Therefore it follows that for any measureable functions \(g_{1}\) and \(g_{2}\),
\[\int h_{\tau}(g_{1})dp+\int h_{\tau^{\prime}}(g_{2})dp\geq\int h_{ \tau}(\mathcal{K}_{12})dp+\int h_{\tau^{\prime}}(\mathcal{K}_{21})dp\] \[\qquad\qquad\qquad\qquad+\lambda^{12}\cdot\int\mathbf{p}_{1}(g_{ 1}-\mathcal{K}_{12})dp+\lambda^{21}\cdot\int\mathbf{p}_{2}(g_{2}-\mathcal{K}_ {21})dp. \tag{42}\]
Since \(\lambda_{1}^{12}=\lambda_{1}^{21}\) and \(\lambda_{2}^{12}=\lambda_{2}^{21}\),
\[\lambda^{12}\cdot\int\mathbf{p}_{1}(g_{1}-\mathcal{K}_{12})dp+ \lambda^{21}\cdot\int\mathbf{p}_{2}(g_{2}-\mathcal{K}_{21})dp=\lambda_{0}^{12 }\int(g_{1}-\mathcal{K}_{12})dp+\lambda_{0}^{21}\int(g_{2}-\mathcal{K}_{21})dp\] \[\qquad\qquad\qquad+\lambda_{1}^{12}\cdot\left(\int p(g_{1}- \mathcal{K}_{12})dp+\int p(g_{2}-\mathcal{K}_{21})dp\right)+\lambda_{2}^{12} \left(\int\frac{|p|^{2}}{2m_{1}}(g_{1}-\mathcal{K}_{12})dp+\int\frac{|p|^{2} }{2m_{2}}(g_{2}-\mathcal{K}_{21})dp\right). \tag{43}\]
If \((g_{1},g_{2})\) and \((\mathcal{K}_{12},\mathcal{K}_{21})\) are elements of \(\chi_{12}\), then the constraints in (35) imply that each of the terms above is zero. In such cases, (42) reduces to
\[\int h_{\tau}(g_{1})dp+\int h_{\tau^{\prime}}(g_{2})dp\geq\int h_{ \tau}(\mathcal{K}_{12})dp+\int h_{\tau^{\prime}}(\mathcal{K}_{21})dp, \tag{44}\]
which shows that \((\mathcal{K}_{12},\mathcal{K}_{21})\) solves (34). Since \(h_{\tau_{k}}\) is strictly convex, it follows that this solution is unique.
## 5 Numerical scheme
### Time discretization
Let \(k,j=1,2\) and \(k\neq j\). We write (1) as
\[\partial_{t}f_{k}+\mathcal{T}_{k}(f_{k})=\mathcal{R}_{k}(f_{k},f _{j}) \tag{45}\]
with the combined relaxation operator
\[\mathcal{R}_{k}(f_{k},f_{j})=\mathcal{R}_{kk}+\mathcal{R}_{kj}= \nu_{kk}n_{k}\left(\mathcal{K}_{kk}-f_{k}\right)+\nu_{kj}n_{j}\left(\mathcal{ K}_{kj}-f_{k}\right) \tag{46}\]
and the transport operator
\[\mathcal{T}_{k}(f_{k})=\frac{p}{m_{k}}\cdot\nabla_{x}f_{k}. \tag{47}\]
In the following, for simplicity, we assume that the collision frequencies \(\tilde{\nu}_{kk}:=\nu_{kk}n_{k}\) and \(\tilde{\nu}_{kj}:=\nu_{kj}n_{k}\) are constant in \(x\) and \(t\). But an extension to an \(x\) and \(t\) dependence of the collision frequency would also be possible. Large collision frequencies result in a stiff relaxation operator such that an implicit time discretization for the relaxation part is a convenient choice. So we pursue implicit-explicit (IMEX) schemes where \(\mathcal{R}_{k}\) is treated implicitly and \(\mathcal{T}_{i}\) is treated explicitly.
Given \(t_{\ell}=\ell\Delta t\) for \(\ell\in\mathbb{N}_{0}\), a simple update of \(f_{k}^{\ell}\approx f_{k}(x,p,t_{\ell})\) from \(t_{\ell}\) to \(t_{\ell+1}\) uses the approximation
\[\mathcal{R}_{k}(f_{k}^{\ell+1},f_{j}^{\ell+1})\approx\tilde{\nu} _{kk}\left(\mathcal{K}_{kk}^{\ell+1}-f_{k}^{\ell+1}\right)+\tilde{\nu}_{kj} \left(\mathcal{K}_{kj}^{\ell+1}-f_{k}^{\ell+1}\right), \tag{48}\]
where \(\mathcal{K}_{kk}^{\ell+1}\) and \(\mathcal{K}_{kj}^{\ell+1}\) are discrete target functions that, as described in Section 5.1.3, depend on \(f_{k}^{\ell+1}\) and \(f_{j}^{\ell+1}\) via the solution of a convex minimization problem that is inspired by the work in [23]. By this procedure, \(\mathcal{K}_{kk}\) and \(\mathcal{K}_{kj}\) are evaluated exactly at the next time step (up to numerical tolerances) which results in the preservation of conservation properties, and the first-order version inherits additional properties from the continuum model.
#### 5.1.1 First-order splitting
We split the relaxation and transport operators in (1).
Relaxation.We perform the relaxation step in each spatial cell by a backward Euler method
\[\frac{f_{k}^{*}-f_{k}^{\ell}}{\Delta t}=\mathcal{R}_{k}(f_{k}^{*},f_{j}^{*}), \tag{49}\]
which can be rewritten into the convex combination
\[f_{k}^{*}=d_{k}f_{k}^{\ell}+d_{k}\Delta t(\tilde{\nu}_{kk}\mathcal{ K}_{kk}^{*}+\tilde{\nu}_{kj}\mathcal{K}_{kj}^{*}) \tag{50}\]
with
\[d_{k}=\frac{1}{1+\Delta t(\tilde{\nu}_{kk}+\tilde{\nu}_{kj})}. \tag{51}\]
The equation (50) represents an explicit update formula for \(f_{k}^{*}\) provided that \(\mathcal{K}_{kk}^{*}\) and \(\mathcal{K}_{kj}^{*}\) can be expressed as functions of \(f_{k}^{\ell}\). In Section 5.1.3 we show how to determine \(\mathcal{K}_{kk}^{*}\) and \(\mathcal{K}_{kj}^{*}\) in a structure-preserving way.
Transport.We compute the transport in \(x\) for \(f_{k}^{\ell+1}\) by a forward Euler method with initial data \(f_{k}^{*}\):
\[\frac{f_{k}^{\ell+1}-f_{k}^{*}}{\Delta t}+\mathcal{T}_{k}(f_{k}^{*})=0. \tag{52}\]
Details on the numerical approximation of \(\mathcal{T}_{k}\) are presented in section 5.2.
#### 5.1.2 Second-order IMEX Runge-Kutta
We use the following Butcher tableaux [2] for a second-order approach
\[\begin{array}{c|cccc}0&&&&0&&\\ \gamma&0&\gamma&&\\ 1&0&1-\gamma&\gamma&\\ \hline&0&1-\gamma&\gamma&\\ \end{array}\qquad\qquad\qquad\begin{array}{c|cccc}0&&&&\\ \gamma&\gamma&&\\ 1&\delta&1-\delta&0\\ \hline&\delta&1-\delta&0\\ \end{array} \tag{53}\]
with
\[\gamma=1-\frac{\sqrt{2}}{2}\quad\text{and}\quad\delta=1-\frac{1}{2\gamma}. \tag{54}\]
The left table applies to the relaxation part, and the right table applies to the transport terms. This IMEX Runge-Kutta scheme is L-stable and globally stiffly accurate.
Applying this method to (1) and using the constants
\[d_{k}=\frac{1}{1+\gamma\Delta t(\tilde{\nu}_{kk}+\tilde{\nu}_{kj})}, \tag{55}\]
we can write the stages in the scheme as convex combination of three terms
\[f_{k}^{(1)} =d_{k}G_{k}^{(1)}+d_{k}\gamma\Delta t\,\tilde{\nu}_{kk}\mathcal{ K}_{kk}^{(1)}+d_{k}\gamma\Delta t\,\tilde{\nu}_{kj}\mathcal{K}_{kj}^{(1)} \tag{56a}\] \[f_{k}^{(2)} =d_{k}G_{k}^{(2)}+d_{k}\gamma\Delta t\,\tilde{\nu}_{kk}\mathcal{ K}_{kk}^{(2)}+d_{k}\gamma\Delta t\,\tilde{\nu}_{kj}\mathcal{K}_{kj}^{(2)},\] (56b) \[f_{k}^{\ell+1} =f_{k}^{(2)} \tag{56c}\]
where
\[G_{k}^{(1)} =f_{k}^{\ell}-\Delta t\,\gamma\,\mathcal{T}_{k}(f_{k}^{\ell}) \tag{57a}\] \[G_{k}^{(2)} =f_{k}^{\ell}-\Delta t\,\delta\,\mathcal{T}_{k}(f_{k}^{\ell})- \Delta t\,(1-\delta)\mathcal{T}_{k}(f_{k}^{(1)})+\Delta t\,(1-\gamma)\mathcal{ R}_{k}(f_{k}^{(1)},f_{j}^{(1)}) \tag{57b}\]
depend on known data. For each stage, we have to determine the corresponding values of the target functions in order to update the distribution functions. In the following section, we explain how this can be done.
#### 5.1.3 General implicit solver
We write the implicit updates in (50) and (56) in a generic steady state form
\[\psi_{k}=d_{k}G_{k}+d_{k}\gamma\Delta t(\tilde{\nu}_{kk}\mathcal{K}_{kk}+\tilde{ \nu}_{kj}\mathcal{K}_{kj}). \tag{58}\]
The functions \(\mathcal{K}_{kk}\) and \(\mathcal{K}_{kj}\) are the unique target functions associated to \(\psi_{k}\),
\[d_{k}=\frac{1}{1+\gamma\Delta t(\tilde{\nu}_{kk}+\tilde{\nu}_{kj})}, \tag{59}\]
and \(G_{k}\) is a known function. We want to express \(\mathcal{K}_{kk}\) and \(\mathcal{K}_{kj}\) as functions of \(G_{k}\) and \(G_{j}\) so that (58) is an explicit update formula for \(\psi_{k}\). In Section 3, the existence and uniqueness of \(\mathcal{K}_{kk}\) and \(\mathcal{K}_{kj}\) are proven by algebraic considerations. In order to determine their values we follow a different approach inspired by section 4. Applying the conservation properties (4) and (5) to (58) leads to
\[\begin{split}&\int\tilde{\nu}_{11}\mathcal{K}_{11}\,\mathbf{p}_{1} dp+\int\tilde{\nu}_{22}\mathcal{K}_{22}\,\mathbf{p}_{2}dp+\int\tilde{\nu}_{12} \mathcal{K}_{12}\,\mathbf{p}_{1}dp+\int\tilde{\nu}_{21}\mathcal{K}_{21}\, \mathbf{p}_{2}dp\\ &\stackrel{{\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq
for \(k=1,2\); and
\[\mu=\begin{pmatrix}\mu_{12}^{0}\\ \mu_{21}^{1}\\ \mu^{2}\end{pmatrix}=\int\left[\begin{pmatrix}\begin{array}{c}1\\ 0\\ p\\ \frac{|p|^{2}}{2m_{1}}\end{array}\right]d_{1}\tilde{\nu}_{12}G_{1}+\begin{pmatrix} 0\\ 1\\ p\\ \frac{|p|^{2}}{2m_{2}}\end{array}\right]d_{2}\tilde{\nu}_{21}G_{2}\Bigg{]}\,dp. \tag{65}\]
The minimization problem can be decoupled as follows:
**Proposition 5.1**.: _The components of the minimizer of (62) can be found by minimizing the following three convex potential functions independently:_
\[\varphi_{k}(\alpha_{k}) =\int d_{k}\tilde{\nu}_{kk}\,w(\mathcal{K}_{kk})dp+\mu_{k}\cdot \alpha_{k}\quad\text{for}\quad k=1,2\quad\text{and} \tag{66}\] \[\varphi(\alpha) =\int\left[d_{1}\tilde{\nu}_{12}w(\mathcal{K}_{12})+d_{2}\tilde{ \nu}_{21}w(\mathcal{K}_{21})\right]dp+\mu\cdot\alpha \tag{67}\]
_and the minimum of (62) is the sum of their minima._
Proof.: The statement is trivial because \(\varphi_{\text{tot}}(\alpha_{1},\alpha_{2},\alpha)=\varphi_{1}(\alpha_{1})+ \varphi_{2}(\alpha_{2})+\varphi_{1}(\alpha)\).
The minimum of each potential function in (66) and (67) is found using Newton's method for convex optimization. More details are given in Section 5.4.
Actually, we can link these potential functions to dual problems when we reformulate the modelling problem by using Lagrange functionals. For intra-species interactions, the Lagrange functional reads
\[L_{k}(g,\lambda)=\int h_{\tau_{k}}(g)dp-\lambda\cdot\int\mathbf{p}_{k}(g-f_{k} )dp \tag{68}\]
using \(h_{\tau_{k}}(g)\) given by (11). The first integral in (68) is the entropy functional; the other integrals describe the conservation properties as constraints. The Lagrange multipliers \(\lambda\) solve the dual problem
\[\alpha_{k}=\operatorname*{argmin}_{\lambda\in\Lambda_{k}}\int w(\mathcal{K}_ {kk}(\lambda))dp+\lambda\cdot\int\mathbf{p}_{k}f_{k}dp \tag{69}\]
where \(\Lambda_{k}=\{\lambda\in\mathbb{R}^{5}\,|\,\int\mathcal{K}_{kk}(\lambda)(1+| p|^{2})dp<\infty\}\). Analogously, we can formulate the dual problem for inter-species interactions:
\[\begin{split}(\alpha_{12},\alpha_{21})=\operatorname*{argmin}_{( \lambda_{12},\lambda_{21})\in\Lambda}\Bigg{\{}\int& w(\mathcal{K}_{12}(\lambda))+w(\mathcal{K}_{21}(\lambda))dp+ \lambda_{12}^{0}\int f_{1}dp+\lambda_{21}^{0}\int f_{2}dp\\ &+\lambda^{1}\cdot\int p(f_{1}+f_{2})dp+\lambda^{2}\int|p|^{2} \left(\frac{1}{2m_{1}}f_{1}+\frac{1}{2m_{2}}f_{2}\right)dp\Bigg{\}}\end{split} \tag{70}\]
for \(\alpha_{kj}=(\alpha_{kj}^{0},\alpha^{1},\alpha^{2})\) and where \(\Lambda=\{(\lambda_{12},\lambda_{21})\in\mathbb{R}^{5}\,|\,\int\mathcal{K}_{kj }(\lambda_{kj})(1+|p|^{2})dp<\infty\text{ for }k,j=1,2;k\neq j\}\). We recognize the close relationship of (66) with (69), respective of (67) with (70). The dual problems have unique solutions according to Section 4. This is inherited to the potential functions because \(d_{k}\tilde{\nu}_{kj}\) is independent of \(p\).
### Space discretization
We assume a slab geometry, i.e. \(\partial_{x^{2}}f_{k}=\partial_{x^{3}}f_{k}=0\). So we reduce the physical space dimension to one dimension and set \(x:=x^{1}\) while the momentum domain remains three dimensional (\(p=(p^{1},p^{2},p^{3})\)). We divide the spatial domain \([x_{\min},x_{\max}]\) into uniform cells \(I_{i}=[x_{i}-\frac{\Delta x}{2},x_{i}+\frac{\Delta x}{2}]\) for \(i\in\{0,\ldots,I\}\).
We employ a second-order finite volume framework using approximate cell-averaged quantities
\[f_{k,i}^{\ell}\approx\frac{1}{\Delta x}\int_{I_{i}}f_{k}(x,p,t^{ \ell})dx. \tag{71}\]
The relaxation operators are approximated to second order by
\[\mathcal{R}_{k,i}^{\ell}=\mathcal{R}_{k}(f_{k,i}^{\ell},f_{j,i}^ {\ell})\approx\frac{1}{\Delta x}\int_{I_{i}}\mathcal{R}\left(\,f_{k}(x,p,t^{ \ell}),f_{j}(x,p,t^{\ell})\,\right)dx. \tag{72}\]
Whereas the transport operator \(\mathcal{T}_{k}\) is discretized with numerical fluxes \(\mathscr{F}_{i+\frac{1}{2}}\) by
\[\mathcal{T}_{k}\left(g\right)\approx\mathcal{T}_{i;k}(g)=\frac{ 1}{\Delta x}\left(\mathscr{F}_{k+\frac{1}{2}}(g)-\mathscr{F}_{i-\frac{1}{2}}( g)\right) \tag{73}\]
for any grid function \(g=\{g_{i}\}\). We follow [33] and use
\[\mathscr{F}_{i+\frac{1}{2}}(g)=\frac{p^{1}}{2m}\left(g_{i+1}+g_{ i}\right)-\frac{|p^{1}|}{2m}\left(g_{i+1}-g_{i}-\phi_{i+\frac{1}{2}}(g)\right) \tag{74}\]
where \(\phi_{i+\frac{1}{2}}\) is a flux limiter. The choice \(\phi_{i+\frac{1}{2}}=0\) leads to a first-order approximation, and a second-order method is provided by
\[\phi_{i+\frac{1}{2}}(g)=\text{minmod}\left((g_{i}-g_{i-1}),(g_{i +1}-g_{i}),(g_{i+2}-g_{i+1})\right) \tag{75}\]
where
\[\text{minmod}(a,b,c)=\begin{cases}s\min(|a|,|b|,|c|),&\text{sign }(a)=\text{sign}(b)=\text{sign}(c)=:s,\\ 0,&\text{otherwise}.\end{cases} \tag{76}\]
We guarantee positivity during a simple forward Euler update of (52) by enforcing the CFL condition
\[\Delta t<\alpha\frac{m\Delta x}{\max|p^{1}|} \tag{77}\]
with \(\alpha=1\) for the first-order flux and \(\alpha=\frac{2}{3}\) for the second-order flux. (See Proposition 5.2.)
### Properties of the semi-discrete scheme
In this section, we review the positivity preservation, conservation properties, and the entropy behavior of the semi-discrete scheme.
#### 5.3.1 Positivity of distribution functions
The first-order time stepping scheme in Section 5.1.1 preserves positivity for both first- and second-order numerical fluxes in space; see Proposition 5.2. We discuss the positivity for the second-order scheme 5.1.2 in Proposition 5.3, and give a sufficient criterion for the space homogeneous case. Additionally, we show that the upper bound for distribution functions of fermions is preserved by our scheme; see Proposition 5.4.
**Proposition 5.2**.: _The first-order time discretization in Section 5.1.1 together with the space discretization described in Section 5.2 is positivity preserving, provided that_
\[\Delta t\leq\beta\frac{m_{k}\Delta x}{\max|p^{1}|}, \tag{78}\]
_with \(\beta=1\) and \(\beta=\frac{2}{3}\) for the first-order and second-order fluxes, respectively._
Proof.: The proof can be performed analogously to the proof of proposition 5.1 in [23].
Second-order time-stepping makes it more difficult to guarantee positivity. Nevertheless, we derive some sufficient conditions on \(\Delta t\) in order to preserve positivity in the second-order scheme presented in Section 5.1.2.
**Proposition 5.3**.: _For the space homogeneous case, the second-order IMEX scheme presented in Section 5.1.2 is positivity preserving provided that_
\[\Delta t\leq\frac{1}{(1-2\gamma)(\tilde{\nu}_{kk}+\tilde{\nu}_{kj})} \tag{79}\]
_for \(k,j=1,2\)._
Proof.: The proof can be performed analogously to the proof of proposition 5.2 in [23].
For large collision frequencies \(\nu_{kj}\), the time step condition (79) can be restrictive. So one might be interested in enforcing the milder (but still sufficient) local condition
\[\Delta t\leq\frac{f_{k}^{\ell}}{(1-\gamma)\left[(\tilde{\nu}_{kk}+\tilde{\nu }_{kj})f_{k}^{(1)}-(\tilde{\nu}_{kk}A_{kk}^{(1)}+\tilde{\nu}_{kj}A_{kj}^{(1)} )\right]}. \tag{80}\]
Large collision frequencies push the numerical kinetic distribution to the corresponding target function. Hence, the denominator in (80) becomes large, and the condition is not restrictive.
A distribution function of a fermion has the additional upper bound \(f<1\). Our scheme preserves this property which is shown in the following propositions.
**Proposition 5.4**.: _If \(f_{k}\) represents the distribution function of a fermion with \(f_{k}^{\ell}<1\), the time discretization in Section 5.1.1 together with the space discretization described in Section 5.2 leads to \(f_{k}^{\ell+1}<1\)._
Proof.: Let \(f_{k}^{\ell}<1\). The local equilibrium of a fermion is a Fermi-Dirac distribution function \(\mathcal{F}\) for which \(0<\mathcal{F}<1\) by definition. Hence, for the relaxation step it holds
\[f_{k}^{*}=d_{k}f_{k}^{\ell}+d_{k}\Delta t\left(\tilde{\nu}_{kk} \mathcal{F}_{kk}^{*}+\tilde{\nu}_{kj}\mathcal{F}_{kj}^{*}\right)<d_{k}+d_{k} \Delta t(\tilde{\nu}_{kk}+\tilde{\nu}_{kj})=1. \tag{81}\]
Here, we used the definition of \(d_{k}\) given by (55). For the transport step (52), the first-order fluxes lead to
\[f_{k,i}^{\ell+1}=\left(1-\frac{\Delta t}{m_{k}\Delta x}|p^{1}|\right)f_{k,i}^{*}+ \frac{\Delta t}{m_{k}\Delta x}|p^{1}|f_{k,i-\text{sign}(p^{1})}^{*}\overset{( \ref{eq:1})}{\leq}\left(1-\frac{\Delta t}{m_{k}\Delta x}|p^{1}|\right)+\frac{ \Delta t}{m_{k}\Delta x}|p^{1}|=1.\]
For the second-order fluxes, define \(\sigma:=\text{sign}(f_{k,i}^{*}-f_{k,i-1}^{*})\). We conclude that
\[\phi_{i+\frac{1}{2}}(f_{k}^{*}) \leq\begin{cases}0&\text{if}\quad\sigma=-1\\ f_{k,i+1}^{*}-f_{k,i}^{*}&\text{if}\quad\sigma=+1\end{cases},\] \[-\phi_{i-\frac{1}{2}}(f_{k}^{*}) \leq\begin{cases}f_{k,i-1}^{*}-f_{k,i}^{*}&\text{if}\quad\sigma=- 1\\ 0&\text{if}\quad\sigma=+1\end{cases}.\]
It follows that
\[f_{k,i}^{\ell+1} \overset{(\ref{eq:1})}{=}\left(1-\frac{\Delta t}{m_{k}\Delta x} |p^{1}|\right)f_{k,i}^{*}+\frac{\Delta t}{m_{k}\Delta x}|p^{1}|f_{k,i-\text{ sign}(p^{1})}^{*}+\frac{\Delta t}{m_{k}\Delta x}\frac{|p^{1}|}{2}(\phi_{i+ \frac{1}{2}}(f_{k}^{*})-\phi_{i-\frac{1}{2}}(f_{k}^{*}))\] \[\leq\left(1-\frac{\Delta t}{m_{k}\Delta x}|p^{1}|\right)f_{k,i}^{ *}+\frac{\Delta t}{m_{k}\Delta x}|p^{1}|f_{k,i-\text{sign}(p^{1})}^{*}+\frac{ \Delta t}{m_{k}\Delta x}\frac{|p^{1}|}{2}\begin{cases}(f_{k,i-1}^{*}-f_{k,i}^{ *})&\text{if}\quad\sigma=-1\\ (f_{k,i+1}^{*}-f_{k,i}^{*})&\text{if}\quad\sigma=+1\end{cases}\] \[=\left(1-\frac{3}{2}\frac{\Delta t}{m_{k}\Delta x}|p^{1}|\right) f_{k,i}^{*}+\frac{\Delta t}{m_{k}\Delta x}|p^{1}|f_{k,i-\text{sign}(p^{1})}^{*}+ \frac{\Delta t}{m_{k}\Delta x}\frac{|p^{1}|}{2}\begin{cases}f_{k,i-1}^{*}& \text{if}\quad\sigma=-1\\ f_{k,i+1}^{*}&\text{if}\quad\sigma=+1\end{cases}\] \[\overset{(\ref{eq:1})}{\leq}1.\]
**Proposition 5.5**.: _If \(f_{k}\) represents the distribution function of a fermion with \(f_{k}^{\ell}<1\), the time discretization in Section 5.1.2 leads to \(f_{k}^{\ell+1}<1\) for the space homogeneous case._
Proof.: Let \(f_{k}^{\ell}<1\). The local equilibrium of a fermion is a Fermi-Dirac distribution function \(\mathcal{F}\) for which \(0<\mathcal{F}<1\) by definition. Hence,
\[f_{k}^{\ell+1} =d_{k}\left[f_{k}^{\ell}+\Delta t(1-\gamma)(\tilde{\nu}_{kk} \mathcal{F}_{kk}^{(1)}+\tilde{\nu}_{kj}\mathcal{F}_{kj}^{(1)}-(\tilde{\nu}_{kk} +\tilde{\nu}_{kj})f_{k}^{(1)})\right]+\gamma\Delta td_{k}(\tilde{\nu}_{kk} \mathcal{F}_{kk}^{(2)}+\tilde{\nu}_{kj}\mathcal{F}_{kj}^{(2)}) \tag{82}\] \[=d_{k}\left[f_{k}^{\ell}(1-2\Delta t(1-\gamma)d_{k})+\Delta t(1- \gamma)d_{k}(\tilde{\nu}_{kk}\mathcal{F}_{kk}^{(1)}+\tilde{\nu}_{kj}\mathcal{F} _{kj}^{(1)})\right]+\gamma\Delta td_{k}(\tilde{\nu}_{kk}\mathcal{F}_{kk}^{(2)}+ \tilde{\nu}_{kj}\mathcal{F}_{kj}^{(2)})\] \[<d_{k}\left[1-2\Delta t(1-\gamma)d_{k}+2\Delta t(1-\gamma)d_{k}+ \gamma\Delta t(\tilde{\nu}_{kk}+\tilde{\nu}_{kj})\right]=1.\]
#### 5.3.2 Conservation of mass, total momentum and total energy
In this section, we concern the conservation of mass, total momentum, and total energy for the semi-discrete scheme. The proofs of the following propositions work analogously as and can be found in the proofs of proposition 5.3 and 5.4 in [23].
**Proposition 5.6**.: _The relaxation step in the first-order splitting scheme presented in Section 5.1.1 satisfies the conservation laws_
\[\int m_{1}f_{1}^{*}dp=\int m_{1}f_{1}^{\ell}dp,\quad\int m_{2}f_{2} ^{*}dp=\int m_{2}f_{2}^{\ell}dp, \tag{83}\] \[\int\left(m_{1}pf_{1}^{*}+m_{2}pf_{2}^{*}\right)dp=\int\left(m_{1 }pf_{1}^{\ell}+m_{2}pf_{2}^{\ell}\right)dp,\] (84) \[\int\left(\frac{|p|^{2}}{2m_{1}}f_{1}^{*}+\frac{|p|^{2}}{2m_{2}}f _{2}^{*}\right)dp=\int\left(\frac{|p|^{2}}{2m_{1}}f_{1}^{\ell}+\frac{|p|^{2}}{ 2m_{2}}f_{2}^{\ell}\right)dp. \tag{85}\]
**Proposition 5.7**.: _For each \(i=1,2\), the transport step in the first-order splitting scheme in Section 5.1.1, combined with the space discretization presented in Section 5.2 satisfies the conservation laws_
\[\sum_{i=0}^{I}\int\mathbf{p}_{k}f_{k,i}^{\ell+1}dp\Delta x=\sum_{i=0}^{I}\int \mathbf{p}_{k}f_{k,i}^{*}dp\Delta x \tag{86}\]
_for periodic or zero boundary conditions._
Since the second-order time-stepping scheme in Section 5.1.2 can be broken into relaxation and transport parts, each of which preserves the conservation of mass, total momentum, and total energy, we can state the following:
**Corollary 5.3.1**.: _For periodic or zero boundary conditions, any combination of temporal and space discretization presented to Sections 5.1 and 5.2, respectively, conserves mass, total momentum and total energy._
#### 5.3.3 Entropy inequality
We study the entropy behavior for the first-order scheme in Section 5.1.1. Both the relaxation and the transport step dissipate entropy; see Propositions 5.8 and 5.10. Moreover, the minimal entropy is reached for the relaxation step if the distribution functions coincide with the corresponding target functions; see Proposition 5.9.
**Proposition 5.8**.: _Let \(h_{\tau}\) given by (11). The relaxation step in the first-order splitting scheme in Section 5.1.1 fulfills the discrete entropy inequality_
\[\int h_{\tau}(f_{1}^{*})+h_{\tau^{\prime}}(f_{2}^{*})dp\leq\int h_{\tau}(f_{1} ^{\ell})+h_{\tau^{\prime}}(f_{2}^{\ell})dp. \tag{87}\]
Proof.: By convexity
\[h_{\tau_{k}}(f_{k}^{\ell})\geq h_{\tau_{k}}(f_{k}^{*})+h_{\tau_{k}}^{\prime}( f_{k}^{*})(f_{k}^{\ell}-f_{k}^{*}). \tag{88}\]
For \(f\geq 0\) (\(\tau\in\{-1,0\}\)), respective \(0\leq f<1\) (\(\tau=+1\)), the derivative
\[h_{\tau}^{\prime}(f)=\log\frac{1}{1-\tau f} \tag{89}\]
is monotonically increasing such that
\[(h_{\tau}^{\prime}(x)-h_{\tau}^{\prime}(y))(y-x)\leq 0 \tag{90}\]
for all \(x,y\geq 0\) (\(\tau\in\{-1,0\}\)) and \(0\leq x,y<1\) (\(\tau=+1\)), respectively. Moreover, since
\[h_{\tau_{k}}^{\prime}(\mathcal{K}_{kk}^{*})=\alpha_{k}\cdot\mathbf{p}_{k}, \tag{91}\]
it holds
\[\int h^{\prime}_{\tau_{k}}(\mathcal{K}^{*}_{kk})\tilde{\nu}_{kk}(\mathcal{K}^{*}_{ kk}-f^{*}_{k})dp=\int\alpha_{k}\cdot\mathbf{p}_{k}\,\tilde{\nu}_{kk}(\mathcal{K}^{*}_{ kk}-f^{*}_{k})dp=0 \tag{92}\]
which vanishes as the conservation properties are satisfied at the semi-discrete level as well by construction of the scheme. Analogously for the inter-species terms,
\[\begin{split}\int& h^{\prime}_{\tau}(\mathcal{K}^{* }_{12})\tilde{\nu}_{12}(\mathcal{K}^{*}_{12}-f^{*}_{1})dp+\int h^{\prime}_{ \tau^{\prime}}(\mathcal{K}^{*}_{21})\tilde{\nu}_{21}(\mathcal{K}^{*}_{21}-f^{ *}_{2})dp\\ &=\alpha^{0}_{12}\int\tilde{\nu}_{12}(\mathcal{K}^{*}_{12}-f^{*}_ {1})dp+\alpha^{0}_{21}\int\tilde{\nu}_{21}(\mathcal{K}^{*}_{21}-f^{*}_{2})dp \\ &\quad+\binom{\alpha^{1}}{\alpha^{2}}\cdot\int(\tilde{\nu}_{12}( \mathcal{K}^{*}_{12}-f^{*}_{1})\,\binom{p}{\frac{|p|^{2}}{2m_{1}}}+\tilde{\nu }_{21}(\mathcal{K}^{*}_{21}-f^{*}_{2})\binom{p}{\frac{|p|^{2}}{2m_{2}}})dp\\ &=0.\end{split} \tag{93}\]
The implicit step (50) is
\[f^{*}_{k}-f^{\ell}_{k}=\Delta t\,\tilde{\nu}_{kk}(\mathcal{K}^{*}_{kk}-f^{*}_ {k})+\Delta t\,\tilde{\nu}_{kj}(\mathcal{K}^{*}_{kj}-f^{*}_{k}). \tag{94}\]
Using (94) and the convexity of \(h_{\tau}\) leads to
\[\begin{split} h_{\tau_{k}}(f^{*}_{k})-h(f^{\ell}_{k})& \leq h^{\prime}(f^{*}_{k})(f^{*}_{k}-f^{\ell}_{k})\\ \stackrel{{(\ref{eq:h_tau_k})}}{{=}}& \Delta t\,h^{\prime}_{\tau_{k}}(f^{*}_{k})\tilde{\nu}_{kk}(\mathcal{K}^{*}_{ kk}-f^{*}_{k})+\Delta t\,h^{\prime}_{\tau}(f^{*}_{k})\tilde{\nu}_{kj}(\mathcal{K}^{*}_{ kj}-f^{*}_{k}).\end{split} \tag{95}\]
Thus after integrating (95) with respect to \(p\) and making use of (92) and (93), we obtain
\[\begin{split}&\int h_{\tau}(f^{*}_{1})dp-\int h_{\tau}(f^{\ell}_{1})dp +\int h_{\tau^{\prime}}(f^{*}_{2})dp-\int h_{\tau^{\prime}}(f^{\ell}_{2})dp\\ &\leq\Delta t\int(h^{\prime}_{\tau}(f^{*}_{1})-h^{\prime}_{\tau} (\mathcal{K}^{*}_{11}))\tilde{\nu}_{11}(\mathcal{K}^{*}_{11}-f^{*}_{1})dp+ \Delta t\int(h^{\prime}_{\tau^{\prime}}(f^{*}_{2})-h^{\prime}_{\tau^{\prime}} (\mathcal{K}^{*}_{22}))\tilde{\nu}_{22}(\mathcal{K}^{*}_{22}-f^{*}_{2})dp\\ &+\Delta t\int(h^{\prime}_{\tau}(f^{*}_{1})-h^{\prime}_{\tau}( \mathcal{K}^{*}_{12}))\tilde{\nu}_{12}(\mathcal{K}^{*}_{12}-f^{*}_{1})dp+ \Delta t\int(h^{\prime}_{\tau^{\prime}}(f^{*}_{2})-h^{\prime}_{\tau^{\prime}} (\mathcal{K}^{*}_{21}))\tilde{\nu}_{21}(\mathcal{K}^{*}_{21}-f^{*}_{2})dp\\ &\leq 0.\end{split} \tag{96}\]
The last inequality comes by (90).
**Proposition 5.9**.: _The inequality in Proposition 5.8 is an equality if and only if \(f^{\ell}_{1}=\mathcal{K}^{\ell}_{12}\) and \(f^{\ell}_{2}=\mathcal{K}^{\ell}_{21}\). In such cases \(f^{*}_{1}=\mathcal{K}^{*}_{12}\) and \(f^{*}_{2}=\mathcal{K}^{*}_{21}\)._
Proof.: The proof works analogously as and can be found in [23].
**Proposition 5.10**.: _Let \(h_{\tau}\) by given by (11).The transport step in the first-order splitting scheme in Section 5.1.1 combined with the first-order spatial discretization in Section 5.2 fulfills the discrete entropy inequality_
\[\sum_{i=0}^{I}\left\{\int h_{\tau}(f^{\ell+1}_{1,i})+h_{\tau^{\prime}}(f^{\ell+1 }_{2,i})dp\right\}\Delta x\leq\sum_{i=0}^{I}\left\{\int h_{\tau_{1}}(f^{*}_{1, i})+h_{\tau^{\prime}}(f^{*}_{2,i})dp\right\}\Delta x \tag{97}\]
_for periodic or zero boundary conditions, provided that_
\[\Delta t\leq\frac{m_{k}\Delta x}{\max|p^{1}|}. \tag{98}\]
Proof.: We can apply the same proof as in [23] because \(h_{\tau}\) is convex.
We combine the two propositions above and obtain the following:
**Corollary 5.3.2**.: _For periodic or zero boundary conditions, the first-order splitting scheme 5.1.1 combined with the first-order numerical fluxes in Section 5.2 fulfills the discrete entropy inequality_
\[\sum_{i=0}^{I}\left\{\int h_{\tau}(f_{1,i}^{\ell+1})+h_{\tau^{ \prime}}(f_{2,i}^{\ell+1})dp\right\}\Delta x\leq\sum_{i=0}^{I}\left\{\int h(f_ {1,i}^{*})+h(f_{2,i}^{*})dp\right\}\Delta x \tag{99}\]
_provided that_
\[\Delta t\leq\frac{m_{i}\Delta x}{\max|p^{1}|}. \tag{100}\]
### Momentum discretization
Eventually, we discretize the momentum variable. We center the discrete momenta \(p_{q}=(p_{q_{1}}^{1},p_{q_{2}}^{2},p_{q_{3}}^{3})^{\top}\), with \(q=(q_{1},q_{2},q_{3})\in\mathbb{N}_{0}^{3}\), around \(u_{\text{mix}}\) with the mixture mean velocity
\[u_{\text{mix}}=\frac{p_{1}+p_{2}}{N_{1}+N_{2}} \tag{101}\]
and restrict them to a finite cube. This means, for each component \(r\in\{1,2,3\}\),
\[p^{r}\in[m_{k}u_{\text{mix}}^{r}-6m_{k}v_{\text{th},k},\,m_{k}u_ {\text{mix}}^{r}+6m_{k}v_{\text{th},k}] \tag{102}\]
where \(v_{\text{th},k}=\sqrt{\frac{T_{\text{mix}}}{m_{k}}}\) is the thermal velocity of species \(i\) and
\[T_{\text{mix}}=\frac{n_{1}T_{1}+n_{2}T_{2}}{n_{1}+n_{2}}+\frac{1 }{3}\frac{N_{1}N_{2}}{N_{1}+N_{2}}\frac{|\frac{P_{1}}{N_{1}}-\frac{P_{2}}{N_{ 2}}|^{2}}{n_{1}+n_{2}} \tag{103}\]
is the mixture temperature. An adequate resolution is ensured by the momentum mesh size \(\Delta p_{k}=0.25m_{k}v_{\text{th},k}\) in each direction, as in [34].
We emphasize the advantage of the multi-species BGK model that it is possible to use different grids for each species/equation. This feature becomes beneficial when the species masses, and hence the thermal speeds, differ significantly.
All momentum integrals are replaced by discrete sums using the trapezoidal rule, i.e.
\[\int(\cdot)dp\approx\sum_{q}\omega_{q}(\cdot)_{q}(\Delta p_{k})^{3} \tag{104}\]
where \(\omega_{q}=\omega_{q_{1}}\omega_{q_{2}}\omega_{q_{3}}\) are the weights and
\[\omega_{q_{p}}=\begin{cases}1&\text{if }\min(q_{p})<q_{p}<\max(q_{p}),\\ \frac{1}{2}&\text{else}.\end{cases} \tag{105}\]
We need to distinguish between discrete and continuous moments, especially when determining the discrete local equilibria \(\mathcal{K}_{kk,q}\) and \(\mathcal{K}_{kj,q}\). Since the minimization of (66) and (67) is solved using a discrete momentum grid and discrete moments \(\bar{\mu}_{k},\bar{\mu}\) as input, the parameters \(\alpha_{kk}\) and \(\alpha_{kj}\) are determined such that \(\mathcal{K}_{kk,q}\) and \(\mathcal{K}_{kj,q}\) have the desired discrete moments. Thus, the conservation and entropy properties are fulfilled at the discrete level. (A similar approach for the standard single-species BGK equation is given in [34].)
**Theorem 5.4.1**.: _Propositions 5.2, 5.3, and 5.8-5.10 all hold true after replacing continuous integrals by their respective quadratures. Additionally, the scheme in Section 5.1.3 satisfies the following conservation properties for \(\ell\geq 0\)_
\[\sum_{i,q}\omega_{q}\left(f_{1,iq}^{\ell}\mathbf{p}_{1,q}(\Delta p _{1})^{3}+f_{2,iq}^{\ell}\mathbf{p}_{2,q}(\Delta p_{2})^{3}\right)\Delta x\sum _{i,q}\omega_{q}\left(f_{1,iq}^{0}\mathbf{p}_{1,q}(\Delta p_{1})^{3}+f_{2,iq}^ {0}\mathbf{p}_{2,q}(\Delta p_{2})^{3}\right)\Delta x \tag{106}\]
_with \(\mathbf{p}_{k,q}=(1,p_{q},\frac{|p_{q}|^{2}}{2m_{k}})^{\top}\) and \(f_{k,iq}^{\ell}\approx f_{k,i}^{\ell}(p_{q})\)._
Optimization algorithmThe minimization of (66) and (67) is solved by Newton's method which requires the evaluation of the gradients
\[\nabla_{\alpha_{k}}\varphi_{k} \approx-\sum_{q}\omega_{q}d_{k,q}\tilde{\nu}_{kk,q}\,\mathcal{K}_ {kk,q}\,\mathbf{p}_{k,q}(\Delta p_{k})^{3}+\bar{\mu}_{k} \tag{107}\] \[\nabla_{\alpha}\varphi \approx-\sum_{q}\omega_{q}d_{1,q}\nu_{12,q}\,\mathcal{K}_{12,q} \,\mathbf{p}_{12,q}(\Delta p_{1})^{3}-\sum_{q}\omega_{q}d_{2,q}\tilde{\nu}_{2 1,q}\,\mathcal{K}_{21,q}\,\mathbf{p}_{21,q}(\Delta p_{2})^{3}+\bar{\mu}, \tag{108}\]
and the Hessians
\[\nabla_{\alpha_{k}}^{2}\varphi_{k} \approx\sum_{q}\omega_{q}d_{k,q}\tilde{\nu}_{kk,q}\,\zeta( \mathcal{K}_{kk,q},\tau_{k})\,\mathbf{p}_{k,q}\otimes\mathbf{p}_{k}(\Delta p _{k})^{3} \tag{109}\] \[\nabla_{\alpha}^{2}\varphi \approx\sum_{q}\omega_{q}d_{1,q}\tilde{\nu}_{12,q}\,\zeta( \mathcal{K}_{12,q},\tau_{1})\,\mathbf{p}_{12}\otimes\mathbf{p}_{12,q}(\Delta p _{1})^{3}+\sum_{q}\omega_{q}d_{2,q}\tilde{\nu}_{21,q}\,\zeta(\mathcal{K}_{21,q },\tau_{2})\,\mathbf{p}_{21}\otimes\mathbf{p}_{21,q}(\Delta p_{2})^{3} \tag{110}\]
where \(\mathbf{p}_{12,q}=(1,0,p_{1,q},\frac{|p_{1,q}|^{2}}{2m_{1}})^{\top}\), \(\mathbf{p}_{21,q}=(0,\mathbf{p}_{2,q})^{\top}\) and
\[\zeta(g,\tau)=\begin{cases}g&\text{for }\tau=0,\\ g^{2}e^{-\alpha\cdot\mathbf{p}_{i}}&\text{for }\tau=\pm 1.\end{cases} \tag{111}\]
The input data in (64) is computed in a straight-forward way:
\[\bar{\mu}_{k}\approx\sum_{q}\omega_{q}d_{k,q}\tilde{\nu}_{kk,q}\,G_{k,q}\, \mathbf{p}_{k,q}(\Delta p_{k})^{3}. \tag{112}\]
Analogously for the input data \(\bar{\mu}\) in (65).
## 6 Numerical results
In this section, we present several numerical tests. We illustrate the properties of our model and demonstrate the properties of our scheme.
### Relaxation in a homogeneous setting
#### 6.1.1 Decay rates and illustration of the schemes' properties
We validate our numerical scheme for quantum particles and verify the decay rates for the mean velocities and kinetic temperatures which are given analytically in Section 3.2.
Initially, we set the distribution functions to Maxwellians
\[f_{k}=\mathcal{M}[n_{k},U_{k},T_{k},m_{k}]=\frac{n_{k}}{(2\pi T_{k}m_{k})^{3/2} }\exp\left(-\frac{|p-m_{k}U_{k}|^{2}}{2T_{k}m_{k}}\right) \tag{113}\]
with
\[m_{1}=1.0,\quad n_{1}=1.0,\quad U_{1}=(0.5,0,0)^{\top},\quad T_{1 }=1.0,\] \[m_{2}=1.5,\quad n_{2}=1.2,\quad U_{2}=(0.1,0,0)^{\top},\quad T_{2 }=0.5.\]
These initial data are chosen to only illustrate the basic properties of the model and scheme, respectively, but we do not incorporate further physical details (e.g. for a specific quantum regime). The collision frequencies are set to \(\tilde{\nu}_{kj}=1\).
For the simulation, we use a momentum grid with \(48^{3}\) nodes and the first-order splitting scheme from Section 5.1.1 with the time step \(\Delta t=0.01\).
We study any combination of classical particles, fermions and bosons. Exemplary for the interactions of fermions with fermions, we illustrate the evolution of the entropy and the entropy dissipation in Figure 1. In Figure 2, we demonstrate the conservation properties where the numerical oscillations in mass, total momentum and total energy are only of the order \(10^{-14}\).
In Figure 3, we verify the behavior of the mean velocities converging exponentially fast to a common value. The numerical decay rate and the analytical one (19) coincide very well. We only display the rate for the interactions of fermions with fermions because the decay rate is independent of the type of the species.
In Figure 4, we consider the behavior of the temperatures where we distinguish between the kinetic temperatures
\[T_{k}=\frac{2}{3}\left(\frac{E_{k}}{N_{k}}-\frac{1}{2}\frac{|P_{k}|^{2}}{m_{k} N_{k}^{2}}\right) \tag{114}\]
and the physical temperatures \(\vartheta_{k}\) of the fluid. The latter ones can be calculated via a nonlinear system of equations with the parameters density, fugacity and kinetic temperature [26]. In the first column, we observe that the kinetic temperatures do not converge to a common value whenever a quantum particle is involved. This is also visible in the second column. The numerical and analytical decay rates for the kinetic temperatures coincide very well, and the difference converges to a constant value for quantum particles. Such behavior of the kinetic temperatures for quantum particles comes by an additional term for the decay rates (21) which vanishes for classical-classical interactions, see Remark 5. Additionally, we compare the results to the physical temperatures \(\vartheta_{k}\). Even though the kinetic temperatures behave differently for quantum particles, the physical temperatures converge to a common value in all cases as predicted by the theory.
Figure 1: Entropy and entropy dissipation for the test case in Section 6.1.1, exemplary for fermion-fermion interactions. The entropy decays monotonically.
Figure 2: Illustration of the conservation properties for the test case in Section 6.1.1, exemplary for fermion-fermion interactions. The mass densities of each species (\(\rho_{k}=m_{k}n_{k}\)), the total momentum (\(M\)) and total energy (\(E\)) have small oscillations of the order of \(10^{-14}\).
Figure 3: Mean velocities for the test case in Section 6.1.1, exemplary for fermion-fermion interactions. The mean velocities converge exponentially fast to a common value, and the numerical decay rate coincides very well with the analytical one.
fermion-fermion:
boson-fermion:
boson-boson:
boson-boson:
boson-classic:
Figure 4: Evolution of the temperatures for the test case in Section 6.1.1. First column: kinetic temperatures \(T_{k}\); whenever a quantum particle is involved, the kinetic temperatures do not converge to a common value. Second column: decay rates for kinetic temperatures in logarithmic scale — numerical and analytical values coincide very well. Additionally, the difference between the physical temperatures \(\vartheta_{k}\) is displayed which decays exponentially fast, whereas the kinetic temperatures \(T_{k}\) behave differently for quantum particles.
#### 6.1.2 Sulfur-Flourine-electrons test case
We run a space homogeneous, 3-species test case inspired by [24]. In the following, the index \(S\) refers to sulfur ions, the index \(F\) refers to fluorine ions, and the index \(e\) refers to electrons. For convenience, we clarify in the Appendix how the model and the numerical scheme can be extended straight-forwardly to more than two species.
We incorporate collision frequencies \(\nu_{kj}=0.00753\frac{1}{\mathrm{fs}}\) which are approximately of the same order as those used in [24]. The masses of the species are
\[m_{S}=32.07\mathrm{u}-11m_{e},\quad m_{F}=19\mathrm{u}-7m_{e},\quad m_{e}=9.11 \cdot 10^{-28}\mathrm{g}\]
with the atomic mass \(\mathrm{u}=1.6605\cdot 10^{-24}\,\mathrm{g}\). The ions are treated like classical particles and initialized by \(f_{k}=\mathcal{M}[n_{k},U_{k},T_{k},m_{k}]\) (\(k=S,F\)) with
\[n_{S} =10^{19}\,\mathrm{cm}^{-3},\quad n_{F}=6\cdot 10^{19}\, \mathrm{cm}^{-3},\] \[U_{S} =U_{F}=0\,\frac{\mathrm{cm}}{\mathrm{s}},\] \[T_{S} =T_{F}=15\,\mathrm{eV},\]
where \(M\) is defined in (113). For the electrons, we compare the behavior when they are treated like classical particles to the behavior when they are treated like fermions. In the former case, we initialize \(f_{e}=\mathcal{M}[n_{e},U_{e},\vartheta_{e},m_{e}]\) with
\[n_{e}=53\cdot 10^{19}\,\mathrm{cm}^{-3},\quad U_{e}=0\,\frac{\mathrm{cm}}{ \mathrm{s}},\quad\vartheta_{e}=100\,\mathrm{eV}.\]
It holds \(T_{e}=\vartheta_{e}\) for classical particles. In the latter case -- electrons being treated as fermions -- we initialize the distribution function by a Fermi-Dirac function, but we keep the same macroscopic quantities, i.e.
\[f_{e}=\left[\frac{(2\pi m_{e}\vartheta_{e})^{3/2}}{\alpha\,n_{e}}e^{\frac{|p |^{2}}{2m_{e}\vartheta_{e}}}+1\right]^{-1} \tag{115}\]
with the scaling factor \(\alpha=1.061711634\) which leads to the desired \(\int f_{e}dp=n_{e}\).
We use momentum grids with \(48^{3}\) nodes for each species, and we use the second-order IMEX RK scheme from Section 5.1.2 with time step \(\Delta t=0.1\) fs.
We illustrate the evolution of the temperatures in Figure 5. For the purely classic test case, the physical and the kinetic temperatures coincide such that the temperature in equilibrium \(T_{\mathrm{eq}}\) can be precomputed from the initial data [23]:
\[T_{\mathrm{eq}}=T_{\mathrm{mix}}(0)\overset{\eqref{eq:103}}{=}\frac{n_{1}T_ {1}(0)+n_{2}T_{2}(0)+n_{3}T_{3}(0)}{n_{1}+n_{2}+n_{3}}. \tag{116}\]
In Figure 5, we observe that all species temperatures converge to that value for the classical simulation. Additionally, we display the results when we consider the electrons to be fermions instead. As predicted by the theory, the physical temperatures converge to a common value. However, the physical temperatures generally differ from the kinetic temperatures in the quantum case. As a consequence, the physical temperature in equilibrium does not equal \(T_{\mathrm{eq}}\).
### Sod problem
We run a quantum-kinetic version of the well-known Sod problem [17] in the fluid regime for fermions. As carried out in [14], the limiting equations for the kinetic equations in the fluid regime are the quantum Euler equations.
We implement a single-species test case with the multi-species model by assuming \(m_{1}=m_{2}=m\), \(n_{1}=n_{2}=n\), \(U_{1}=U_{2}=U\) and \(T_{1}=T_{2}=T\). We set \(m=1\) and use \(\tilde{\nu}_{kj}=2\cdot 10^{4}\) for approaching the fluid regime. The initial data is given by \(f_{1}=f_{2}=\mathcal{M}[n,u,T,m]\) where \(\mathcal{M}\) is defined in (113) with
\[n=1,\qquad U=0,\qquad T=1, \tag{117}\]
for \(x\leq 0\) and
\[n=0.125,\qquad U=0,\qquad T=0.8 \tag{118}\]
for \(x>0\).
The simulations are run using a velocity grid with \(48^{3}\) points and \(300\) equally spaced cells in \(x\). We use the second-order IMEX Runge-Kutta scheme from Section 5.1.2 combined with the second-order finite volume scheme from Section 5.2.
Numerical results of the macroscopic quantities are given in Figure 6. The fluid limit is recovered fairly well by the density \(n\), mean velocity \(u\) and kinetic temperature \(T\). We see again that the physical temperature \(\vartheta\) deviates from the kinetic temperature.
Figure 5: Evolution of the physical temperatures for the Sulfur-Fluorine-electrons quantum test case in Section 6.1.2. When both the ions and the electrons are treated classically (lines with dots), the physical temperatures (which coincide with the kinetic temperatures (114)) converge to the mixture temperature \(T_{\rm eq}\) defined in (116). When the electrons are treated like fermions instead, the physical temperatures do converge to a common value as predicted by the theory. However, this value differs from \(T_{\rm eq}\).
Figure 6: Numerical solution at \(t=0.055\) of the Sod problem in Section 6.2. We show results for a 2-species kinetic simulation for fermions. The solutions for both species are identical; we show only the species 1 results. For reference, the exact solution for the quantum Euler equations is also provided (dotted gray line). The kinetic solution recovers the fluid limit fairly well.
Appendix
The two-species model can be extended to a system of \(N\)-species which undergo binary interactions. For ease in notation, we illustrate here the 3-species case. Each distribution function \(f_{k}\), \(k=1,\ldots,3\), represents the solution to
\[\partial_{t}f_{k}+\frac{p}{m_{k}}\cdot\nabla_{x}f_{k}=\tilde{\nu}_{k1}(\mathcal{ K}_{k1}-f_{k})+\tilde{\nu}_{k2}(\mathcal{K}_{k2}-f_{k})+\tilde{\nu}_{k3}( \mathcal{K}_{k3}-f_{k}) \tag{119}\]
with \(\tilde{\nu}_{kj}=\nu_{kj}n_{j}\). Since we still consider only binary interactions, the properties in section 2 and 3.1 are still satisfied.
The presented numerical scheme is based on the general implicit solver in Section 5.1.3. Since the transport operators act only on the individual species, we focus on and shortly illustrate the scheme of the relaxation process.
As above, we write the implicit updates of the distribution functions in a generic steady state form
\[f_{k}=d_{k}G_{k}+d_{k}\gamma\Delta t(\tilde{\nu}_{kk}\mathcal{K}_{kk,\tau_{k}} +\tilde{\nu}_{kj}\mathcal{K}_{kj,\tau_{j}}+\tilde{\nu}_{kl}\mathcal{K}_{kl, \tau_{k}}) \tag{120}\]
for \(k,j,l\in\{1,2,3\}\), each of \(k,j,l\) distinct, where \(\mathcal{K}_{kk,\tau_{k}}\), \(\mathcal{K}_{kj,\tau_{j}}\) and \(\mathcal{K}_{kl,\tau_{l}}\) are the unique attractors associated to \(f_{k}\),
\[d_{k}=\frac{1}{1+\gamma\Delta t(\tilde{\nu}_{kk}+\tilde{\nu}_{kj}+\tilde{\nu }_{kl})}, \tag{121}\]
and \(G_{k}\) is a known function. When we can express \(\mathcal{K}_{kk}\), \(\mathcal{K}_{kj}\) and \(\mathcal{K}_{kl}\) as functions of \(G_{k}\), \(G_{j}\) and \(G_{l}\), (120) provides an explicit update formula for \(f_{k}\).
We apply the conservation properties to (120). An analogous calculation as in the 2-species case leads to a set of constraints to determine the attractors from the given data:
\[\begin{split}&\int d_{1}\left(\tilde{\nu}_{11}\mathcal{K}_{11, \tau_{1}}+\tilde{\nu}_{12}\mathcal{K}_{12,\tau_{1}}+\tilde{\nu}_{13}\mathcal{ K}_{13,\tau_{1}}\right)\mathbf{p}_{1}dp\int d_{2}\left(\tilde{\nu}_{21} \mathcal{K}_{21,\tau_{2}}+\tilde{\nu}_{22}\mathcal{K}_{22,\tau_{2}}+\tilde{ \nu}_{23}\mathcal{K}_{23,\tau_{2}}\right)\mathbf{p}_{2}dp\\ +&\int d_{3}\left(\tilde{\nu}_{31}\mathcal{K}_{31, \tau_{3}}+\tilde{\nu}_{32}\mathcal{K}_{32,\tau_{3}}+\tilde{\nu}_{33}\mathcal{ K}_{33,\tau_{3}}\right)\mathbf{p}_{3}dp\\ &=\int d_{1}\left(\tilde{\nu}_{11}+\tilde{\nu}_{12}+\tilde{\nu} _{13}\right)G_{1}\mathbf{p}_{1}dp+\int d_{2}\left(\tilde{\nu}_{21}+\tilde{\nu }_{22}+\tilde{\nu}_{23}\right)G_{2}\mathbf{p}_{2}dp+\int d_{3}\left(\tilde{ \nu}_{31}+\tilde{\nu}_{32}+\tilde{\nu}_{33}\right)G_{3}\mathbf{p}_{3}dp.\end{split} \tag{122}\]
These constraints (122) represent first-order optimality conditions associated to the minimization of the convex function
\[\varphi_{\text{tot}}(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{12},\alpha_{13 },\alpha_{23})=\varphi_{1}(\alpha_{1})+\varphi_{2}(\alpha_{2})+\varphi_{3}( \alpha_{3})+\varphi(\alpha_{12})+\varphi(\alpha_{13})+\varphi(\alpha_{23})\]
with
\[\varphi_{k}(\alpha_{k})=\int d_{k}\tilde{\nu}_{kk}w[\mathcal{K}_{kk,\tau_{k}} ]dp+\mu_{kk}\cdot\alpha_{k}\]
and
\[\varphi(\alpha_{kj})=\int\left(d_{k}\tilde{\nu}_{kj}w[\mathcal{K}_{kj,\tau_{ k}}]+d_{j}\tilde{\nu}_{jk}w[\mathcal{K}_{jk,\tau_{j}}]\right)dp+\mu_{kj} \cdot\alpha_{kj},\]
where
\[w[\mathcal{K}_{kj,\tau_{k}}]=\frac{\log(1-\tau_{k}\mathcal{K}_{kj,\tau_{k}})} {\tau_{k}}=\begin{cases}-\mathcal{K}_{kj,\tau_{k}}&\text{for }\tau_{k}=0,\\ \log(1-\mathcal{K}_{kj,1})&\text{for }\tau_{k}=+1,\\ -\log(1+\mathcal{K}_{kj,-1})&\text{for }\tau_{k}=-1.\end{cases}\]
Moreover, \(\alpha_{k}=(\alpha_{k}^{0},\alpha_{k}^{1},\alpha_{k}^{2})^{\top}\);
\[\mu_{kk}=\begin{pmatrix}\mu_{kk}^{0}\\ \mu_{kk}^{1}\\ \mu_{kk}^{2}\end{pmatrix}=\int d_{k}\tilde{\nu}_{kk}G_{k}\mathbf{p}_{k}dp\]
for \(k=1,2,3\); for \(k\neq j:\alpha_{kj}=(\alpha_{kj}^{0},\alpha_{jk}^{0},\alpha_{kj}^{1},\alpha_{ kj}^{2})^{\top}\); and
\[\mu_{kj}=\begin{pmatrix}\mu_{kj}^{0}\\ \mu_{jk}^{0}\\ \mu_{kj}^{2}\end{pmatrix}=\int\left[\begin{pmatrix}1\\ 0\\ p\\ \frac{|p|^{2}}{2m_{k}}\end{pmatrix}d_{k}\nu_{kj}G_{k}+\begin{pmatrix}0\\ 1\\ p\\ \frac{|p|^{2}}{2m_{j}}\end{pmatrix}d_{j}\nu_{jk}G_{j}\right]dp.\]
## 8 Acknowledgements
Marlies Pirner was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Munster: Dynamics-Geometry-Structure, by the Alexander von Humboldt foundation and the German Science Foundation DFG (grant no. PI 1501/2-1).
|
2309.06135 | Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by
Finding Problematic Prompts | Text-to-image diffusion models, e.g. Stable Diffusion (SD), lately have shown
remarkable ability in high-quality content generation, and become one of the
representatives for the recent wave of transformative AI. Nevertheless, such
advance comes with an intensifying concern about the misuse of this generative
technology, especially for producing copyrighted or NSFW (i.e. not safe for
work) images. Although efforts have been made to filter inappropriate
images/prompts or remove undesirable concepts/styles via model fine-tuning, the
reliability of these safety mechanisms against diversified problematic prompts
remains largely unexplored. In this work, we propose Prompting4Debugging (P4D)
as a debugging and red-teaming tool that automatically finds problematic
prompts for diffusion models to test the reliability of a deployed safety
mechanism. We demonstrate the efficacy of our P4D tool in uncovering new
vulnerabilities of SD models with safety mechanisms. Particularly, our result
shows that around half of prompts in existing safe prompting benchmarks which
were originally considered "safe" can actually be manipulated to bypass many
deployed safety mechanisms, including concept removal, negative prompt, and
safety guidance. Our findings suggest that, without comprehensive testing, the
evaluations on limited safe prompting benchmarks can lead to a false sense of
safety for text-to-image models. | Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, Pin-Yu Chen, Wei-Chen Chiu | 2023-09-12T11:19:36Z | http://arxiv.org/abs/2309.06135v2 | # Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models
###### Abstract
Text-to-image diffusion models, e.g. Stable Diffusion (SD), lately have shown remarkable ability in high-quality content generation, and become one of the representatives for the recent wave of transformative AI. Nevertheless, such advance comes with an intensifying concern about the misuse of this generative technology, especially for producing copyrighted or NSFW (i.e. not safe for work) images. Although efforts have been made to filter inappropriate images/prompts or remove undesirable concepts/styles via model fine-tuning, the reliability of these safety mechanisms against diversified problematic prompts remains largely unexplored. In this work, we propose **Prompting4Debugging (P4D)** as a debugging and red-teanning tool that automatically finds problematic prompts for diffusion models to test the reliability of a deployed safety mechanism. We demonstrate the efficacy of our P4D tool in uncovering new vulnerabilities of SD models with safety mechanisms. Particularly, our result shows that around half of prompts in existing safe prompting benchmarks which were originally considered "safe" can actually be manipulated to bypass many deployed safety mechanisms, including concept removal, negative prompt, and safety guidance. Our findings suggest that, without comprehensive testing, the evaluations on limited safe prompting benchmarks can lead to a false sense of safety for text-to-image models.
**WARNING: This paper contains model outputs that may be offensive or upsetting in nature.**
1National Yang Ming Chiao Tung University
2IBM Research
{joycenerd.cs09, nax1016.cs10, chingchun}@nycu.edu.tw, [email protected], [email protected]
## Introduction
In recent years, generative models have been making remarkable advancements across multiple domains, such as text, images, and even code generation, blurring the distinction between the works created by AI systems and those crafted by human experts. One prominent area of focus upon generative AI is text-to-image [11, 1, 12, 13], where most of the state-of-the-art T2I methods are built upon the diffusion models, in which these T2I diffusion models enable the transformation of textual information into images. They not only bridge the gap between natural language processing and visual content creation, but also enhance the interaction and understanding across these two modalities. One of the main factors leading to the exceptional performance of T2I diffusion models nowadays stems from the vast amount of training data available on the internet, allowing the models to generate a wide range of content, including natural animals, sketches, cartoon images, and even artistic images. However, such large-scale training data collected from the Internet can be a double-edged sword, as it can lead the models to unconsciously generate inappropriate content such as copyright infringement and NSFW materials.
To this end, there are several recent research works proposing the diffusion models equipped with safety mechanisms, e.g. Stable Diffusion with negative prompts [12], SLD [1], and ESD [1], which either restrict the text embedding space during inference or finetune the model for attempting to prevent the model from generating copyrighted or inappropriate images. Although these safety mechanisms are shown to be partially effective according to their evaluation schemes, there are already studies that demonstrate their potential flaws. For example, [1] has found
Figure 1: Given an existing text-to-image (T2I) diffusion model \(\mathcal{G}^{\prime}\) with safety mechanism which ideally can remove the target concept (e.g. nudity) from the generated image (while the same input prompt would lead to inappropriate image content for the typical T2I diffusion model \(\mathcal{G}\)), our proposed Prompting4Debugging (P4D) red-teans \(\mathcal{G}^{\prime}\) to automatically uncover the safety-evasive prompts.
that the state-of-the-art Stable Diffusion model equipped with NSFW safety filter [14] will still generate sexual content if users give the text prompt _"A photo of a billboard above a street showing a naked man in an explicit position"_. However, these problematic prompts are discovered manually and thus are hard to scale. Here hence comes an urgent need for developing an automated and scalable red-teaming tool for developers to systematically inspect the model safety and reliability before deployment.
On the other hand, as the rapid increase of size (e.g. even growing up to billions of parameters) for recent T2I diffusion models [11, 12, 13, 14], model finetuning becomes extremely expensive and infeasible upon limited computation resources while building the red-teaming tool. As a result, in this work, we utilize prompt engineering [14, 15, 16, 17, 18, 19, 20, 21, 22] as our basis for developing the red-teaming technique, which achieves comparable performance to traditional approaches of finetuning heavy models but with the advantage of learning only a minimal number of prompts.
Overall, we propose a **Prompting4Debugging (P4D)** framework to help debugging/red-teaming the T2I diffusion models equipped with safety mechanisms via utilizing prompt engineering techniques as well as leveraging an unconstrained diffusion model to automatically and efficiently find the problematic prompts that would lead to inappropriate content. Moreover, the problematic prompts discovered by our P4D testing tool can be used for understanding model misbehavior and as important references for follow-up works to construct stronger safe mechanisms. The illustration of our proposed P4D is provided in Figure 1. Our main contributions of this work are summarized as follows.
* Our proposed Prompting4Debugging (P4D) serves as a debugging tool to red-team T2I diffusion models with safety mechanisms for finding problematic prompts resulting in safety-evasive outputs.
* Our extensive experiments based on the Inappropriate Image Prompts (I2P) dataset reveal the fact that around half of the prompts which originally can be tackled by the existing safety mechanisms are actually manipulable by our P4D to become problematic ones.
* We also observe that some of the existing safety mechanisms in T2I diffusion models could lead to a false sense of safety by "_information obfuscation_" for red-teaming: when turning off the safety mechanism during the debugging process, it even becomes easier for our P4D to find the problematic prompts which are still effective to pass the safety mechanism and produce inappropriate image content during the inference time.
## Related work
**AI red-teaming tools.** Red-teaming is an active cybersecurity assessment method that exhaustively searches for vulnerabilities and weaknesses in information security, where the issues found by red-teaming can further help companies or organizations improve their defense mechanisms and strengthen overall cybersecurity protection. Recently, with the popularity and increasing demand for generative AI, red teaming is also being applied to AI models (especially language models [23, 24, 25]) to enhance model security and stability. [23] proposes to prompt language models with a variety of methods, such as few-shot generation and reinforcement learning, to generate test cases that are able to find vulnerabilities in models. [15] fools the model for detecting machine-generated text by revising output, e.g. replacing synonyms words or altering writing style in generated sentences. On the other hand, [10] constructs a pool of user inputs and employs Bayesian optimization to iteratively modify diverse positive test cases which eventually lead to model failures. However, these methods are only applicable to red-team language models, while our P4D focuses on the text-to-image models, which is a field that has been rarely explored in AI red-teaming.
**Prompt engineering.** Prompt engineering originates from the field of natural language processing and aims to adapt a pretrained language model to various downstream tasks by modifying input text with prompts. Prompt engineering can be categorized into two groups: _hard prompts_ and _soft prompts_. Hard prompts, also known as discrete tokens, usually consist of interpretable words that are hand-crafted by users. For instance, [14] first demonstrates the remarkable generalizability of pretrained language models via adopting manually crafted hard prompts to a wide range of downstream tasks in few-shot learning. Then [13, 15, 16] reformulate input texts into specific cloze-style phrases, thus maintaining the form of hard prompts, to prompt the language models. On the other hand, soft prompts consist of appended continuous-valued text vectors or embeddings, providing a larger search space compared to hard prompts. For instance, prompt-tuning [10] and prefix-tuning [15] automate the soft prompts in continuous space. However, soft prompts are often uninterpretable or non-transferable (i.e. cannot be shared by different language models). As a consequence, some discrete optimization methods are proposed to strike a balance between hard prompts and soft prompts, e.g. AutoPrompt [15], FluentPrompt [15], and PEZ [16] that learns hard prompts through continuous gradient-based optimization. Additionally, PEZ extends its capabilities to discover prompts that can be matched with given images, achieved by measuring the CLIP Score [10] using the same optimization method. These studies demonstrate the potential of prompt engineering across various tasks and domains, motivating us to integrate prompt engineering into the field of red-teaming T2I diffusion models.
**Diffusion models with safety mechanisms.** In response to the emerging issues of generating inappropriate images from diffusion models, several works have devoted to address the concern. These works fall into two categories: guidance-based and finetuning-based methods. For guidance-based
methods like Stable Diffusion with negative prompts [14] and SLD [15], they block the text embedding of certain words or concepts (e.g. nudity, late, or violence), in order to prevent the generation of the corresponding image content during the inference process. Rather than using guidance-based techniques, ESD [1] takes a different approach by finetuning the partial model weights (e.g. the U-Net to perform denoising in Stable Diffusion) to remove unwanted contents from the image output. Nonetheless, certain corner cases still bypass the safety mechanisms of these diffusion models [14]. To enable profound testing, our P4D serves as a debugging tool, allowing developers to identify problematic prompts at scale by employing red-teaming strategies on T2I diffusion models. Meanwhile, the models can enhance their robustness by attempting to tackle the more challenging prompts uncovered through our P4D.
## Background
In this section, we first briefly introduce how diffusion models learn to generate unconditional images. Moreover, as all the state-of-the-art T2I diffusion models used in this work are based on latent diffusion models, we also describe how latent diffusion models improve the efficiency of diffusion processes and extend to support conditional generation.
**Diffusion Models**[14, 15] are powerful generative models that learn to simulate the data generation process by progressively denoising the (intermediate) noisy states of data, where such denoising steps stand for the backward process to the opposite forward one composed of diffusion steps which gradually add random noise to data. Given an input image \(x\), Denoising Diffusion Probabilistic Models (DDPM) [13] first generates intermediate noisy image \(x_{t}\) at time step \(t\) via the forward diffusion steps, where \(x_{t}\) can be written as a close form depending on \(x\), \(t\), and noise \(\epsilon\) sampled from Gaussian distribution \(\mathcal{N}(0,I)\). Then the diffusion model training is based on the backward process for learning a model parameterized by \(\theta\) to predict \(\epsilon\), where such model takes both \(x_{t}\) and the corresponding time step \(t\) as input. The objective is defined as:
\[\mathcal{L}_{DM}=\mathbb{E}_{x,\epsilon\sim\mathcal{N}(0,1),t}\left[\| \epsilon-\epsilon_{\theta}(x_{t},t)\|_{2}^{2}\right] \tag{1}\]
where \(t\) ranges from \(1\) to the maximum time step \(T\).
**Latent Diffusion Models**[14] proposes to model both forward and backward processes in the latent space, for alleviating the efficiency issue of DDPM which stems from having the model operate directly in the pixel space, where the transformation between latent and pixel spaces is based on a variational autoencoder (composed of an encoder \(\mathcal{E}\) and a decoder \(\mathcal{D}\)). Furthermore, they extend DDPM to enable conditional image generation, via incorporating diverse conditions such as text prompts. Specifically, given the latent representation \(z=\mathcal{E}(x)\) of input image \(x\) as well as the intermediate noisy latent vector \(z_{t}\) at time step \(t\) (analogously, depending on \(z\), \(t\), and \(\epsilon\sim\mathcal{N}(0,I)\)), a model parameterized by \(\theta\) is trained to make prediction for the noise \(\epsilon_{\theta}(z_{t},c,t)\) that is conditioned on \(z_{t}\), time step \(t\), and a text condition \(c\). The objective for learning such conditional generation process (based on image-condition training pairs \(\{(x,c)\}\)) is:
\[\mathcal{L}_{LDM}=\mathbb{E}_{\mathcal{E}(x),c,\epsilon\sim\mathcal{N}(0,1),t} \left[\|\epsilon-\epsilon_{\theta}(z_{t},c,t)\|_{2}^{2}\right]. \tag{2}\]
## Methodology
In this paper, we aim to develop a red-teaming tool named Promptg4Debugging (P4D) for Text-to-image (T2I) diffusion models to test the reliability of deployed safety mechanisms. In particular, three models, including Stable Diffusion (SD) with negative prompts [14], SLD [15], and ESD [1], are considered as our targets of study. The overview of our proposed P4D is visualized in Figure 2, in which we detail its designs in the following.
Given an input text prompt \(P\) which is able to lead an unconstrained/standard T2I diffusion model \(\mathcal{G}\) for generating the output image with an inappropriate concept/object \(\mathcal{C}\) (i.e. \(\mathcal{G}\) does not have the safety mechanism, and \(P\) is a problematic prompt), when taking such prompt \(P\) as the input for another T2I diffusion model \(\mathcal{G}^{\prime}\) equipped with the safety mechanism specific for \(\mathcal{C}\), ideally the resultant output image should be free from \(\mathcal{C}\) (i.e. \(\mathcal{G}^{\prime}\) successfully defends the generated image against the problematic prompt \(P\)). Our red-teaming tool P4D now attempts to counteract the safety mechanism of \(\mathcal{G}^{\prime}\) such that the inappropriate concept/object \(\mathcal{C}\) now again appears in the generated image (i.e. the safety mechanism of \(\mathcal{G}^{\prime}\) is bypassed).
Specifically, our red-teaming tool P4D adopts the technique of prompt engineering to circumvent the safety mechanism in \(\mathcal{G}^{\prime}\), where a new or modified prompt \(P^{*}\) is optimized for making \(\mathcal{G}^{\prime}\) conditioned on \(P^{*}\) to produce the inappropriate content as what would be obtained by having \(\mathcal{G}\) conditioned on \(P\). As the state-of-the-art T2I diffusion model, i.e. Stable Diffusion (SD), as well as the choices for the T2I diffusion models with safety mechanism \(\mathcal{G}^{\prime}\) in this work (e.g. SD with negative prompts [14], SLD [15]), and ESD [1]) are all based on the latent diffusion models, the optimization for \(P^{*}\) in our P4D is actually realized in the latent space, following the procedure below (cf. Figure 2):
1. With an unconstrained T2I diffusion model \(\mathcal{G}\) (e.g. Stable Diffusion in our experiments), an original text prompt \(P\) is first used to generate an image \(x\) having the inappropriate concept/object \(\mathcal{C}\). Note that the noise predictor in the backward process of \(\mathcal{G}\) is parameterized by \(\theta\).
2. We then obtain the latent representation \(z=\mathcal{E}(x)\) of \(x\) via the encoder \(\mathcal{E}\) of \(\mathcal{G}\) (noting that \(\mathcal{G}\) is based on latent diffusion models thus has the corresponding variational autoencoder), followed by computing the intermediate noisy latent vector \(z_{t}\) at an arbitrary time step \(t\) according to the diffusion process of \(\mathcal{G}\).
3. Given a T2I diffusion model with safety mechanism \(\mathcal{G}^{\prime}\) in which its noise predictor in the backward process is parameterized by \(\theta^{\prime}\), we now aim to find a prompt \(P^{*}\) such that \(\mathcal{G}^{\prime}\) conditioned on \(P^{*}\) can produce the output \(x^{*}\) similar to \(x\), thereby also having the similar inappropriate concept/object \(\mathcal{C}\). The optimization for \(P^{*}\) happens
on the latent space directly to encourage similarity between the noise predictions \(\epsilon_{\theta}(z_{t},P,t)\) and \(\epsilon_{\theta^{\prime}}(z_{t},P^{*},t)\). The basic idea is that, starting from the same noisy latent vector \(z_{t}\) at an arbitrary time step \(t\), if both the noise predictors of \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) which respectively take \(P\) and \(P^{*}\) as text prompt are able to reach the same noise prediction, then our goal of assuring the similarity between \(x^{*}\) and \(x\) in pixel space ideally can be also achieved.
Notably, the text prompt is typically fed into the noise predictor in the form of embeddings (according to the common practice for our \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\)). To this end, the noise prediction happens in \(\mathcal{G}\) is actually operated as \(\epsilon_{\theta}(z_{t},\mathcal{W}(P),t)\), where \(\mathcal{W}\) is a pre-trained and fixed text encoder (e.g. CLIP) for extracting the embedding \(\mathcal{W}(P)\) of text prompt \(P\). While for the noise prediction in \(\mathcal{G}^{\prime}\) that involves our optimization target \(P^{*}\), we adopt the similar design of prompt engineering as PEZ [23] to automate the optimization (a benefit of soft prompt) while making the resultant prompt more transferable (a benefit of hard prompt): We start from a continuous/soft embedding \(P^{*}_{\text{cont}}=[e_{1},\dots,e_{N}]\) composed of \(N\) tokens \(e_{i}\in\mathbb{R}^{d}\), followed by projecting \(P^{*}_{\text{cont}}\) into the corresponding discrete/hard embedding \(P^{*}_{\text{disc}}=\mathcal{F}(P^{*}_{\text{cont}})\) via a projection function \(\mathcal{F}\) (where each token in \(P^{*}_{\text{cont}}\) is mapped to its nearest vocabulary embedding). As a result, noise prediction in \(\mathcal{G}^{\prime}\) is now operated as \(\epsilon_{\theta^{\prime}}(z_{t},P^{*}_{\text{disc}},t)\), and the objective \(\mathcal{L}\) for our debugging process is defined as
\[\mathcal{L}=\left\|\epsilon_{\theta}(z_{t},\mathcal{W}(P),t)-\epsilon_{\theta^ {\prime}}(z_{t},P^{*}_{\text{disc}},t)\right\|_{2}^{2} \tag{3}\]
with noting that both noise predictors in \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) are kept fixed in such optimization.
It is also worth noting that, as projection function \(\mathcal{F}\) acts as a vector quantization operation and is non-differentiable, during the optimization procedure we follow the practice of PEZ [23] to directly update \(P^{*}_{\text{cont}}\) by the gradient of \(\mathcal{L}\) with respect to \(P^{*}_{\text{disc}}\), where \(P^{*}_{\text{cont}}=P^{*}_{\text{cont}}-\gamma\nabla_{P^{*}_{\text{disc}}} \mathcal{L}\). Last but not least, the resultant \(P^{*}_{\text{disc}}\) can be transformed into legible texts \(P^{*}\) by the off-the-shelf text decoder/tokenizer.
We experiment two variants for \(P^{*}_{\text{cont}}\): **P4D-\(N\)** and **P4D-\(K\)**, where the former initializes \(N\) tokens in \(P^{*}_{\text{cont}}\) from scratch via randomly drawing \(N\) vocabulary embeddings, while the latter inserts learnable tokens after every \(K\) tokens of \(\mathcal{W}(P)\) (i.e. the embedding of the original text prompt \(P\)). Basically, \(P^{*}_{\text{cont}}\) in P4D-\(N\) has fixed length of \(N\) which is independent from the length of \(\mathcal{W}(P)\), it would potentially be insufficient for debugging the images with complex content as the original prompt length are not taken into consideration. In comparison, the length of \(P^{*}_{\text{cont}}\) in P4D-\(K\) (and the number of trainable tokens being inserted) varies with the length of \(\mathcal{W}(P)\) thus alleviating the aforementioned concern in P4D-\(N\). Later in experiments, we observe that both P4D-\(N\) and P4D-\(K\) have the comparable debugging performance but the hard prompt found by P4D-\(K\) demonstrates better interpretability as the original prompt \(P\) is taken as its part.
## Experiments
**Dataset.** We evaluate the performance of our P4D on concept-related and object-related datasets. For concept-related dataset, we focus on Inappropriate Image Prompts (I2P) dataset [16], which encompasses various uncomfortable and inappropriate prompts (including hate, harassment, violence, self-harm, nudity contents, shocking images, and illegal activity). Specifically, nudity contents are most prohibitive due to privacy and respect considerations, we hence specifically set this concept aside for separate evaluation. On the other hand for the object-related datasets, we utilize the "car" and "French-horn" classes from ESD [1] for our evaluation (as ESD only offers finetuned weights for these two classes). Notably, the original French-horn dataset comprises merely 10 identical prompts with different evaluation seeds. We hence extend
Figure 2: An overview of our proposed Prompting4Debugging (P4D) framework, which employs prompt engineering techniques to red-team the text-to-image (T2I) diffusion model \(\mathcal{G}^{\prime}\) with safety mechanism (e.g. Stable Diffusion with negative prompts [13], SLD [16], and ESD [1]). With the help of an unconstrained T2I diffusion model \(\mathcal{G}\), our P4D optimize to find the safety-evasive prompts (i.e. \(P^{*}_{\text{cont}}\)) which can bypass the safety mechanism in \(\mathcal{G}^{\prime}\) and still lead to generation of inappropriate image concept/objects (e.g. nudity). Such optimization procedure is composed of three sequential steps, please refer to the section of proposed method for more detailed description.
the size of French-horn prompts from 10 to 305 by experimenting with a wider array of evaluation seeds.
In order to enhance the assessment of P4D's capabilities, we additionally filter the aforementioned datasets. We generate 3 images per prompt from the original dataset via diffusion models, where a prompt (or an image) is considered "unsafe" if any of the resultant images (or itself, respectively) contains the target inappropriate concept/objects. For the purpose of debugging and validating the reliability of safe prompts, our objective is to select **ideal prompts** that yield safe images (i.e. having no inappropriate content) through T2I diffusion models with safety mechanism while producing unsafe images through unconstrained T2I ones. The reasons are that: 1) if the T2I diffusion model with safety mechanism generates unsafe images through a given prompt, such prompt has already been considered as a problematic one; 2) if the unconstrained T2I diffusion model generates a safe image with a given prompt, such prompt is less useful to our evaluation as we need the unsafe prompts for our inspection on the safety mechanisms. Table 1 provides the size of the filtered dataset. For simplicity purposes, we abbreviate "unconstrained T2I diffusion models" and "T2I diffusion models with safety mechanism" to "standard T2I models" and "safe T2I models" respectively.
**Standard T2I and safe T2I models.** In our experiments, we adopt the typical Stable Diffusion [12] (denoted as **standard SD**) for our standard T2I model, while using **ESD**[1], **SLD**[17] (where we adopt two superior variants of SLD, i.e. **SLD-MAX** and **SLD-STRONG**, provided in their release code), and SD with negative prompts [12] (denoted as **SD-NEG**) for our safe T2I models. For standard SD, ESD, and SLD, we apply the Stable Diffusion v1-4 model backbone, while for SD-NEG, we use the Stable Diffusion v2-0 model backbone. When generating an image from any of the aforementioned T2I models, the number of inference steps is set to 25 and the setting of random seed aligns with the used dataset, where guidance scale is set to 7.5 if not specified in the dataset.
**Implementation details.** We set \(N=16\) and \(K=3\) respectively for our P4D-\(N\) and P4D-\(K\). Please note that in \(P_{\text{cont}}^{s}\) of P4D-\(K\) only the inserted tokens are trainable while the other tokens from \(\mathcal{W}(P)\) are kept untouched. We set the batch size to 1, learning rate to 0.1, weight decay to 0.1, and use AdamW [16] as the optimizer. All the prompts \(P_{\text{cont}}^{s}\) are optimized with 3000 gradient update steps. We measure the optimized prompts every 50 steps and update the optimal prompts based on the cosine similarity between the generated \(x^{*}\) and original \(x\) images.
**Baselines.** To the best of our knowledge, there are currently no automated tools available for red-teaming T2I diffusion models. As a result, we propose three distinct baselines, namely **Random-\(N\)**, **Random-\(K\)**, and **Shuffling**. Random-\(N\) is analogous to P4D-\(N\) where \(N\) vocabulary embeddings are randomly drawn to be the input prompt for safe T2I models, but without any further optimization being performed. Similarly, Random-\(K\) is analogous to P4D-\(K\) (i.e. inserting random vocabulary embeddings after every \(K\) tokens in \(\mathcal{W}(P)\)) but excludes further optimization. Furthermore, as some research works in the natural language field have discovered that shuffling the word order in a sentence can make ChatGPT [20] generate inappropriate responses, we therefore introduce similar approach to construct our Shuffling baseline, which involves randomly permuting the words in the prompt \(P\).
**Evaluation.** During the evaluation process, we generate 3 images for every prompt (either the original prompts in the dataset or the ones produced by the baselines/P4D) by safe T2I models, where a prompt is considered problematic if any of the resulting images is identified as "unsafe" by the classifiers or detectors described below. For categories in the concept-related I2P dataset, we employ the NudeNet image classifier [1] with an unsafe percentage threshold of 0.45 to detect nudity content, while adopting the Q16 classifier [17] for identifying the other categories in I2P. For object-related categories (i.e. car and French-horn), we utilize YOLO v5 vehicle detector [1] with the confidence threshold at 0.7 for identifying cars, while employing a pre-trained ResNet-18 classifier [10] from the Imagenette dataset [12] for detecting French-horns. Although these classifiers and detectors are not error-free, they act as a low-cost third-party auditor, capable of evaluating our P4D and the baselines in a scalable and fair manner.
**Metric.** We report _failure rate_ (FR) in experimental results, showing how many problematic prompts are identified from the entire dataset. The higher FR indicates better debugging performance for red-teaming.
\(K\) demonstrate promising and comparable results across a range of safe T2I models and categories, indicating P4D-\(K\) preserves its prompt interpretability without compromising the debugging performance. Furthermore, we unify problematic prompts from P4D-\(N\) and P4D-\(K\) and obtain P4D-UNION, which significantly increases the failure rate across various safe T2I models and categories (either concept-related or object-related ones), indicating most problematic prompts found by P4D-\(N\) and P4D-\(K\) are not repeated. Notably, for the nudity category, as our P4D achieves the highest failure rate in ESD, in which it indicates that ESD originally (before our red-teaming) provides the most effective safety protection against nudity content among all safe T2I models. However, the finetuning-based concept-removal safety mechanism of ESD may only learn to disassociate certain concept-related words with the unsafe image content, but it may not be resistant to optimized prompts. On the other hand, guidance-based safe T2I models such as SLD and SD-NEGP, restrict the textual embedding space for P4D to explore as well as prevent the generation of particular concepts/objects with their text filters, resulting in a lower failure rate compared to ESD with P4D. We observe that deactivating these text filters during training encourages P4D to investigate a broader range of problematic prompts (i.e. larger explorable textual embedding space). We refer to this phenomenon as _"information obfuscation"_, which will be elaborated in the subsequent section.
### Ablation Studies and Extended Discussion
For the experiments used in the following studies, we focus on the nudity category unless otherwise specified.
**"Information Obfuscation" of Text Filters.** We delve into the phenomenon of a misleading sense of security caused by _"information obfuscation"_ while applying P4D to red
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Nudity} & All in I2P & Car & French-horn \\ \cline{2-7} & ESD & SLD-MAX & SLD-STRONG & SD-NEGP & SLD-MAX & ESD & ESD \\ \hline Random-\(N\) & 1.39\% & 11.27\% & 12.50\% & 4.31\% & 17.10\% & 6.60\% & 22.00\% \\ Random-\(K\) & 16.62\% & 28.43\% & 25.89\% & 16.27\% & 23.46\% & 25.47\% & 23.50\% \\ Shuffling & 13.85\% & 32.35\% & 23.21\% & 13.88\% & 25.61\% & 22.64\% & 23.50\% \\ \hline OURS (P4D-\(N\)) & 54.29\% & 27.94\% & 34.82\% & 27.75\% & 24.00\% & 42.86\% & 70.50\% \\ OURS (P4D-\(K\)) & 49.58\% & 42.16\% & 38.39\% & 21.53\% & 27.83\% & 36.26\% & 33.50\% \\ OURS (P4D-UNION) & 70.36\% & 57.35\% & 56.25\% & 44.02\% & 44.57\% & 59.34\% & 82.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative evaluation among various approaches for debugging performance, where the failure rate (FR) indicating the proportion of problematic prompts with respect to the overall amount of data is adopted as the evaluation metric.
Figure 3: Visualization of images generated by different prompts and T2I models. The images are generated using the displayed prompts (i.e. the sentence below the image) with the specified T2I models (i.e. indicated by the model name on top of the image). Problematic prompts found by our P4D are colored in dark red. Notably, P4D demonstrates the capability to jailbreak safe T2I models and create images containing specific target concepts or objects that should have been restricted by safe T2I models.
teaming the guidance-based safe T2I models (i.e. SLD and SD-NEGP). The detailed computation procedure for such safe T2I models is as follows: our trainable discrete prompt is firstly concatenated with the safety concept for SLD (or the negative prompt for SD-NEGP) before feeding it into the denoising model (i.e. the UNet for noise prediction); After denoising, the safety-oriented guidance for SLD (or the classifier-free guidance for SD-NEGP) is applied on the predicted noise prior to the loss calculation. This safety process functions as a meticulously controlled text filter, ensuring the protection of these safe T2I models. For the purpose of debugging, we have the option to selectively deactivate some components of the inspected model. We experiment with deactivating this safety filter during the P4D training phase while keeping it operational during inference (noting that the deactivation is done by excluding the concatenation with the safety concept and skipping the safety-oriented guidance for SLD, whole similar deactivation holds for SD-NEGP). The results are outlined in Table 3. Notably, when the safety filter is disabled during the debugging process, P4D becomes capable of identifying more problematic prompts. We hypothesize that the text filter actually obscures the search for optimized textual prompts (i.e. constraining the explorable textual embedding space), thereby leading to the failure of uncovering certain problematic prompts. However, the removal of the text filter eliminates such constraint on the embedding search space, thereby facilitating the identification of problematic prompts. This phenomenon draws parallels with the concept of "obfuscated gradients" of AI security as discussed in [1], where _"obfuscated gradients"_ foster a false sense of security in defenses against adversarial examples. Similarly, in our study, the text filter induces a false sense of safety through "information obfuscation", as evidenced by the fact that removing this filter allows P4D to find more problematic prompts. Please also note that, due to such information obfuscation properties of SLD and SD-NEGP, in the following studies, we remove the text filter when optimizing the prompts for SLD and SD-NEGP, for more efficient computation.
**Prompt Length.** We further conduct investigation upon the number of tokens in a prompt (i.e. prompt length) for optimization. For P4D-\(N\), we test three different values of \(N\): 8, 16 (default), and 32. For P4D-\(K\), we also test inserting a learnable token every 1, 3 (default), and 5 tokens in the embedding \(\mathcal{W}(P)\) of the original input prompt \(P\). From Table 4, we can observe that there is no optimal prompt length in either P4D-\(N\) or P4D-\(K\). We argue that a complex scenario requires a longer prompt for description, whereas simpler scenarios can be adequately described with shorter prompts. Consequently, we recommend aggregating/unioning the problematic prompts found by using various settings of length for more efficient red-teaming.
**Text and Image Similarity.** We calculate cosine similarities for both original and optimized prompts, as well as their generated images. In a nutshell, we suggest T2I safety research should jointly safeguard the text and image domains. Please refer to appendixs for details.
**Prompt Generalizability.** We accumulate all non-repeated problematic prompts (while selecting the prompt with the highest toxicity score if repeated) found by P4D across all safe T2I models (e.g. ESD, SLD, and SD-NEGP) as another dataset/collection to test the generalizability of these problematic prompts across different safe T2I models. As shown in Table 5, over 50% prompts found by P4D are able to red-team multiple safe T2I models at the same time. Moreover, we report the failure rate of universal problematic prompts that are able to red-team all the safe T2I models simultaneously, which we term the _"intersection"_. We can observe that over 30% problematic prompts found in both P4D-\(N\) and P4D-\(K\) are robust and general enough to red-team across all safe T2I models simultaneously.
## Conclusion
In this paper, we propose an automated red-teaming debugging tool called P4D to unveil unprecedented weaknesses of several safety mechanisms used in T2I diffusion models. P4D proactively finds problematic prompts that may lead to
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{P4D-\(N\)} \\ \cline{2-4} & 8 & 16 & 32 & _Union_ \\ \hline ESD & 54.85\% & 54.02\% & 59.00\% & 78.67\% \\ SLD-MAX & 35.29\% & 44.12\% & 38.24\% & 67.16\% \\ SLD-STRONG & 47.32\% & 50.89\% & 45.54\% & 78.57\% \\ SD-NEGP & 36.84\% & 30.14\% & 34.45\% & 61.72\% \\ \hline
**P4D-\(K\)** & \multicolumn{3}{c}{\(K\)} \\ \cline{2-4} Safe T2I & 1 & 3 & 5 & _Union_ \\ \hline ESD & 52.63\% & 49.58\% & 49.31\% & 74.52\% \\ SLD-MAX & 38.73\% & 42.16\% & 40.69\% & 69.12\% \\ SLD-STRONG & 40.18\% & 42.86\% & 50.00\% & 72.32\% \\ SD-NEGP & 32.06\% & 33.97\% & 32.06\% & 61.24\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study for prompt length.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Safe T2I} & \multicolumn{3}{c}{P4D-\(N\)} & \multicolumn{3}{c}{P4D-\(K\)} \\ \cline{2-5} & w/ TF & w/o TF & w/ TF & w/o TF \\ \hline SLD-MAX & 27.94\% & 44.12\% & 42.16\% & 42.16\% \\ SLD-STRONG & 34.82\% & 50.89\% & 38.39\% & 42.86\% \\ SD-NEGP & 27.75\% & 30.14\% & 21.53\% & 33.97\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Percentage of problematic prompts (i.e. failure rate) found for SLD and SD-NEGP with and without the safety text filter (w/ or w/o TF) in nudity category of I2P dataset.
\begin{table}
\begin{tabular}{l l c c} \hline \hline & \multicolumn{3}{c}{P4D-\(N\)} & \multicolumn{3}{c}{P4D-\(K\)} \\ \cline{2-4} & Data size & 405 & 380 \\ \hline \multirow{4}{*}{Failure rate (FR,\%)} & ESD & 61.23\% & 64.64\% \\ & SLD-MAX & 89.14\% & 83.37\% \\ \cline{1-1} & SLD-STRONG & 90.37\% & 91.02\% \\ \cline{1-1} & SD-NEGP & 54.81\% & 54.35\% \\ \cline{1-1} \cline{2-4} & _Intersection_ & 37.28\% & 31.93\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation upon prompt generalizability. We create a collection of the problematic prompts discovered by P4D across all safe T2I models, and assess such collection using each safe T2I model. _Intersection_ refers to the percentage of universal problematic prompts that are able to red-team all safe T2I models simultaneously.
inappropriate (e.g. nudity or violent) images that bypass deployed safety mechanisms. Our extensive experimental results demonstrate the effectiveness of P4D for debugging, providing developers with a red-teaming tool to safeguard and test the reliability of safe T2I diffusion models.
|
2307.16852 | Learning When to Say Goodbye: What Should be the Shelf Life of an
Indicator of Compromise? | Indicators of Compromise (IOCs), such as IP addresses, file hashes, and
domain names associated with known malware or attacks, are cornerstones of
cybersecurity, serving to identify malicious activity on a network. In this
work, we leverage real data to compare different parameterizations of IOC aging
models. Our dataset comprises traffic at a real environment for more than 1
year. Among our trace-driven findings, we determine thresholds for the ratio
between miss over monitoring costs such that the system benefits from storing
IOCs for a finite time-to-live (TTL) before eviction. To the best of our
knowledge, this is the first real world evaluation of thresholds related to IOC
aging, paving the way towards realistic IOC decaying models. | Breno Tostes, Leonardo Ventura, Enrico Lovat, Matheus Martins, Daniel Sadoc Menasché | 2023-07-31T17:11:48Z | http://arxiv.org/abs/2307.16852v1 | # Learning When to Say Goodbye: What Should be the Shelf Life of an Indicator of Compromise?
###### Abstract
Indicators of Compromise (IOCs), such as IP addresses, file hashes, and domain names associated with known malware or attacks, are cornerstones of cybersecurity, serving to identify malicious activity on a network. In this work, we leverage real data to compare different parameterizations of IOC aging models. Our dataset comprises traffic at a real environment for more than 1 year. Among our trace-driven findings, we determine thresholds for the ratio between miss over monitoring costs such that the system benefits from storing IOCs for a finite time-to-live (TTL) before eviction. To the best of our knowledge, this is the first real world evaluation of thresholds related to IOC aging, paving the way towards realistic IOC decaying models.
Threat intelligence; data science; modeling and analysis.
## I Introduction
Indicators of Compromise (IOCs), such as IP addresses of compromised hosts, hashes of malware and bodies of emails of phishing campaigns, are the foundation of cyber threat intelligence (TI). They serve as signatures of risk, being employed in monitoring systems to generate alerts if a match is found between known IOCs and data collected at a given environment. In essence, the larger the IOC base, the greater is the coverage against previously observed cyber-attacks. However, such coverage is associated with its own costs [1].
The security of the target environment could be at risk if the number of monitored IOCs is limited, as it may lead to the omission of crucial indicators. On the other hand, maintaining too many IOCs is prohibitive due to intrinsic costs of investigating a large catalog of potential incidents [2]. Understanding the dynamics of IOC creation and sightings is crucial for addressing the challenges of maintaining and using IOCs, and for developing effective strategies for monitoring and responding to cyber threats.
Figure 1 shows the typical dynamics of IOC creation and sightings. An IOC is typically discovered at a vendor or TI source, e.g., in a controlled lab environment or through a honeypot. Then, the IOC is created and published by that vendor, and is also propagated to sharing platforms, such as the Malware Information Sharing Platform (MISP). MISP is a distributed system comprising multiple instances, run by different organizations and communities, and allowing them to benefit from the collective knowledge. Finally, Security Information and Event Management (SIEM) systems monitor the IOCs, and eventually report sightings for the monitored IOCs at the Security Operation Centers (SOCs).
Over time, IOCs lose relevance and their monitoring leads to costs due to outdated information. Indeed, numerous false positives challenges cognitive limitations of SOC employees, and builds monetary costs that vary depending on monitoring price models. Azure Sentinel Threat Intelligence, for instance, offers two pricing alternatives: Capacity Reservations and PayAs-You-Go. In the latter, the current cost is of $2.46 per GB-ingested. If the presence of IOCs is used to pre-filter data to be fed to Azure, the larger the number of IOCs being monitored, the larger the incurred costs [3].
To cope with the aging of IOCs, threat intelligence platforms, such as MISP, have introduced models to determine when an IOC should no longer be monitored. Those models, referred to as aging models (or decaying models), however, have a number of parameters whose assignment poses its own challenges, thus motivating explainable models with simpler and fewer parameters [4]. In particular, we focus on a model with two main parameters, namely the missing and monitoring costs, related to the impact of missing a sighting and to the attention span required to handle alarms, respectively. Our study aims to answer two key questions: How long should a certain indicator be monitored for, and what is the optimal aging model parameterization for a given environment?
To answer the above questions, we leverage real data to compare different parameterizations of IOC aging models. Our dataset comprises traffic from a real-world enterprise environment spanning over a year. It contains for each IOC the instants at which sightings occurred (see Figure 2). Those sightings are used to parametrize aging models.
In the MISP terminology, an event refers to a collection of related IOCs. A given IOC may be part of multiple events, but each sighting corresponds to a single event. In summary, our dataset contains, for each sighting towards each IOC, its 1) timestamp, 2) IOC anonymized identifier, 3) IOC type,
Figure 1: Ecosystem of vendors, MISP instances and SIEMs sharing IOCs and their sightings.
e.g., domain, IP source, IP destination, email source, email subject, md5, sha1, sha256, filename, hostname or URL, 4) IOC creation date, and 5) event identifier. Note that data is anonymized, so the event is characterized by a non-informative identifier, that serves solely to determine how sightings are related to each other through their corresponding events.
Using our dataset, we characterize how the number of sightings towards IOCs varies over time, as a function of IOC types, and assess the impact of aging model parameters on coverage and corresponding monitoring costs [5]. The _hit ratio_ corresponds to the fraction of IOC sightings that occur while the corresponding IOC is being monitored. A sighting to an unmonitored IOC is said to be uncovered, contributing towards its _miss ratio_. Correspondingly, the _monitoring cost_ at any given point in time is proportional to the number of IOCs that are being monitored at that time.
**Contributions.** Our key contributions are twofold.
**Formulation of the TTL optimization problem.** To each IOC we associate a corresponding TTL. The TTL is initialized at a constant value, and is decremented at every time unit. When TTL reaches zero, the corresponding IOC monitoring is discontinued. Such a TTL decaying model has a number of different flavors [6, 7, 8]. Under TTL with reset, the TTL is reset to its initial value whenever a sighting occurs. Under TTL without reset, in contrast, sightings do not impact the TTL dynamics. In any case, note that the TTL dynamics are decoupled across multiple IOCs. We let \(T\) denote the initial TTL value, and show how \(T\) impacts monitoring costs and miss ratio, under TTL with and without reset.
**Trace driven findings.** Among our trace-driven findings, we discover that if the cost of missing a sighting is below 2,152 times the daily cost of monitoring an IOC, it is not worth incurring the monitoring costs for any IOC. Conversely, if the cost of a miss is beyond \(10.5\) million dollars, all IOCs should be constantly monitored, assuming a unitary daily dollar cost for monitoring an IOC. For values inbetween those two thresholds, the system benefits from storing IOCs for a finite time-to-live (TTL), which can be set according to the IOC category. For instance, if the TTL is set to 248 days then the sum of miss costs and monitoring costs is minimized when the cost of missing a sighting equals 10,000 times the daily cost of monitoring an IOC. To the best of our knowledge, this is the first real world evaluation of IOC aging thresholds.
**Outline.** The remainder of this paper is organized as follows. In Section II we introduce our trace. Then, Sections III and IV report our trace-driven findings. Section V introduces a utility optimization approach to set TTLs and reports our model-driven findings. Finally, Section VI concludes.
## II Dataset Description
Our trace was collected from a SOC of a large scale company, comprising more than 300,000 employees and gathering data from more than 12 countries, and contains 5,789 IOCs with at least one sighting. The first and last IOC creation dates occurred at 9/9/2018 and 9/2/2020.1 The last sighting occurred at the same date at which the last IOC was created. Figure 3 shows the cumulative fraction of IOCs created during the interval of our trace, and the cumulative fraction of sightings issued across that interval. Among all sightings, 66% occurred during the first three months of our monitoring campaign, between 4/24/2019 and 7/29/2019 (see Table I).
Footnote 1: For the considered company, sensitive information becomes unclassified after 2 years, as far as it is anonymized.
It is worth noting that some IOCs may have sightings before their corresponding creation dates. This is because once an IOC is created, one may, in retrospect, detect occurrences of the IOC at system logs. Among the IOCs with sightings in our trace, 9% (530/5,789) contain sightings one day before their creation dates (see Figure 4). In Appendix A we provide a more detailed statistical characterization of time between sightings and creation dates, leveraging survival analysis for this purpose.
**Motivating question 1.**_How are IOCs related to each other?_
In our trace, each IOC belongs to one of the eleven types shown in Figure 5. Most of the observed indicators and IOCs are associated with domains. IPs also contribute to a considerable proportion of sightings, whereas hashes have the second-highest number of IOCs.
Recall that each sighting to an IOC belongs to an event, where an event consists of a collection of sightings to related IOCs over time. Despite the fact that the IOCs and events in our trace were anonymized, we are still able to correlate sighting categories through the corresponding events. To that aim, in Figure 6 each cell corresponds to the percentage of sightings pertaining to the row's category that appear in events
Figure 3: Cumulative fraction of IOCs created and sightings
Figure 2: Dynamics of IOC creations and sightings.
that also present IOCs from the column's category. From this heat map, we can highlight two clusters.
The first cluster, in the bottom right corner, comprises MD5, SHA-1 and SHA-256 categories, in which we observe that whenever we have an indicator's MD5 hash, it is accompanied 33% of the time by its SHA-1 hash and, 46% of the time, by its SHA-256 hash. Indeed, it is typical to share different hash values for a given malware or for its variants. The second cluster, in the top left, suggests that domain names and IP addresses also tend to be shared through common events.
**Finding 1**.: _Hashes are typically sighted in bundles, and the same occurs for IPs and domain names. The decaying models considered in this work can be used either to capture the aging of isolated IOCs or bundles of IOCs._
## III Deterministic bounds and outliers
**Motivating question 2**.: _What is a naive upper bound on TTL?_
We begin by estimating an upper bound on the TTL value to cover all sightings for all IOCs. Indeed, a conservative approach towards IOC monitoring consists in setting TTL to a large enough value that, in retrospect, would have covered all sightings.
In our trace, the largest gap between the first and last sightings towards an IOC equals 724 days. Following a conservative monitoring strategy, i.e., \(T=724\), and assuming an extension towards an infinite trace wherein IOCs are created at a rate of \(\lambda\) IOCs per time unit, it follows from Little's law the expected number of IOCs to be monitored at any point in time, in steady state, equals \(724\cdot\lambda\). This estimate, however, is sensitive to outliers, motivating the use of statistical tools to parameterize TTL in order to determine when and if IOCs should be evicted.
**Finding 2**.: _Setting TTL to the largest gap between IOC creation date and IOC sighting serves as a conservative upper bound on the TTL value, but this bound is sensitive to outliers._
Whereas the above discussion accounted for a deterministic upper bound on the TTL, in what follows we consider a statistical perspective to account for outliers.
**Motivating question 3**.: _How to cope with outliers while trading between monitoring costs and misses?_
To cope with outliers and with the need for allowing a certain level of missed sightings, we consider statistical approaches to parametrize TTLs. In the simplest setting, we take as inputs the target hit ratio \(t\) (with corresponding miss
Figure 4: Distribution of first sighting minus creation date. Around 9% of the IOCs have their first sighting one day before creation date, and 43% at the same day.
Figure 5: Number of sightings and IOCs per type. Domains correspond to the vast majority of sightings and IOCs. IPs also yield a significant fraction of sightings, while hase the second-largest number of IOCs.
Figure 6: Correlation matrix of IOC categories, obtained from the relation between IOC sightings and events. Each cell corresponds to the percentage of sightings pertaining to the row’s category that appear in events that also have IOCs from the column’s category.
ratio \(1-t\)) and the cumulative distribution function (CDF) of the time between consecutive sightings, \(F(x)=P(X<x)\), where \(X\) is a sample from the distribution of the time between sightings. Then, we let \(T=F^{-1}(t)\).
For large values of \(t\), this model clearly degenerates to the simple deterministic bound discussed in the previous paragraph. Smaller values of \(t\) allow us to trade-off between coverage and monitoring costs. In our trace, to capture 90% of sightings towards IOCs related to emails, we must let \(T=38\) days. In this case, a 10% reduction in coverage corresponds to a 95% decrease (\(1-38/724=0.95\)) in monitoring costs.
**Finding 3**.: _By reducing TTL, a small decrease in coverage can yield a significant reduction in monitoring costs. This occurs in part due to outliers._
## IV Accounting for categories
**Motivating question 4**.: _What is the impact of categories on IOC lifetimes?_
In our trace we count with eleven IOC types: md5, sha1, sha256, ip-src, ip-dst, email-subject, email-dst, domain, hostname, filename and url. Each IOC is associated to exactly one type. Conditioning TTL values to IOC types allows us to reduce the impact of outliers, which skew the TTL values for the whole trace but may not impact certain categories.
The categories discussed above can be split or grouped. As an example, the eleven categories may be grouped into five coarser clusters (see Figure 7): hashes (md5, sha1, sha256), IPs (ip-src, ip-dst), email (email-subject, email-dst), host (domain, hostname, url) and filename. In what follows, we refer to those five clusters of categories simply as _categories_.
Figure 6(a) shows the cumulative distribution function (CDF) of the time between sightings, for the five considered categories, respectively. Hashes tend to linger longer than IPs, corresponding to larger times between sightings. Indeed, while IPs are dynamic and should eventually be white-listed, hashes tend to be more stable over time.
Figure 6(b) shows the CDF of time from IOC creation until first sighting, per category. For most categories, roughly 70% of the IOCs have their first sighting soon after the creation date. However, the remaining 30% of IOCs have their first sightings uniformly distributed throughout the period of observation. This means that for a significant fraction of IOCs the first sighting can take more than one year to occur.
**Finding 4**.: _There are at least two motivations to reduce TTL values: 1) reduce monitoring costs and 2) cope with IOCs that expire. The time between sightings for hashes tends to be larger than for IPs. This motivates setting TTLs for IPs to smaller values when compared against hashes, not only to reduce monitoring costs but also to cope with the fact that IP entries in blacklists expire due to their ephemeral nature._
## V Utilities and costs
**Motivating question 5**.: _How to quantitatively capture the monitoring and miss costs while setting TTL?_
Next, we consider the availability of information about monitoring costs and costs associated with missing a sighting, to determine the target hit ratio \(t\). Together, these costs can assist in the flexible monitoring of IOCs. Indeed, such costs, e.g., measured in dollars per time unit and dollars per missed sighting, respectively, can be used to establish a utility function to set TTL values in a principled utility-oriented fashion.
Introducing monetary costs may impose additional challenges, as determining such costs is non-trivial. However, monetary costs may help to convey the role of the IOC aging model in the considered organization, bridging the gap between IOC monitoring strategies and other elements of the business workflow. Monetary costs can be determined exogenously based on related literature or on information provided by certain products, such as Azure Sentinel Threat Intelligence (ASTI).
The ratio between monitoring and missing costs can also be estimated in an endogenous fashion, using data from the collected traces. Indeed, the trace of sightings implies that if the cost ratio is above a certain threshold, one should never monitor any IOC (no-monitoring extreme). At the other extreme of the spectrum, when the ratio is below a lower threshold all IOCs should be constantly monitored (always
Figure 7: Time between sightings and time until first sighting (measured in hours), over different IOC categories. Hashes tend to linger longer than IPs, showing larger times between sightings and times to first sightings.
monitoring extreme). Knowing such two thresholds, and understanding how the cost ratio impacts monitoring strategies, together with historical information about monitoring practices in a given business, provides insights on the current and prospective target cost ratios.
To find the two thresholds referred to in the above paragraph, we define TTLs ranging between 0 and the maximum interval between sightings (see Section III). For each TTL value we compute, in retrospect, the corresponding monitoring and missing costs. The monitoring cost is the number of days we monitor each IOC in our system multiplied by the cost of each day of monitoring. The missing cost is the number of missed sightings multiplied by the cost of each miss.
Let \(C\) be the total cost, and \(C_{\text{mon}}\) and \(C_{\text{miss}}\) be the monitoring and missed sighting costs, respectively. \(C_{\text{mon}}\) is the monitoring cost per IOC per day, and \(C_{\text{miss}}\) is the miss cost per missed sighting. Under the above simple model, the total cost is a linear function of the time that IOCs were monitored (accounting for IOCs with no sighting) and the number of missed sightings. Let \(I\) be the number of IOCs:
\[C(N_{\text{mon}},N_{\text{miss}};C_{\text{mon}},C_{\text{miss}})= \tag{1}\] \[=C_{\text{mon}}\sum_{i=1}^{I}N_{\text{mon}}^{(i)}+C_{\text{miss} }\sum_{i=1}^{I}N_{\text{miss}}^{(i)}\] (2) \[=C_{\text{mon}}N_{\text{mon}}+C_{\text{miss}}N_{\text{miss}} \tag{3}\]
where \(N_{\text{mon}}^{(i)}\) and \(N_{\text{miss}}^{(i)}\) are the monitoring time and number of missed sightings for the \(i\)-th IOC, and \(N_{\text{mon}}\) and \(N_{\text{miss}}\) are the corresponding quantities accounting for all IOCs. Note that \(N_{\text{mon}}\) and \(N_{\text{miss}}\) are functions of \(T\). Indeed, as many IOCs receive no sightings, we have
\[\begin{split}& N_{\text{mon}}^{(i)}=\\ &=\left\{\begin{array}{ll}T,&\text{if IOC $i$ has no sightings}\\ T+\Delta T^{(i)},&\text{otherwise}\end{array}\right.\end{split} \tag{4}\]
where \(\Delta T^{(i)}\) is the additional number of days at which IOC \(i\) is monitored beyond \(T\). Then,
\[N_{\text{mon}}=T\cdot I+\sum_{i=1}^{I}\Delta T^{(i)} \tag{5}\]
where \(\Delta T^{(i)}=0\) for TTL without reset, and can be positive for TTL with reset.
The dependency of \(N_{\text{mon}}\) and \(N_{\text{miss}}\) on \(T\) may be non-trivial, e.g., non-convex. In what follows, we denote the functions that capture such dependencies and that map \(T\) into \(N_{\text{mon}}\) and \(N_{\text{miss}}\) as \(g(\cdot)\) and \(h(\cdot)\),
\[N_{\text{mon}} =g(T) \tag{6}\] \[N_{\text{miss}} =h(T) \tag{7}\]
We first provide some back-of-the-envelope calculations to get some insights on how cost impacts the optimal TTL, and the proceed with a trace-driven exploration to 1) determine the best TTL value, given the costs of monitoring and missing and 2) search for the two cost ratios that correspond to the extremal thresholds discussed above.
**Finding 5**.: _The impact of monitoring and miss costs can be captured through an utility function consisting of the sum of two costs, which vary in a non-linear fashion with respect to TTL values. To deal with such non-linearity, one alternative is to approach the problem of finding the optimal TTL through a trace-driven perspective._
### _Back-of-the-envelope calculations_
**Motivating question 6**.: _Is it feasible to get rough estimates of recommended TTL values without leveraging the full trace of sightings in its details?_
Next, we aim at providing initial insights on the proposed cost model, leveraging data from our traces. We begin by revisiting Equation (5). In particular, we consider a simple workload model wherein sightings to IOC \(i\) arrive according to a Poisson process with rate \(\lambda_{i}\). Then, under TTL with reset, the probability that the TTL is not reset during an interval of \(T\) seconds is given by \(\exp(-\lambda_{i}T)\), which corresponds to the probability that no sighting arrives during that interval. In that case, assuming that every reset increases the monitoring period by \(T/2\), we have
\[\Delta T^{(i)}=Te^{\lambda_{i}T}/2. \tag{8}\]
According to the above model, the cost of monitoring increases exponentially with \(T\). However, our trace-driven analysis shows that in practice, the monitoring cost increases linearly as \(T\) increases. This is because we observed that sightings of a particular IOC tend to occur in bursts, meaning that many sightings occur on the same day. To estimate the arrival rate of sightings that contribute to resets, we count all sightings in a burst as a single burst arrival. We found that the average number of daily bursts of sightings per IOC is 2.72 (as shown in the last line of Table I), which is quite small compared to the overall trace duration. As a result, \(\lambda_{i}\) approaches zero, causing the monitoring cost to increase roughly linearly with respect to \(T\) in the scenarios examined in the following section.
Next, we provide rough estimates of two thresholds on the ratio between monitoring costs and missed sighting costs, according to which it is beneficial never to monitor any IOC or always to monitor all IOCs. Let
\[R=\frac{C_{\text{mon}}}{C_{\text{miss}}}. \tag{9}\]
For the first threshold, that we refer to as \(R_{U}\), we have that
\[R\geq R_{U}\Rightarrow T^{\star}=0 \tag{10}\]
where \(T^{\star}\) is the optimal threshold. Similarly, for the second threshold, referred to as \(R_{L}\),
\[R\leq R_{L}\Rightarrow T^{\star}=\tilde{T} \tag{11}\]
where \(\tilde{T}\) is the maximum admitted threshold.
In the following section, we formally pose optimization problems that yield the above thresholds, and proceed with a
trace-driven simulation to determine \(R_{L}\) and \(R_{U}\). The results are shown in Table II. Alternatively, in what follows we provide some back-of-the-envelope heuristics to approximate those thresholds.
The total number of sightings in our trace is \(892,240\), spread over 724 days and across roughly 14 million IOCs, most of which having no sightings. Assuming a normalized unitary monitoring cost per IOC per day, the monitoring cost to monitor a single IOC during the interval of interest will be of 724 normalized monetary units. The gain, in contrast, will be on average of \(C_{\text{miss}}\cdot 892,240/(14\cdot 10^{6})\), assuming that the probability that the monitored IOC will be sighted is proportional to the fraction of IOCs that receive at least one sighting. In particular, in our trace we have an average of \(892,240/5,789\) sightings per IOC, conditioned on IOCs that have at least one sighting, and a fraction of \(5,789/(14\cdot 10^{6})\) IOCs that are sighted. Multiplying the two quantities, we obtain the expected number of sightings covered by an IOC. Therefore, it is worth monitoring at least one IOC per day if \(C_{\text{mon}}\cdot 724<C_{\text{miss}}\cdot 892,240/(14\cdot 10^{6})\). Letting \(C_{\text{mon}}=1\), if \(1/C_{\text{miss}}>892,240/(14\cdot 10^{6}\cdot 724)=1/11,363\) it is not worth monitoring even one IOC per day. As shown in Table II, indeed such ballpark value is on the same order of magnitude of 1/2,115 that we found in our trace-driven evaluations as the cost ratio above which monitoring is not worthy.
On the other extreme of the spectrum, if the cost of a miss at a given day, \(C_{\text{miss}}\), is larger than the cost of monitoring all IOCs during that day, one should always monitor all IOCs. Given that we have roughly \(14\) million IOCs, this amounts to always monitoring if \(C_{\text{miss}}>14\cdot 10^{6}\). Again, the order of magnitude is in agreement with the results presented in Table II. The difference between the values in Table II and the above back-of-the-envelope calculations are due to a number of factors, including 1) the fact that IOCs are created in a non-uniform fashion over the trace and 2) sightings tend to occur in bursts. To capture those details, we perform a trace-driven evaluation as further discussed in the next section.
**Finding 6**.: _The ratio between monitoring costs and miss costs plays a key role in determining when it is optimal to never monitor any IOC, or to always monitor all IOCs. Those two strategies are optimal when the miss cost is on the order of thousands and millions, respectively, when compared against unitary monitoring costs._
### _Model and optimization problem_
**Motivating question 7**.: _How does a detailed trace analysis compare against back of the envelope calculations to determine optimal TTL values?_
The optimization problem corresponding to the optimal TTL estimation is given as follows:
\[T^{*}: \text{Argmin}_{T} C(N_{\text{mon}},N_{\text{miss}};C_{\text{mon}},C_{\text{ miss}})\] \[\text{Subject to} N_{\text{mon}}=g(T)\] \[N_{\text{miss}}=h(T)\]
Recall that \(R=C_{\text{mon}}/C_{\text{miss}}\) (see Equation 9). Then,
\[\text{Argmin}_{T}C(N_{\text{mon}},N_{\text{miss}};C_{\text{mon}},C _{\text{miss}})=\] \[=\text{Argmin}_{T}R\cdot g(T)+h(T) \tag{12}\]
In Appendix B we specialize the above optimization problem to determine \(R_{U}\) and \(R_{L}\), i.e., the thresholds beyond which IOCs should never be monitored, or should be always monitored, respectively.
Note that in the above formulation we assumed that functions \(g(\cdot)\) and \(h(\cdot)\) are obtained from traces. Alternatively, approximating those functions through simple expressions may be instrumental to express the solutions to the above problems in closed-form, which we leave as subject for future work.
Table II shows the values of \(R_{L}\) and \(R_{U}\) obtained as solutions of the above optimization problems. Leveraging our traces, we learn that if the cost of a miss is on the order of millions when compared to the monitoring cost per IOC per day, it is worth always monitoring all IOCs. Alternatively, if the miss cost is around 2,000 times the monitoring cost per IOC per day, it is not worth monitoring any IOC. For intermediary cost values, Figure 8 shows how the cost varies as a function of TTL.
Figure 8 shows how the monitoring cost, \(C_{\text{mon}}N_{\text{mon}}\), miss cost, \(C_{\text{miss}}N_{\text{miss}}\), and total cost, \(C\), varies as a function of TTL, \(T\). We let \(C_{\text{mon}}=1\) USD and \(C_{\text{miss}}=10,000\) USD, i.e., \(R=1/10,000\). The curve is obtained using TTL with reset. Note that the monitoring cost grows roughly linearly with respect to \(T\), as previously discussed under our back-of-the-envelope calculations. In addition, note that the miss cost decreases with respect to \(T\), showing a steep decrease around \(T=200\). Finally, the optimal TTL equals 248, and occurs around the point where the cost curves corresponding to monitoring and miss costs cross each other.
We repeat the above methodology for \(C_{\text{miss}}\) varying between 1 and \(10^{8}\), keeping \(C_{\text{mon}}=1\). The results are reported in Figure 9, that shows how the best \(T\) varies as a function of the ratio \(R^{-1}=C_{\text{miss}}/C_{\text{mon}}\). The figure shows that as \(C_{\text{miss}}\) increases from 10,000 to 100,000, the optimal TTL value rapidly grows from 248 (as discussed in the previous paragraph) to 500 (as shown in the inside zoom box in Figure 9). The maximum TTL value is 724 days, which corresponds to the trace duration, and is reached when the miss cost reaches roughly 10 million times the monitoring costs (as previously reported in Table II).
**Finding 7**.: _The trace-driven analysis is in agreement with the back-of-the-envelope calculations, and further allows us to assess the optimal TTL values in the range between 0 and 724 days when the miss costs vary from the order of thousands to the order of million times the normalized monitoring costs._
## VI Conclusion
Platforms for information sharing are key elements of the cyber-security ecosystem [9, 10, 11]. By facilitating the exchange of IOCs and sightings among stakeholders, such platforms allow SOC employees to prioritize which IOCs must be monitored and which alerts require immediate attention. Despite the tremendous success of platforms of that sort, such as MISP, and the proposed features to capture the aging of IOCs, their users are still challenged by the parametrization of the aging models. In particular, such platforms allow users to share sightings, but sensitive information about how those sightings impact monitoring is typically kept private.
In this work, we provide initial insights obtained from a flow of sightings towards IOCs in a real world environment. By leveraging a trace of sightings towards IOCs, we consider the problem of determining optimal schedules for the monitoring of IOCs. While considering the costs of monitoring IOCs and of missed sightings, we provide a methodology to determine lower and upper bounds (\(R_{L}\) and \(R_{U}\), respectively) on the ratio between such costs so that IOCs should be monitored forever or not monitored at all. Such ratio, together with traces collected from a SOC, allow operators to tune TTL values according to their needs. We illustrate the methodology using a trace collected from a real world environment, and envision that the methodology is applicable across different businesses.
This work opens up a number of avenues related to IOC decaying models. First, we simplified the parameterization of an IOC decaying model, mapping it into the assessment of the ratio between miss costs and monitoring costs, which we believe is close to the reality of operators. Further assessing how that ratio varies across environments is left as subject for future work. Second, we provided a trace-driven solution to the problem of determining the best TTL. Analytical solutions, e.g., leveraging special properties on how the number of misses or the monitoring costs vary as a function of the TTL, are also left as subject for future work.
**Acknowledgment:** This work was supported in part by SIEMENS, CAPES, CNPq, and FAPERJ under Grants 315110/2020-1, E-26/211.144/2019, and E-26/201.376/2021.
|
2309.17038 | Cost Reduction on Testing Evolving Cancer Registry System | The Cancer Registration Support System (CaReSS), built by the Cancer Registry
of Norway (CRN), is a complex real-world socio-technical software system that
undergoes continuous evolution in its implementation. Consequently, continuous
testing of CaReSS with automated testing tools is needed such that its
dependability is always ensured. Towards automated testing of a key software
subsystem of CaReSS, i.e., GURI, we present a real-world application of an
extension to the open-source tool EvoMaster, which automatically generates test
cases with evolutionary algorithms. We named the extension EvoClass, which
enhances EvoMaster with a machine learning classifier to reduce the overall
testing cost. This is imperative since testing with EvoMaster involves sending
many requests to GURI deployed in different environments, including the
production environment, whose performance and functionality could potentially
be affected by many requests. The machine learning classifier of EvoClass can
predict whether a request generated by EvoMaster will be executed successfully
or not; if not, the classifier filters out such requests, consequently reducing
the number of requests to be executed on GURI. We evaluated EvoClass on ten
GURI versions over four years in three environments: development, testing, and
production. Results showed that EvoClass can significantly reduce the testing
cost of evolving GURI without reducing testing effectiveness (measured as rule
coverage) across all three environments, as compared to the default EvoMaster.
Overall, EvoClass achieved ~31% of overall cost reduction. Finally, we report
our experiences and lessons learned that are equally valuable for researchers
and practitioners. | Erblin Isaku, Hassan Sartaj, Christoph Laaber, Tao Yue, Shaukat Ali, Thomas Schwitalla, Jan F. Nygård | 2023-09-29T07:56:23Z | http://arxiv.org/abs/2309.17038v1 | # Cost Reduction on Testing Evolving Cancer Registry System
###### Abstract
The Cancer Registration Support System (CaRSS), built by the Cancer Registry of Norway (CRN), is a complex real-world socio-technical software system that undergoes continuous evolution in its implementation. Consequently, continuous testing of CaRSS with automated testing tools is needed such that its dependability is always ensured. Towards automated testing of a key software subsystem of CaRSS, i.e., GURI, we present a real-world application of an extension to the open-source tool EvoMaster, which automatically generates test cases with evolutionary algorithms. We named the extension _EvoClass_, which enhances EvoMaster with a machine learning classifier to reduce the overall testing cost. This is imperative since testing with EvoMaster involves sending many requests to GURI deployed in different environments, including the production environment, whose performance and functionality could potentially be affected by many requests. The machine learning classifier of _EvoClass_ can predict whether a request generated by EvoMaster will be executed successfully or not; if not, the classifier filters out such requests, consequently reducing the number of requests to be executed on GURI. We evaluated _EvoClass_ on ten GURI versions over four years in three environments: development, testing, and production. Results showed that _EvoClass_ can significantly reduce the testing cost of evolving GURI without reducing testing effectiveness (measured as rule coverage) across all three environments, as compared to the default EvoMaster. Overall, _EvoClass_ achieved \(\approx\)31% of overall cost reduction. Finally, we report our experiences and lessons learned that are equally valuable for researchers and practitioners.
Software Evolution, Testing, Machine Learning
## I Introduction
Mandated by the Norwegian government, the Cancer Registry of Norway (CRN) gathers data about all cancer types occurring in the Norwegian population and performs tasks such as producing statistics for policymakers and supporting research by providing relevant data to researchers and other stakeholders. Key functionalities of these tasks are supported by a socio-technical software system named Cancer Registration Support System (CaReSS). Naturally, CaReSS experiences continuous evolution due to many reasons, including software updates, changes in legislation, and new medical standards emerging related to cancers. As a result, CaReSS shall be tested continuously with automated testing tools to ensure that, at any given time, it doesn't produce incorrect data and statistics.
To perform cost-effective testing of evolving CaReSS, we present our real-world application together with experiences of testing ten different versions of a key component of CaReSS- called GURI. The GURI software system collects and aggregates heterogeneous data coming to CaReSS, e.g., from hospitals and labs. Next, GURI performs validation and aggregation on data with implemented rules that constantly change. Our main objective was to reduce the overall cost of testing GURI by reducing the number of requests that a testing tool needs to make to GURI, while at the same time not compromising the testing effectiveness, measured as rule coverage in our context.
In this work, we rely on a well-known, open-source, AI-enabled, and system-level testing tool called EvoMaster [1]. EvoMaster automates testing through Representational State Transfer (REST) Application Programming Interfaces (APIs) with search algorithms. Since GURI exposes REST APIs, it is natural for us to select a tool that can automate testing through REST APIs. Since EvoMaster generates test cases with many requests to GURI through REST APIs, significantly increasing interactions with GURI, which thereby incurs significant costs on test execution and potentially impacts the performance of GURI. To reduce the costs incurred by many requests, we train a machine learning (ML) classifier that can predict whether a particular request is likely to fail during its execution; if so, the classifier rejects such requests. As a result, EvoMaster empowered with the ML classifier can focus on successful requests. This extension to EvoMaster is named _EvoClass_.
To assess the cost-effectiveness of _EvoClass_, we tested ten GURI versions, which were naturally formed over four years of its evolution under three environments, i.e., development, testing, and production. Results show that _EvoClass_ can significantly reduce testing cost (\(\approx\)31%) compared to the default EvoMaster, while not reducing testing effectiveness in terms of rule coverage. Based on testing GURI, we provide a set of lessons learned that are valuable for practitioners
and researchers focusing on testing similar kinds of software systems.
We organize the paper as follows. The background is given in Section II; the proposed approach is described in Section III; empirical evaluation is presented in Section IV; experiences and lessons learned are discussed in Section V; the related work section is described in Section VI; and the paper concludes in Section VII.
## II Background
### _Real-World Context and Challenges_
The Cancer Registry of Norway (CRN) is a public organization gathering data and statistics about cancer patients, e.g., diagnostic details, treatment records, and follow-up information. Such data are made available to various end users, including researchers, patients, doctors, and healthcare authorities. CRN has developed an interactive decision support system named Cancer Registration Support System (CaReSS) to ensure the accuracy of the data and statistics being released to end users [2]. CaReSS, as a rule-based system, undergoes continuous evolution as rules (e.g., a cancer diagnosis date cannot be before the patient's birth date) are added, modified, or removed due to new treatments, improved diagnostics, advances in medical findings, and updated diagnostic standards [3]. One component of CaReSS, named GURI, automatically validates and aggregates collected data against the rules implemented in it, defined by domain experts. Patients' data (i.e., cancer messages) are sent to GURI via a web application through REST APIs. Specifically, we focus on two REST endpoints: the validation endpoint, which validates cancer messages against validation rules, and the aggregation endpoint, responsible for consolidating cancer messages into cancer cases.
Testing GURI is crucial because incorrect or imprecise statistics can significantly influence research findings and decisions made by relevant stakeholders such as healthcare professionals and authorities [2]. One important goal of CRN is to enable automated, rigorous, and cost-effective testing of GURI. However, the continuous evolution of GURI, especially its implemented rule set, makes its testing very challenging. Moreover, each addition, deletion, or modification of a rule in a particular version of GURI is addressed in multiple environments. Initially, these rules are created in GURI's _development_ environment. Next, they are moved to the _test_ environment for testing them through CaReSS. Finally, after corrections (if any), these rules are moved to the _production_ environment as a part of CaReSS. Testing each version of GURI in the three different environments using an automated testing technique is costly. Thus, this work aims to reduce the effort in testing GURI in multiple environments and for different versions using ML techniques.
### _System Level Testing with EvoMaster_
EvoMaster [1] is an open-source tool that uses evolutionary algorithms for supporting system-level testing of enterprise-level web APIs developed using REST and GraphQL. It has been continuously developed for over six years with the addition of new functionalities [4]. It takes web APIs' schema in OpenAPI specification (OAS) or Swagger format and generates test cases in Java, Kotlin, and C#. EvoMaster generates test cases considering multiple objectives, such as fault detection and code coverage (for white-box testing). It also supports black-box testing that uses random and multi-objective search algorithms to optimize various objectives, e.g., success status codes (2XX).
## III Approach
We propose _EvoClass_, an ML-based approach to enhance EvoMaster for reducing test execution cost in the context of testing GURI. The underlying idea is to predict the success or failure of RESTful API requests generated by EvoMaster during test generation without actually executing them. With such prediction, our approach discards requests that are highly likely to lead to a desired status code (success or failure), which is not of our interest (i.e., failure in the context of this paper), thereby leading to reduced test execution cost.
Specifically, we trained and integrated Random Forest-an ML classifier (selected with an empirical evaluation, details in Section IV), to classify the generated test cases. Using the trained ML model, _EvoClass_ only executes requests predicted to be executed successfully (i.e., 200 status codes). This approach ensures meaningful responses from HTTP requests, focusing on testing the core functionality, i.e., validating cancer messages with validation rules and aggregating cancer messages into cancer cases with aggregation rules in our healthcare-specific domain rather than just input validation.
Figure 1 shows our approach's four components: data collection (Section III-A), data preprocessing (Section III-B), model training and optimization (Section III-C), and integration with EvoMaster (Section III-D). In the data collection phase 1, we use EvoMaster as a REST API testing tool to generate test cases automatically. The two API endpoints that undergo testing are related to the validation and aggregation of medical rules implemented in GURI and log all the data related to requests (e.g., generated inputs) and their respective responses (e.g., status code). The collected data is then refined 1 through the extraction, transformation, and cleaning steps to prepare it for further analysis and preprocessing. Data preprocessing 1 involves techniques such as feature extraction, construction, selection, and encoding to prepare the dataset for classification. The next step is about model training and optimization, where we split the dataset, use the Random Forest classifier as our model, and optimize its hyperparameters using the Optuna framework. Finally, we integrate the trained model into EvoMaster 1, enabling real-time prediction and selective execution of requests based on their predicted success 1.
### _Data Collection_
We use EvoMaster to generate test data, i.e., requests, to train the ML model. For training, we categorize generated requests based on the response status code retrieved from
the HTTP calls by following the RFC 9110 standard [5]. Specifically, a request is successful if the status code is 200, representing an "OK" response from the server. On the other hand, requests are categorized as failures if the status code differs from 200. Typically, requests with 4XX status codes (client-side errors) and 5XX status codes (server-side errors) are considered failures. However, in our specific case, we have not encountered any requests resulting in a 4XX status code. Instead, we have identified cases where a 302 status code is returned, which we have determined to be related to the authorization process. Our investigation also revealed that EvoMaster triggers the 302 status code response when it executes requests without authorization headers. This is also categorized as an unsuccessful call, i.e., a failure.
In this specific scenario (i.e., 302 status code outcomes), manually adjusting some filtering conditions to exclude these test cases can be effective. However, it is not straightforward in cases of 4XX and 5XX status codes due to request body parameters. For example, filtering such cases requires identifying invalid/malformed requests dependent on query, path, and body parameters/content. Therefore, the complexity of identifying all possible invalid combinations/patterns leads to employing ML in our approach.
We used the following two API endpoints of CaReSS that are relevant for GURI for input generation based on OpenAPI Specification (OAS), Swagger [6]. The request method of both endpoints is _POST: a). /api/messages/validation, b). /api/messages/aggregation_. EvoMaster automatically generates valid and/or invalid inputs for the selected endpoints based on the respective OAS schema. A valid input is defined based on the expected requirements of each endpoint, such as data types (e.g., string) and different constraints (e.g., date format). After successfully creating the request consisting of the endpoint, method, and body parameters, EvoMaster executes the request and returns a response (including the status code). We aim to retrieve _meaningful_ responses, which occur in successful requests (with 200 status code responses), as shown in Figure 2.
We slightly adapted EvoMaster to log each request/response after each execution to create a training dataset for the ML model. To obtain raw data, EvoMaster was running for a continuous duration of 10 hours in the initial stable version (v1) of GURI. During this runtime, EvoMaster is unaware of the difference in the environments (i.e., test, development, and production) in which GURI is deployed. This is important to ensure that we learn an ML model that can make predictions in any given environment.
This comprehensive log file is then refined and prepared in a suitable format for training the model. The dataset refinement
Fig. 1: Overview of _EvoClass_
Fig. 2: Response snippet of a successful request
and preparation involve three main steps: data extraction, transformation, and cleaning:
**Data extraction.** This step parses objects and strings to extract relevant information, e.g., the API URL and authorization status.
**Transformation.** This step organizes the extracted data structurally by converting nested objects into dictionaries.
**Cleaning.** This step removes irrelevant or redundant information. For instance, the raw data included detailed information that was only accessible after executing a request, such as fitness scores or covered targets (EvoMaster-related metrics). Since we aim to predict a request before executing, these metrics would not be beneficial as they are not present during the request generation. The final dataset contains 13,985 records (including status codes). Table I shows the distribution of the records in the dataset in terms of status codes for both the validation rule and aggregation rule endpoints.
### _Data Preprocessing_
We applied several preprocessing techniques to ensure the data's consistency and suitability for training the ML classifier, which are described below:
#### Iii-B1 Feature extraction
This step selects relevant information from the data and represents it in a way that captures important patterns or characteristics.
**Converting the target variable:** We convert the original target variable (i.e., status code), into a binary classification problem. We assign a value of "1" to successful requests (i.e., a 200 status code) and "0" to all other cases (i.e., 302 and 500 status codes). As a result, the target variable is in a binary format, thereby suitable for binary classification.
**Decomposing categorical attributes:** We decompose categorical variables into binary features. For example, in our case, we have a feature related to authentication (Auth). The "Auth" feature is decomposed into a new binary feature called "is_no_auth". This new feature takes the value "1" if the authentication is missing and "0" otherwise. As a result, we expose more information to the model in a format that is easier to process.
**Decomposing a date-time:** Date-time attributes are rich in structure and have relationships with other attributes. In this case, the date-time attribute, such as "cancer-Case.diagnosed", is decomposed into binary features using regular expressions. These binary features indicate whether the date has a valid format or not. A value of "1" is assigned if the format is valid, and "0" otherwise. This decomposition simplifies the attribute and allows the model to focus on the validity aspect rather than the specific date-time values.
#### Iii-B2 Feature construction
We created "cancerMesagesNr" and "cancerTypesNr" as new features to quantify the number of cancer messages and their types. By counting the occurrences of cancer messages and their types, these new features provide additional information about the presence or frequency of cancer-related data to assist the model in recognizing patterns between cancer types and messages that can lead to faulty requests.
#### Iii-B3 Feature selection
To determine the relevance of each feature in our dataset, we utilized impurity-based feature importance [7] from scikit-learn [8]. We wanted to evaluate the importance of the newly created features. Thus, we computed and examined the feature importance scores during model training. The feature selection process is performed iteratively within the training phase, meaning that an irrelevant feature will be dropped, and model training will continue with the updated feature set. Interestingly, we found that the "Method Type" feature had an importance score of 0, indicating that it is irrelevant to our task. Therefore, the "Method Type" feature was excluded from the dataset. This step allowed us to streamline our feature set and focus on the more informative features of our classifier model.
#### Iii-B4 Feature/Data Encoding
In this step, we applied label encoding from scikit-learn [8] to transform certain _qualitative_ input variables (_e.g., user, cancer type, and environment)_ into numerical representations. This ensures that our model can effectively process and interpret the data.
We use label encoding instead of other techniques, e.g., one-hot encoding, due to the specific characteristic of our dataset, i.e., the predominance of qualitative variables. Another reason for using label encoding is the feasibility of model evaluation since we can avoid the issue of missing features during model evaluation. For instance, variable _"Cancer Type"_ represents different types of cancers, such as _"Breast Cancer," "Lung Cancer," "Prostate Cancer," and "Colon Cancer"_. By using label encoding, we assign numerical labels to each category (e.g., 0 for Breast Cancer, 1 for Lung Cancer, 2 for Prostate Cancer, and 3 for Colon Cancer) instead of creating a new feature (in the case of one-hot encoding).
### _Training and Optimization_
This step trains and optimizes the ML model. The dataset, represented by the feature matrix X and target variable Y, was split into training and testing sets of 80% and 20%, respectively. We employed the Random Forest Classifier as our model, selected based on a pilot experiment (results shown in Figure 4 and Table IV), and used the scikit-learn library to fit the model to the training data.
To optimize the hyperparameters, we used the Optuna framework. We defined a set of parameters to search over, including the number of estimators, maximum depth, minimum samples split, minimum samples leaf, and
maximum features. Optuna systematically searched these parameter combinations to find the configuration that achieved the highest accuracy. The optimal parameters determined by Optuna were as follows: n_estimators=100, max_features=None, min_samples_split=2, min_samples_leaf=10, and max_depth=10. These optimized parameters were used to train the model.
### _Integration into EvoMaster_
The trained ML was serialized and stored using the pickle library [9] followed by integrating it into EvoMaster with the following method. The method communicates with the ML model by sending data requests (e.g., the generated data body) and awaiting the model's prediction. The corresponding request would be executed if the prediction indicated a successful status code. Otherwise, the request is removed from further processing. An example of such requests (i.e., test cases) is shown in Figure 3. The method involves iterating over the actions of an individual request generated by EvoMaster. Each action is formatted into a JSON object and passed as input to a Python script through a subprocess. The subprocess is executed using the ProcessBuilder class, facilitating the interaction between the Java and Python scripts. With this integration, we used the trained model for real-time prediction and decision-making, ensuring the execution of only successful requests, as predicted by the model.
## IV Evaluation
We describe research questions, subject application, evaluation setup, execution, metrics, and discussion.
### _Research Questions_
* **RQ0:** Which ML model performs the best for the classification task of _EvoClass_? We aim to find the most suitable ML model to be integrated into _EvoClass_. We experimented with four commonly used classifiers that can be efficiently trained without requiring a large amount of data: Random Forest, Logistic Regression, KNeighborsClassifier, and GaussianNB.
* **RQ1:** How effectively does _EvoClass_ reduce the testing cost? We study whether the ML model effectively filters out possibly failing requests. Since we have ten GURI versions deployed on three different environments, we also check whether _EvoClass_ can obtain consistent performance across the versions and environments.
* **RQ2:** How much rule coverage is achieved by _EvoClass_ compared to the baseline? We aim to know whether _EvoClass_ can achieve comparable rule coverage as the default EvoMaster.
### _Subject Application_
GURI is the subject application provided by our collaborator CRN (Section II-A). We selected 10 GURI versions running in the development, test, and production environments. GURI has a total of 32 REST APIs corresponding to different functionalities. Only two REST APIs are related to validation and aggregation rules, which we used. This setting is in line with our previous work [10]. Table II shows each GURI version's time stamp and the number of validation and aggregation rules. The first version _v1_ contains 30 validation and 32 aggregation rules. After the evolution of the rule set over four years, the recent version _v10_ has, in total, 70 validation rules and 43 aggregation rules.
The rule evolution occurs in both versions and environments. It includes creating and refining rules in the development environment, testing and modifying them in the test environment, and deploying the validated rules in the production environment. While most modifications are partial (e.g., additional constraints), there are cases where rules are fully deleted or newly introduced. These iterative processes ensure continuous improvement and accuracy of the rule set in GURI. Table III illustrates that rules commonly undergo various change types (i.e., deletion, modification, and insertion). For rule R03, which applies to validate cancer messages related to breast cancer, a new condition such as Ekstalokalisasjon!= '7777' is introduced to the test environment. However, for the same rule, in the production environment, this constraint is removed, implying that any
Fig. 3: A test case generated by EvoMaster leading to a 500 status code
value for variable _Ekstalokalisasjon_ received from cancer messages is acceptable (i.e., being validated to be true). Similarly, for the other example (R40), which applies to all cancer types, we can observe modifications across the environments.
### _Evaluation Setup, Execution, and Metrics_
**Setup.** We trained the ML model using a dataset collected by running EvoMaster on the first version of GURI for ten hours to collect training data (Section III-A). We use 80% data for training and 20% for validation/testing according to the commonly adapted 80-20 split. We select the Random Forest classifier for training the ML model based on the results of a pilot experiment (see details in Section IV-D1).
We compare our approach to EvoMaster in black-box mode [11] as the baseline, as Laaber et al. [10] showed that, in the context of the CRN and GURI, EvoMaster in black-box mode performs on-par in terms of coverage and fault detection and is superior regarding rule coverage when compared to EvoMaster in white-box mode. We configure GURI's ten versions in three different environments, i.e., development, test, and production environment. We repeat each configuration of our experiment 30 times which is recommended for experiments with inherent randomness [12]. We specified one hour time bound for each repetition, which is a common practice [13]. The overall computation time required for our experiment is 2 (approaches) * 3 (environments) * 10 (versions) * 30 (repetitions) * 1 (hour) = 1800 hours (75 days) if run sequentially.
**Execution.** We executed experiments on a high-performance computing cluster named Experimental Infrastructure for Exploration of Exascale Computing1 (eX3) provided by Simula Research Laboratory. Our experiment utilized eight nodes of the eX3 cluster running on the Ubuntu operating system. All nodes have 2 GB of RAM, 4 TB GB local NVMe scratch storage, and four different types of processors, including 32-core AMD EPYCTM 7601, 64-core AMD EPYCTM 7763, 24-core AMD EPYCTM 7413, and 24-core IntelQ Xeon(r) Platinum 8168. The eX3 cluster uses Slurm2 for resources management.
Footnote 1: [https://www.ex3.simula.no/](https://www.ex3.simula.no/)
Footnote 2: [https://slurm.schedmd.com/](https://slurm.schedmd.com/)
**Metrics and Statistical Tests.** For RQ0, in addition to using Receiver Operating Characteristics (ROC) curve and Area Under Curve (AUC), we measure accuracy, precision, recall, and F1-score, which are commonly used metrics for evaluating ML model performance. To analyze results for RQ1, we also calculate accuracy, precision, recall, and F1-score. In addition, we introduce the metric of cost reduction, which is with the formula below:
\[CostReduction=((TotalRequests-ExecutedRequests)\] \[/TotalRequests)*100\]
For RQ2, we measure the total rule hits (_TotalHits_), applied or not applied rules (_Applied_ or _NotApplied_), and their percentage rule coverages. A rule hit refers to a rule execution, which can be either applied or not applied. An applied rule refers to a fully executed (at least one time) rule. A not-applied rule relates to the partial execution of a rule which means a particular input (e.g., diagnose date), related to cancer messages, cannot be validated. A typical rule message, in this case, would be _"This rule is not used because of diagnose date"_. This is a common case with validation rules (Figure 2). The applied and not applied rule coverages are calculated as:
\[Coverage(Applied)=(Applied/TotalHits)*100\]
\[Coverage(NotApplied)=(NotApplied/TotalHits)*100\]
The rule coverage calculations reflect our perspective on the importance of rule execution. Specifically, we intend to emphasize the effectiveness of the generated test cases, where both applied and not applied rules are considered. The coverage metrics capture the proportion of applied (or not applied) rules relative to all rule hits (i.e., executed rules), but they do not account for never executed rules.
To reduce the effect of the randomness of the ML model of our approach and EvoMaster on the results, we repeated each experiment 30 times and performed statistical testing to check the significance of each difference between our approach and the default EvoMaster. We used the Mann-Whitney test as recommended in [12]. In addition, we relied on Vargha-Delaney's \(\hat{A}_{12}\) to estimate the effect size (values of which range from 0 to 1) of the difference between the two approaches. A higher \(\hat{A}_{12}\) value (\(>0.5\)) indicates our approach has a higher chance of yielding better results than the default EvoMaster, and vice versa.
### _Results and Discussion_
In the following section, we present the results and analyses of our evaluation corresponding to each RQ.
#### Iv-D1 RQ0 Results
Figure 4 shows the performance results of the four classifiers as ROC and AUC. As shown in the figure, Random Forest performs better than Logistic Regression, K-Nearest Neighbors (KNN), and Gaussian Naive Bayes, as Random Forest achieves the highest AUC value, i.e., 97.86%. Table IV summarizes each classifier's performance in terms of accuracy, precision, recall, and F1-score. Table IV shows that Random Forest achieved the highest accuracy, i.e., 95.40%, which is at least 10% higher than all the other three classifiers. Similarly, Random Forest outperforms the other three classifiers regarding precision, recall, and F1-score.
Notably, KNN demonstrates the weakest performance, which is due to the curse of dimensionality affecting its performance in high-dimensional spaces [14]. Based on these results, we opted for Random Forest and integrated it into _EvoClass_ for conducting other experiments for answering RQ1 and RQ2.
**RQ0 Summary:** Random Forest demonstrated superior performance across all metrics as compared to the other three classifiers. Notably, it achieved the highest AUC (97.86%) in the ROC analysis.
#### Iv-D2 RQ1 Results
Table V summarizes the results for each rule version under each environment. These results include the total number of generated requests (column _Total Req._), the total number of actually executed requests that are also predicted by our ML model to be successfully executed (Column _Pred. Success_), the number of non-executed requests that have been filtered out by approach (column _Pred. Failure_), and the number of executed requests that resulted in failures but predicted being successful (i.e., false positives, see column _Pred. Success (F)_). In addition, we report results of accuracy, precision, recall, F1-score, and cost reduction.
It can be observed that the overall cost reduction for all three environments and for all 10 versions is \(\approx\)31%. This result implies that _EvoClass_ performs consistently well across the rule versions and across the different environments. In addition, a 31% cost reduction is significant. For instance, for v1, _EvoClass_ managed to save cost by avoiding the execution of 14807 requests. The accuracy of _EvoClass_ (\(\approx\)91%) is stable across the environments and versions, implying that the performance of _EvoClass_ does not degrade with the evolving versions in each environment. The overall precision and F1-score values are close to 87% and 93%, respectively. The recall values are 100% in all cases, telling us that _EvoClass_ did not produce any false negatives, which is important to our context as a false negative implies not executing a request which might lead to the successful execution of a request. The result also indicates that _EvoClass_'s performance is stable despite rule changes across the rule versions. This is because their fundamental characteristics (e.g., data models and variables) remain relatively the same throughout the evolution of the rule set, often with only minor modifications.
**RQ1 Summary:**_EvoClass_ has shown consistent and significant cost reduction across different GURI versions and environments. The average cost reduction achieved is approximately 31%.
#### Iv-D3 RQ2 Results
Table VI presents the results of the comparison between _EvoClass_ and the default EvoMaster (EM), in terms of the number of generated and executed requests, the total number of rule hits, the number of applied/not applied rules, and the coverage of applied/not applied rules. First, when looking at the total number of generated and executed requests, as already reported in RQ1, with _EvoClass_, fewer requests were executed for each version under each environment when compared with the default EvoMaster; consequently, the overall cost is reduced. In terms of rule hits (i.e., the number of rules invoked in each request), which are positively correlated with the number of executed requests (with Pearson's correlation coefficients being 0.15 for the default EvoMaster and 0.30 for _EvoClass_, respectively), obviously, the default EvoMaster achieved high numbers of rule hits for all the versions and under all the environments as it generated and executed more requests. Considering that the number of rules increases from v1 to v10 (Table II), as
Fig. 4: Performance comparison of the classifiers in ROC and AUC scores
expected, the number of rule hits also increases. For instance, rule hits increase from 533998 (v1) to 742003 (v10) under the development environment.
Based on the execution results of the requests, we count the number of rules that are applied at least once and also fully executed (i.e., _Rule Applied_), the number of validation rules that are partially executed as the condition part of such a rule is checked to be false (i.e., _Rule NotApplied_). Notice that aggregation rules do not involve the _Rule NotApplied_ case. As shown in Table VI, due to more requests leading to more rule hits, the default EvoMaster achieved higher numbers of Rule Applied and Rule NotApplied instances for all the versions and across all the environments. However, when looking at the coverages of the Rule Applied and Rule NotApplied instances, _EvoClass_ performed very similarly to the default EvoMaster. This shows that our approach with a reduced number of requests can achieve the same level of coverage as EvoMaster. Interestingly, one can also observe that when evolving from v1 to v10, the coverage of Rule Applied decreases, and the coverage of Rule NotApplied increases. This is because the number and complexity of rules, especially validation rules, increased during the evolution from v1 to v10, as shown in Table II.
We also performed the Wilcoxon signed-rank test to check whether there exists a statistically significant difference between the default EvoMaster and _EvoClass_ in terms of the rule coverage. Results showed that the p-values are greater than 0.05, and \(\hat{A}_{12}\) are around 0.5 for all cases. This indicates that there is insufficient evidence to conclude that there is a significant difference between the two approaches in terms of rule coverage.
**RQ2 Summary:** In terms of rule hits, the default EvoMaster outperformed _EvoClass_ due to generating and executing a higher number of requests. However, results show that _EvoClass_ can achieve the same rule coverage as the default EvoMaster with fewer executions.
### _Threats to Validity_
Following, we discuss threats to the validity commonly reported in software engineering experiments [15]. To reduce threats to the _external validity_, we used a real-world software application with ten versions that naturally evolved over four years of the operation of GURI, and deployed them under three environments. However, similar to many empirical software engineering studies, our results may not be generalizable to other application contexts, a common threat to the external validity [16]. To minimize threats to the _internal validity_, we set up our experiment by following standard practices and recommended guidelines. Initially, we performed a pilot experiment to select a suitable ML classifier. We used a popular framework Optuna for hyperparameter tuning [17]. We used the default/recommended parameters settings of EvoMaster [1]. For the experiment setting, we set the number of repetitions to 30 and a one-hour fixed time budget for each run [13]. To handle threats to the _construct validity_, we repeated our experiment 30 times to lower the effect of randomness. We analyzed experiment results using commonly used metrics (e.g., accuracy, precision). We compared _EvoClass_ with the default EvoMaster with the same set of metrics. In addition, we used the Mann-Whitney test and Vargha-Delaney's \(\hat{A}_{12}\) effect size when comparing the two approaches, by following recommended guidelines [12], which reduces threats to the _conclusion validity_.
## V Experiences and Lessons Learned
**Generalizability:** Even though we extended EvoMaster with an ML classifier, such a classifier can be integrated into other REST API-based tools (e.g., RESTest [18], RESTler [19], and RestTestGen [20]). As a result, testing cost reduction can be achieved together with testing strategies implemented by these tools, such as Adaptive Random Testing and Constraint-based Testing. Moreover, in our context, Random Forest showed the best results. Other classifiers may be better in other contexts, which can be integrated into EvoMaster and other classifiers in the future.
For now, we experimented with one sub-system of CaReSS, i.e., GURI. Our experiment results show that we save around 30% of the testing cost by simply introducing an ML classifier. This result is very encouraging. Therefore, as the next step, we will perform additional experiments with other sub-systems of CaReSS and also CaReSS as a whole. Naturally, the implementation of _EvoClass_ can be reused for extended experiments. Based on the encouraging results of the current experiments, we expect that at least a similar testing effort can be saved. Furthermore, a large-scale empirical study will be needed to see whether Random Forrest performs best when testing other sub-systems. Additionally, this empirical study could provide valuable insights into the impact of various factors such as dataset size, preprocessing steps, and hyperparameter tuning that would most likely change alongside the selected classifier.
_EvoClass_ also holds the potential for broader applicability, extending beyond its current application domain; while our experiments were centered on GURI within the CaReSS system, the method's core principles can be utilized in testing REST APIs in other domains, e.g., healthcare IoT [21]. The applicability of our method requires configuring application-specific details, such as OAS schema and data pre-processing.
**Test case and dataset quality:** Many studies emphasize the importance of collecting high-quality and diverse datasets to train machine learning models effectively [22, 23, 24]. A comprehensive dataset in terms of API requests and responses from various scenarios (i.e., test cases) ensures better performance of the ML model. However, this becomes a challenge when we look at existing testing tools' limitations (e.g., generation of pseudo-random inputs and inter-parameter dependencies) [11, 25]. These tools lack the capability to generate domain-specific data, e.g., medical data, which limits the availability of diverse and representative test cases. In our work, we address this challenge by leveraging synthetic data, which, though
not reflecting real-world scenarios, provides an alternative for training ML models when domain-specific data is unavailable. In our case, it was impossible to get real data due to the legislation of the General Data Protection Regulation (GDPR) from the European Union; therefore, we had to generate data for ML training ourselves. As a result, how to train ML models when real datasets contain personal data and are restricted by GDPR is an interesting area of research.
**Balancing testing cost and effectiveness:** ML-based approaches can provide significant savings in testing costs, but it is as much of important to strike a balance between cost reduction and maintaining the effectiveness of the testing process. In our case, the effectiveness is measured by the rule coverage, and the results show that we are maintaining similar effectiveness as default EvoMaster while reducing costs. In this regard, finding and optimizing the trade-off between false positives and false negatives are valuable lessons learned in this study. False positives occur when the model incorrectly predicts a successful API request as positive, while false negatives happen when the model wrongly predicts an unsuccessful request as negative. These occurrences affect precision and recall scores, as shown in Table V.
**Maintainability:** There are several dimensions of maintainability. First, _EvoClass_ can be adapted and integrated into other existing REST API-based testing tools. However, it is important to acknowledge that these tools are prone to continuous updates and changes which would require careful consideration in terms of compatibility and usability. Second, for the subject application, specifically, REST APIs under test, if changes in the respective OAS schema are of major importance, they need to be addressed accordingly in the approach, following each phase as shown in Figure 1. Finally, after a certain period of time, the trained ML model would need an update. To this end, approaches such as transfer learning can be considered to update the ML model [26].
## VI Related Work
With the widespread deployment and usage of web APIs based applications, testing them to ensure their quality thoroughly is crucial. As a result, numerous automated techniques and tools have emerged in recent years for testing REST APIs such as EvoMaster [1, 4, 27], RESTTest [18], RESTler [19], RestTestGen [20, 28], bOXRT [29], Schemathesis [30], Dredd3, Teases4, and APIFurzzer5. Several empirical studies have also been performed, providing deeper insights (e.g., coverage, performance, and fault detection) into the strengths and limitations of the aforementioned automated testing tools [13],
[30, 31]. Despite of these insights, _EvoClass_, even though presented as an extension of EvoMaster, can seamlessly be integrated with any other REST API-based tool.
Automated testing techniques for RESTful APIs primarily rely on black-box testing, generating test inputs randomly from the API specification. However, these approaches often generate considerable "invalid" inputs, leading to unsuccessful HTTP calls. Consequently, these randomly generated inputs fail to simulate real-world scenarios accurately. Additionally, the existing literature overlooks the testing of REST API functionalities beyond input validity, neglecting the evaluation of meaningful responses, i.e., successful requests [32, 33, 34]. Hence, there is a clear need to enhance these testing techniques to overcome these limitations and ensure more comprehensive and realistic testing of RESTful APIs. Notably, our proposed approach (_EvoClass_) focuses on reducing the costs associated with unsuccessful HTTP calls, further optimizing the testing process.
One of the most relevant works is the study conducted by Mirabella et al. [35]. While their work shares a common goal of leveraging ML techniques for API testing, there are notable differences between their approach and ours. Mirabella et al. focused on predicting the validity of test inputs by employing a deep learning-based approach to predict whether test inputs satisfy all input constraints. In contrast, we focus on predicting the success or failure of API requests generated by testing tools, considering test inputs as a whole, and encompassing the entire request-response cycle. By predicting the status codes associated with API responses, our approach reduces the number of requests to be executed while maintaining the same effectiveness of the API functionality under test.
## VII Conclusion
This work focused on devising a solution for testing a real-world evolving application, i.e., GURI from the Cancer Registry of Norway (CRN). We presented the EvoMaster extension _EvoClass_, which utilizes machine learning to reduce the cost of testing GURI. We evaluated the cost-effectiveness of _EvoClass_ using GURI's ten versions under three environments. The results show that _EvoClass_ can significantly reduce testing cost (i.e., \(\approx\)31%), meanwhile achieving rule coverage similar to the default EvoMaster. In the future, we plan to propose domain-specific test generation methods such as rule coverage to improve the effectiveness further. In addition, we plan to integrate our solution with testing tools other than EvoMaster. Finally, we want to test other software systems from CRN.
## Acknowledgment
This work is supported by the "AI-Powered Testing Infrastructure for Cancer Registry System" project (No. #309642) funded by the Research Council of Norway. The Norwegian Ministry of Education and Research supports Erblin Isaku's Ph.D. The experiment was conducted on the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by RCN under contract 270053.
|
2309.16332 | The effect of local ventilation on airborne viral transmission in indoor
spaces | We incorporate local ventilation effects into a spatially dependent
generalisation of the Wells--Riley model of airborne viral transmission.
Aerosol production and removal through ventilation (global and local),
biological deactivation, and gravitational settling as well as transport around
a recirculating air-conditioning flow and turbulent mixing are modelled using
an advection--diffusion--reaction equation. The local ventilation effects are
compared with the equivalent global ventilation and we find that the
streamlines of the airflow provide insight into when the global ventilation
model is a good approximation. When the agreement between ventilation models is
poor, we find that the global ventilation model generally overestimates the
infection risk. | Alexander Pretty, Ian M. Griffiths, Zechariah Lau, Katerina Kaouri | 2023-09-28T10:44:53Z | http://arxiv.org/abs/2309.16332v1 | # The effect of local ventilation on airborne viral transmission in indoor spaces
###### Abstract
We incorporate local ventilation effects into a spatially dependent generalisation of the Wells-Riley model of airborne viral transmission. Aerosol production and removal through ventilation (global and local), biological deactivation, and gravitational settling as well as transport around a recirculating air-conditioning flow and turbulent mixing are modelled using an advection-diffusion-reaction equation. The local ventilation effects are compared with the equivalent global ventilation and we find that the streamlines of the airflow provide insight into when the global ventilation model is a good approximation. When the agreement between ventilation models is poor, we find that the global ventilation model generally overestimates the infection risk.
xx; revised xx; accepted xx
## 1 Introduction
The importance of ventilation in reducing the indoor transmission of infectious diseases was first highlighted by Nightingale (1860). When an infectious person breathes, talks, coughs, or sneezes, disease-carrying particles are emitted. In poorly ventilated spaces, small particles known as _aerosols_ can remain airborne for several hours, transmitting the disease to susceptible people when inhaled. During the COVID-19 pandemic, a key shift in understanding and mitigating transmission of the SARS-CoV-2 virus was the recognition of airborne transmission (Morawska and Milton, 2020), most likely responsible for superspreader outbreaks in a restaurant (Ho, 2021), courtroom (Vernez et al., 2021), choir practice (Miller, 2021), and meat processing plant (Gunther, 2020).
There are two main approaches to modelling airborne transmission: Wells-Riley models and Computational Fluid Dynamics (CFD). Wells-Riley models (Riley et al., 1978) assume a well-mixed-room (WMR), meaning aerosols are instantaneously transported throughout the room. Due to its high computational speed, this approach can be readily applied at the start of an epidemic. This was the case for COVID-19 (Buonanno et al., 2020; Dai and Zhao
2020; Lelieveld 2020). However, the WMR assumption is not always appropriate and cannot provide any information on the spatial variation of the concentration.
CFD models simulate the (usually turbulent) airflow in a room. The computational demand is high, so many CFD studies at the start of the COVID-19 pandemic focused on relatively short time frames (less than 5 minutes) (Shafaghi _et al._, 2020; Vuorinen, 2020) whereas airborne transmission typically occurs over hours. Some CFD models have simulated aerosol evolution for up to an hour (Shao _et al._, 2021), but the high computational times means that CFD can not easily inform up-to-date decisions in a quickly developing epidemic.
Lau _et al._ (2022) model the spatiotemporal evolution of aerosols in a room using an advection-diffusion-reaction (ADR) equation under the assumption of a recirculating airflow. Unlike Wells-Riley models, the ADR model accounts for the spatial variation of concentration and infection risk. Moreover, the simplified 2D airflow allows fast simulations so the ADR model can be quickly deployed in a fast-changing epidemic. In Lau _et al._ (2022), aerosol removal by ventilation is modelled as a global sink, as in the Wells-Riley type models. This assumption produces good agreement with real-life scenarios ventilated by inbuilt air-conditioning (AC) units (Lau _et al._, 2022).
For rooms with poor or non-existent AC, air purifiers can increase overall aerosol removal. The effectiveness of air purifiers depends strongly on their location (Burgmann & Janoske, 2021; Narayanan & Yang, 2021), an effect that cannot be captured by the WMR assumption or the global sink of Lau _et al._ (2022). Moreover, many CFD studies of air purifiers report prohibitively long computational times; for example Dbouk _et al._ (2021) report 7 days to simulate a 2.5 minute event in a domestic setting. While air purifiers are not technically classified as ventilation, since they do not provide fresh air from outdoors, the American Society of Heating, Refrigeration and Air-Conditioning (ASHRAE) recently incorporated air cleaning devices in their measure of _equivalent clean airflow_ when risk of disease transmission is high (e.g. during an epidemic) (ASHRAE, 2023).
In this paper, we introduce a spatially local ventilation model to the methodology of Lau _et al._ (2022). While all ventilation systems (including doors, windows and AC units) have local effects, we are motivated by common air purifier designs and introduce a cylindrical device that draws air in through the top and expels clean air from the bottom. This device will be an addition to the existing inbuilt AC system.
The paper is organised as follows. The modelling framework, incorporating a local ventilation system, is presented in SS2. In SS3, we compare the average aerosol concentration in the room predicted by the local and global ventilation models. In SS4 we consider the infection risk to individuals at specific locations and compare the ventilation models. Conclusions and suggestions for future work are provided in SS5.
## 2 Modelling framework
### Advection-diffusion-reaction (ADR) equation
Consider a 3-dimensional (3D) room with dimensions \(L_{x}\), \(L_{y}\), \(L_{z}\), as depicted in figure 1(\(a\)). Following Lau _et al._ (2022), we assume a recirculating flow produced by a single AC vent along the top corner and introduce the arclength coordinate \(\xi\), which follows the recirculating loop. The distance between the recirculation layers is \(L_{z}/2\)(van Hooff _et al._, 2013) and the total arclength is \(2L_{x}\). A cylindrical local ventilation system with radius \(r\) is introduced, which extends over both recirculating layers: Air is drawn into an inlet in the upper layer and expelled from an outlet in the lower layer. We will refer to this device as a _purifier_.
Figure 1(\(b\)) shows the computational domain \((\xi,y)\). The left/right halves correspond to the upper/lower layers and the purifier inlet and outlet appear as circles removed from the
domain, with boundaries
\[\partial_{\text{in}}=\{(\xi,y):|(\xi,y)-(x_{p},y_{p})|=r\},\quad\partial_{\text{ out}}=\{(\xi,y):|(\xi,y)-(2L_{x}-x_{p},y_{p})|=r\},\] (1a,b)
where \((x_{p},y_{p})\) denotes the location of the purifier in the \((x,y)\)-plane, and we will assume throughout this work that the purifier is in the centre of the room. Although the purifier design is motivated by real-life devices (Dbouk _et al._, 2021), in this quasi-3D model the device extends the entire height of the room, which is significantly taller than real-life purifiers. However, this simplification allows for a direct comparison with the model of Lau _et al._ (2022) while still offering useful and practical insights.
Consider a single infectious individual standing at \(\mathbf{x}_{0}=(x_{0},y_{0})\) and talking continuously. Talking produces around 10 times as many aerosols as breathing (Asadi _et al._, 2019): a reasonable worst-case scenario for an asymptomatic person. We assume, as in Lau _et al._ (2022), that the concentration of aerosols, \(\mathcal{C}(\xi,y,t)\), is governed by the ADR equation
\[\frac{\partial\mathcal{C}}{\partial t}+\nabla\cdot(\mathbf{v}\mathcal{C})-\nabla \cdot(K\nabla\mathcal{C})=R\delta(\xi-x_{0})\delta(y-y_{0})-(\lambda+\beta+ \sigma)\mathcal{C}, \tag{2}\]
where \(\mathbf{v}=(u,v)\) is a vector field describing the airflow around the recirculating loop; \(K\) is the eddy diffusion coefficient; \(R\) is the (constant) aerosol production rate; and \(\lambda\), \(\beta\), \(\sigma\) are global removal rates due to ventilation, biological deactivation, and gravitational settling, respectively. Parameter values are given in table 1; more details are provided in Lau _et al._ (2022).
The recirculating loop is central to the modelling framework so the AC must be switched on. Following Lau _et al._ (2022), we model this inbuilt ventilation with the global removal term \(\lambda\) and set an air-exchange rate of 0.72 air changes per hour (ACH): the 'poor ventilation' scenario of Lau _et al._ (2022) (from classroom data, Guo _et al._, 2008). This reflects a broken or poorly maintained AC system that moves air around but is ineffective at removing aerosols.
Aerosol production begins at \(t=0\), so we set the initial condition \(\mathcal{C}(\xi,y,0)=0\). Periodic conditions across the left and right boundaries (\(\xi=0,2L_{x}\)) complete the recirculating loop,
\[\mathcal{C}(0,y,t)=\mathcal{C}(2L_{x},y,t),\quad\frac{\partial\mathcal{C}}{ \partial\xi}(0,y,t)=\frac{\partial\mathcal{C}}{\partial\xi}(2L_{x},y,t),\] (3a,b)
and there is no flux through the walls at \(y=0,L_{y}\),
\[\frac{\partial\mathcal{C}}{\partial y}(\xi,0,t)=\frac{\partial\mathcal{C}}{ \partial y}(\xi,L_{y},t)=0. \tag{4}\]
At the purifier inlet, aerosols are carried out of the domain by advection (no diffusive flux),
Figure 1: A 3D room with dimensions \(L_{x}\), \(L_{y}\), \(L_{z}\) is shown in (\(a\)). A cylindrical local ventilation system crosses the two recirculating layers and the arclength coordinate \(\xi\) follows the recirculating loop. The computational domain \((\xi,y)\) is shown in (\(b\)).
and no aerosols enter the domain at the outlet (no total flux),
\[\mathbf{\hat{n}}\cdot(K\nabla C)=0\ \ \text{on}\ \ \partial_{\text{in}},\quad\mathbf{\hat{n}} \cdot(\nu C-K\nabla C)=0\ \ \text{on}\ \ \partial_{\text{out}},\] ( \[2.5a\], \[b\] )
where \(\mathbf{\hat{n}}\) denotes the unit vector normal to each boundary, directed out of the domain.
It is unclear from the methodology of Foat _et al._ (2020) how to determine an eddy diffusion coefficient \(K\) for this scenario since the AC vent and purifier outlet have different surface areas. We assume that \(K\) is related to the total air-exchange rate as follows,
\[K=(\lambda+\lambda_{p})\sqrt[3]{\frac{V^{2}}{2}},\] ( \[2.6\] )
where \(\lambda_{p}\) is the air-exchange rate of the purifier (discussed below), so that the value of \(K\) is the same for equivalent global and local ventilation levels.
Following Lau _et al._ (2022), we assume that the majority of aerosols remain within the recirculating loop and are well-mixed over this height. Hence, the concentration in aerosols/m\({}^{3}\) is given by
\[C(x,y,t)=\frac{C(\xi=x,y,t)+C(\xi=2L_{x}-x,y,t)}{L_{z}/2}.\] ( \[2.7\] )
The risk of infection to a susceptible person at any \((x,y)\) is then calculated using
\[P(x,y,t)=1-\exp\left[-I\int_{0}^{t}\rho C(x,y,\tau)\,\mathrm{d}\tau\right],\] ( \[2.8\] )
where \(I\) is the infectivity constant of the virus and \(\rho\) is the breathing rate (see table 1).
### Airflow simulations
In Lau _et al._ (2022), aerosols are advected around the recirculating loop at constant speed \(u_{0}\) (table 1). Here, we assume that the recirculating loop remains coherent in the presence of a
\begin{table}
\begin{tabular}{c c c c} Parameter & Symbol & Value & Source \\ Room length & \(L_{x}\) & 8 m & Lau _et al._ (2022) \\ Room width & \(L_{y}\) & 8 m & Lau _et al._ (2022) \\ Room height & \(L_{z}\) & 3 m & Lau _et al._ (2022) \\ Room volume & \(V\) & 192 m\({}^{3}\) & \(L_{x}\times L_{y}\times L_{z}\) \\ AC airflow speed & \(u_{0}\) & 0.15 ms\({}^{-1}\) & (ASHRAE 2020) \\ Aerosol emission rate (talking) & \(R\) & 5 aerosols/s & (Lau _et al._ 2022) \\ Virus deactivation rate & \(\beta\) & \(1.7\times 10^{-4}\) s\({}^{-1}\) & (van Doremalen 2020) \\ Gravitational settling rate & \(\sigma\) & \(1.1\times 10^{-4}\) s\({}^{-1}\) & (De Oliveira _et al._ 2021) \\ Air-exchange rate & \(\lambda\) & 0.72 ACH: \(2\times 10^{-4}\) s\({}^{-1}\) & (Guo _et al._ 2008) \\ & & 1.4 ACH: \(4.0\times 10^{-4}\) s\({}^{-1}\) & see table 2 \\ & & 6 ACH: \(1.7\times 10^{-3}\) s\({}^{-1}\) & see table 2 \\ Eddy diffusion coefficient & \(K\) & 0.72 ACH: \(5.3\times 10^{-3}\) m\({}^{2}\)s\({}^{-1}\) & (Fou _et al._ 2020), (2.6) \\ & & 1.4 ACH: \(1.0\times 10^{-2}\) m\({}^{2}\)s\({}^{-1}\) & (Fou _et al._ 2020), (2.6) \\ & & 6 ACH: \(4.5\times 10^{-2}\) m\({}^{2}\)s\({}^{-1}\) & (Fou _et al._ 2020), (2.6) \\ Breathing rate & \(\rho\) & \(1.3\times 10^{-4}\) m\({}^{3}\)s\({}^{-1}\) & (Hallett _et al._ 2020) \\ Infectivity constant & \(I\) & 0.0069 & (Lau _et al._ 2022) \\ Location of purifier centre & \((x_{p},y_{p})\) & (4,4) m & Room centre: \((L_{x}/2,L_{y}/2)\) \\ Radius of purifier & \(r\) & 0.1 m & \\ \end{tabular}
\end{table}
Table 1: Parameters and their values.
purifier, leading to a modified \(\mathbf{v}=(u,v)\) such that air enters and leaves the purifier with a specified constant speed \(v_{p}\),
\[\mathbf{v}\cdot\mathbf{\hat{n}}=v_{p}\ \ \text{on}\ \ \partial_{\text{in}},\ \ \ \mathbf{v}\cdot\mathbf{\hat{n}}=-v_{p}\ \ \text{on}\ \ \partial_{\text{out}}.\] (2.9a,b)
We also require that \(\mathbf{v}\) satisfies the periodic boundary condition
\[\mathbf{v}(0,y)=\mathbf{v}(2L_{x},y). \tag{2.10}\]
A vector field that satisfies (2.9) and (2.10) is determined using the Shear Stress Transport (SST) turbulent flow solver (Menter, 1994) in COMSOL (a laminar flow solver is not suitable since \(Re>10\ 000\)). The resulting 2D flow in the \((\xi,y)\)-plane does not account for the inherently 3D structure of turbulent flow. However, the spreading of aerosols by small-scale turbulent eddies is accounted for by the eddy diffusion coefficient \(K\).
Imposing no-slip and no-penetration conditions at the walls,
\[\mathbf{v}(\xi,0)=\mathbf{v}(\xi,L_{y})=(0,0), \tag{2.11}\]
we run the SST solver until a steady-state is reached. This steady velocity, \(\mathbf{v}\), is then used in the ADR equation (2.2). The results are compared against those of Lau _et al._ (2022) for: (i) no purifier, (ii) a switched off purifier (\(v_{p}=0\)). There is good agreement provided
\[\max_{y}u(\xi=0,y)=u_{0}, \tag{2.12}\]
which is imposed by setting a suitable pressure gradient over the periodic boundaries. Several turbulent models were compared and all resulted in a similar concentration distribution \(\mathcal{C}\).
Two purifier settings are considered based on the clean air delivery rate (CADR) of purifiers used in experimental and computational studies. We define a _weak purifier_ with a CADR of 140 m\({}^{3}\)h\({}^{-1}\), representative of devices for small spaces such as domestic rooms (Dbouk _et al._, 2021) and individual offices; and a _strong purifier_ with a CADR of 1000 m\({}^{3}\)h\({}^{-1}\), representative of devices for larger spaces such as classrooms (Kahler _et al._, 2020) and open-plan offices.
Let \(Q\) denote the flow-rate through the device in m\({}^{3}\)s\({}^{-1}\). The CADR (stated in m\({}^{3}\)h\({}^{-1}\) by convention) is given by \(\eta Q\) where \(\eta\) is the filter efficacy. We assume that 100% of the aerosols that enter the purifier are trapped by the filter, so \(\eta=1\) and the CADR and flow-rate \(Q\) are thus equivalent. For a cylindrical purifier (circumference \(2\pi r\)) with an inlet half the height of the room (\(L_{z}/2\)),
\[Q=\pi rL_{z}v_{p}. \tag{2.13}\]
We hence compute the velocity \(v_{p}=Q/(\pi rL_{z})\) for a given \(Q\) (see table 2).
Each purifier is compared with an equivalent increase in the global removal term \(\lambda\). The air-exchange rate associated with each purifier is given by \(\lambda_{p}=Q/V\), wh
\begin{table}
\begin{tabular}{c c c c c} Parameter & Symbol & Weak purifier & Strong purifier & Source \\ Flow-rate (CADR) & \(Q\) & \(0.039\) m\({}^{3}\)s\({}^{-1}\) (140 m\({}^{3}\)h\({}^{-1}\)) & \(0.28\) m\({}^{3}\)s\({}^{-1}\) (1000 m\({}^{3}\)h\({}^{-1}\)) & * \\ Air velocity into purifier & \(v_{p}\) & \(0.04\) ms\({}^{-1}\) & \(0.3\) ms\({}^{-1}\) & (2.13) \\ Purifier air-exchange rate & \(\lambda_{p}\) & \(2.0\times 10^{-4}\) s\({}^{-1}\) & \(1.5\times 10^{-3}\) s\({}^{-1}\) & \(Q/V\) \\ Total air-exchange rate & \(\lambda_{\text{tot}}\) & \(4.0\times 10^{-4}\) s\({}^{-1}\) & \(1.7\times 10^{-3}\) s\({}^{-1}\) & \(\lambda+\lambda_{p}\) \\ Equivalent global ACH & & \(1.4\) ACH & \(6\) ACH & \(3600\lambda_{\text{tot}}\) \\ \end{tabular}
\end{table}
Table 2: Parameters for the two purifier settings with \(\lambda=2\times 10^{-4}\) s\({}^{-1}\) (0.72 ACH).
* Weak purifier: Dbouk _et al._ (2021), Strong purifier: Kähler _et al._ (2020).
removal rate of the AC unit (0.72 ACH) to determine a total air-exchange rate, \(\lambda_{\rm tot}=\lambda+\lambda_{p}\) (see table 2). The weak purifier doubles the total air-exchange to 1.4 ACH, less than half the recommended ventilation for classrooms (3 ACH for 30 occupants: Lau _et al._ 2022; ASHRAE 2022). The total air-exchange rate for the strong purifier is 6 ACH, exceeding this recommendation and also sufficient to meet the guidelines for times of heightened infection risk provided the number of occupants is halved (ASHRAE 2023). Hereon in, we will refer to the global ventilation model by the ACH and local ventilation model by the purifier strength.
The streamlines of the airflow \(\mathbf{v}\) are shown in figure 2 for both purifiers. The flow is broadly unidirectional for the weak purifier (figure 2_a_). Regions in which the streamlines are directed into or out of the purifier are shaded and have a greater area for the strong purifier (figure 2_b_). In both cases, these regions meet at the periodic boundary.
### Computational speed
For each airflow simulation (no purifier, weak purifier, strong purifier) to reach a steady state, computation takes approximately 10 minutes. For the ADR equation (2.2), an event of 4 hours takes around 5 minutes to run, including calculation of the infection risk (2.8). At these computational speeds, advice and guidance can be quickly updated with new information during a fast-changing epidemic. Simulations were performed on a Lenovo IdeaPd Flex 5 laptop, with a 1.3 GHz 4-core Intel Core i7-1065G7 processor and 8 GB of RAM.
## 3 Average aerosol concentration
We define the average aerosol concentration in the room as
\[\bar{C}(t)=\frac{1}{V}\iint_{\Omega}\mathcal{C}(\xi,y,t)\,\mathrm{d}\xi\, \mathrm{d}y, \tag{3.1}\]
where \(\Omega\) denotes the computational domain depicted in figure 1(_b_). Taking appropriate integrals of (2.2) and applying the divergence theorem gives
\[\frac{\partial\bar{C}}{\partial t}=\frac{R}{V}-(\lambda+\beta+\sigma)\bar{C}- \frac{1}{V}\oint_{\partial_{\rm in}}v_{p}C\,\mathrm{d}l. \tag{3.2}\]
When \(v_{p}\neq 0\), the boundary integral (describing aerosol removal by the purifier) depends on the values of \(C\) on the purifier inlet boundary \(\partial_{\rm in}\) (2.1). Hence, \(\bar{C}\) depends on the location of the infectious source \(\mathbf{x}_{0}\) and the vector field \(\mathbf{v}\). When \(v_{p}=0\) there is no local ventilation (as in Lau _et al._ 2022), the boundary integral vanishes, and (3.2) is equivalent to the Wells-Riley
Figure 2: The airflow streamlines in the \((\xi,y)\)-plane are shown for (_a_) the weak purifier (\(v_{p}=0.04\)) and (_b_) the strong purifier (\(v_{p}=0.3\)). The shaded regions indicate streamlines that pass through the purifier inlet (left) and the purifier outlet (right).
model used in Miller (2021). In this case the solution to (3.2) is given by
\[\bar{C}(t)=C^{*}\left[1-\mathrm{e}^{-(\lambda+\beta+\sigma)t}\right],\ \ \ \mathrm{where}\ \ C^{*}=\frac{R}{(\lambda+\beta+\sigma)V}. \tag{3.3}\]
Hence, \(\bar{C}\) does not depend on \(\mathbf{x}_{0}\) or \(\mathbf{v}\) for the global ventilation cases, and \(\bar{C}\to C^{*}\) as \(t\to\infty\).
We express the location of the infectious source relative to the purifier location as
\[\mathbf{x}_{0}=(x_{0},y_{0})=(x_{p}-d\cos\theta,y_{p}+d\sin\theta), \tag{3.4}\]
where \(d\) is the distance from the purifier and \(\theta\) is the angle (in degrees) from the line \(y=L_{y}/2\). The problem is symmetric in this line so we consider only \(\theta\in[0,180]\). Figure 3(\(a\)) shows all choices of \((d,\theta)\) considered here.
To compare the long-time behaviour of the different ventilation models, \(\bar{C}\) is depicted in figure 3(\(b\)) after an event of 4 hours, ensuring \(\mathcal{C}\) reaches a steady-state in every case. For the global ventilation models, \(\bar{C}\) is depicted by horizontal lines, which agree with \(C^{*}\) (3.3). For most \((d,\theta)\) choices, there is good agreement between each purifier and the equivalent global ventilation. For the weak purifier, the only significant deviation is when \(\theta=0\) (for all values of \(d\)). For the strong purifier, the discrepancy is significant for \(d=1\) (for all values of \(\theta\)) and for small \(\theta\) when \(d=2,3\). Where the results differ the most, global ventilation predicts a greater \(\bar{C}\) than the equivalent local ventilation. When \(d=2,3\), the global ventilation model predicts a larger \(\bar{C}\) than the purifiers for some \(\theta\), but the discrepancy is relatively small.
Figure 3(\(b\)) shows that \(\bar{C}\) increases with distance from the purifier \(d\), but there is also dependence on \(\theta\) that is related to the streamlines of \(\mathbf{v}\). The regions where the streamlines enter each purifier are shaded in figure 3(\(a\)), and \(\bar{C}\) is notably lower when \(\mathbf{x}_{0}\) is within (or close to) these regions. For values of \(\mathbf{x}_{0}\) outside these regions, there is less variation in \(\bar{C}\) and closer agreement with the equivalent global ventilation.
Figure 3: The infectious source locations, \(\mathbf{x}_{0}\) (3.4), are depicted by filled circles in (\(a\)) and the regions where all streamlines are directed into the purifier inlet (figure 2) are shaded for each purifier. For these \(\mathbf{x}_{0}\), the average aerosol concentration, \(\bar{C}\) (3.1), after 4 hours is shown for the weak (\(+\)) and the strong (\(\times\)) purifiers in (\(b\)). The global ventilation cases are also shown in (\(b\)), depicted as horizontal lines (labelled with the ACH).
## 4 Infection risk to susceptible people nearby
Our spatially varying ADR model allows us to determine the infection risk to susceptible people at specific locations (8). We express the location of a susceptible person \(\mathbf{x}_{s}\) as
\[\mathbf{x}_{s}=(x_{s},y_{s})=(x_{p}-d_{s}\cos(\theta+\phi),y_{p}+d_{s}\sin(\theta+ \phi)), \tag{10}\]
where \(\phi\) is the angle between \(\mathbf{x}_{0}\) and \(\mathbf{x}_{s}\), and \(d_{s}\) is the distance from the purifier. We consider a susceptible person directly opposite the infectious person, \(\phi=180\), and a susceptible person left of the infectious person, \(\phi=90\) (depicted in figure 4). We restrict interest to \(d=1\), the case with the greatest discrepancy between the local and global ventilation models in figure 3. To reflect a scenario in which individuals are in close proximity, we also set \(d_{s}=1\). This is reflective of classrooms, restaurants, galleries and other social events, with 1 hour being a representative event duration. However, in reality, these purifiers are potentially too large and noisy for use in such a setting.
Figure 4 shows the infection risk to susceptible people opposite and left of the infectious person after 1 hour, with the scenario for each \(\theta\) depicted on the lower axes. Figure 4(\(a\)) is symmetric around \(\theta=180\) because the problem is symmetric in the line \(y=L_{y}/2\). Moreover, the infection risk to a susceptible person right of the infectious person (\(\phi=-90\)) can be deduce from figure 4(\(b\)) by reflection in this line (\(\theta\to 360-\theta\)).
After 1 hour, the infection risk is below 40% in all cases and the infection risk is lower with the purifiers than with the equivalent global ventilation. The weak purifier has had only a marginal effect when compared against the baseline example of 0.72 ACH, and shows close agreement with the corresponding global ventilation of 1.4 ACH. The infection risk is lower for the strong purifier than for the equivalent global ventilation of 6 ACH for all values of \(\theta\). The greatest discrepancy is when \(\theta=0\), with the strong purifier predicting half the infection risk of the 6 ACH global ventilation (figure 4\(a\),\(b\)).
The streamlines of \(\mathbf{v}\) again provide insight into these results. For the global ventilation models and the weak purifier, peaks occur when \(y_{0}=y_{s}\) due to the broadly unidirectional flow around the recirculating loop, with lower peaks when the aerosols travel further (e.g. \(\theta=180\) in figure 4\(a\)). For the strong purifier, the largest infection risk (\(\theta=90\) in figure 4\(b\)
Figure 4: The infection risk, \(P\) (8), after 1 hour to the susceptible person (\(a\)) opposite (\(\phi=180\)) and (\(b\)) left (\(\phi=90\)) of the infectious person (10). Global ventilation cases are labelled with the ACH (open symbols) and the equivalent local ventilation (purifier) is depicted by filled symbols of the same shape. Schematics in the lower axes depict the scenario for each \(\theta\).
corresponds to the susceptible person being directly downstream from the infectious person according to the significantly modified streamlines in this case (figure 2_b_).
Under the WMR assumption, the infection risk is calculated based on the average aerosol concentration \(\bar{C}\) (3.3). This approach predicts the following infection risks: 8.8% for 0.72 ACH, 7.5% for 1.4 ACH, and 3.6% for 6 ACH. This is a significant underestimate compared to the corresponding values in figure 4 since the close proximity to the infectious source results in a concentration significantly greater than the room average at all times.
## 5 Summary and conclusions
The recent ADR model for airborne virus transmission (Lau _et al._, 2022) was modified to incorporate a local ventilation system motivated by air purifiers. A weak purifier (CADR = 140 m\({}^{3}\)h\({}^{-1}\)) and a strong purifier (CADR = 1000 m\({}^{3}\)h\({}^{-1}\)) were compared against equivalent increases in the global ventilation (1.4 ACH and 6 ACH, respectively).
For each purifier, the average aerosol concentration after reaching a steady-state was compared against the equivalent global ventilation, with good agreement in most cases (figure 3). The infection risk to susceptible people near to the purifier after 1 hour was also considered: The weak purifier showed close agreement with the global ventilation model, whereas the strong purifier predicted a lower infection risk than the equivalent global ventilation (figure 4). The largest discrepancies between the local and global ventilation models were observed when the infectious person was located inside or near to the regions where the airflow streamlines are directed into the purifier inlet (figure 2).
When modelling airborne transmission, Wells-Riley models (e.g. Miller, 2021) offer great computational speed but are highly simplistic, whereas CFD models (particularly those that track individual particles, e.g. Dbouk _et al._, 2021) provide significant detail at great computational expense. The ADR model of Lau _et al._ (2022) offers a compromise, with greater detail than Wells-Riley models (spatial variation) at low computational cost. By adding further complexity to the problem, the present model is able to explore the effects of local ventilation over hours, the time-frame over which airborne transmission occurs, while retaining relatively small computational times.
In future, this computational speed could facilitate a more thorough investigation of the problem, exploring factors such as the purifier location, size, and strength. The model could be further developed by incorporating a spatially varying eddy diffusion coefficient or an unsteady airflow. Building on this work, the local ventilation effects of air-conditioning, windows, doors, and other purifier designs could be explored in a similar manner.
**Acknowledgements.** The authors wish to acknowledge the contributions of Dr. Aaron English, Dr. Raquel Gonzalez Farina and Dr. Attila Kovacs. We are grateful to Sian Grant for generating Figure 1(_a_).
**Funding.** A.P. and K.K. gratefully acknowledge funding from a Ser Cymru 'Tackling COVID-19' grant, awarded by the Welsh Government. I.M.G. is grateful to the Royal Society for funding through a University Research Fellowship.
**Declaration of interests.** The authors report no conflict of interest.
|
2309.09212 | RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for
Evaluating Robotics Computing System Performance | We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to
evaluate robotics computing performance across a diverse range of hardware
platforms using ROS 2 as its common baseline. The suite encompasses ROS 2
packages covering the full robotics pipeline and integrates two distinct
benchmarking approaches: black-box testing, which measures performance by
eliminating upper layers and replacing them with a test application, and
grey-box testing, an application-specific measure that observes internal system
states with minimal interference. Our benchmarking framework provides
ready-to-use tools and is easily adaptable for the assessment of custom ROS 2
computational graphs. Drawing from the knowledge of leading robot architects
and system architecture experts, RobotPerf establishes a standardized approach
to robotics benchmarking. As an open-source initiative, RobotPerf remains
committed to evolving with community input to advance the future of
hardware-accelerated robotics. | Víctor Mayoral-Vilches, Jason Jabbour, Yu-Shun Hsiao, Zishen Wan, Martiño Crespo-Álvarez, Matthew Stewart, Juan Manuel Reina-Muñoz, Prateek Nagras, Gaurav Vikhe, Mohammad Bakhshalipour, Martin Pinzger, Stefan Rass, Smruti Panigrahi, Giulio Corradi, Niladri Roy, Phillip B. Gibbons, Sabrina M. Neuman, Brian Plancher, Vijay Janapa Reddi | 2023-09-17T08:41:11Z | http://arxiv.org/abs/2309.09212v2 | # RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite
###### Abstract
We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to evaluate robotics computing performance across a diverse range of hardware platforms using ROS 2 as its common baseline. The suite encompasses ROS 2 packages covering the full robotics pipeline and integrates two distinct benchmarking approaches: black-box testing, which measures performance by eliminating upper layers and replacing them with a test application, and grey-box testing, an application-specific measure that observes internal system states with minimal interference. Our benchmarking framework provides ready-to-use tools and is easily adaptable for the assessment of custom ROS 2 computational graphs. Drawing from the knowledge of leading robot architects and system architecture experts, RobotPerf establishes a standardized approach to robotics benchmarking. As an open-source initiative, RobotPerf remains committed to evolving with community input to advance the future of hardware-accelerated robotics.
## I Introduction
In order for robotic systems to operate safely and effective in dynamic real-world environments, their computations must run at real-time rates while meeting power constraints. Towards this end, accelerating robotic kernels on heterogeneous hardware, such as GPUs and FPGAs, is emerging as a crucial tool for enabling such performance [1, 2, 3, 4, 5, 6, 7]. This is particularly important given the impending end of Moore's Law and the end of Dennard Scaling, which limits single CPU performance [8, 9].
While hardware-accelerated kernels offer immense potential, they necessitate a reliable and standardized infrastructure to be effectively integrated into robotic systems. As the industry leans more into adopting such standard software infrastructure, the Robot Operating System (ROS) [10] has emerged as a favored choice. Serving as an industry-grade middleware, it aids in building robust computational robotics graphs, reinforcing the idea that robotics is more than just individual algorithms. The growing dependency on ROS 2 [11], combined with the computational improvements offered by hardware acceleration, accentuates the community's demand for a standardized, industry-grade benchmark to evaluate varied hardware solutions. Recently, there has been a plethora of workshops and tutorials focusing on benchmarking robotics applications [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], and while benchmarks for specific robotics algorithms [23, 24] and certain end-to-end robotic applications, such as drones [25, 26, 27, 28], do exist, the nuances of analyzing general ROS 2 computational graphs on heterogeneous hardware is yet to be fully understood.
In this paper, we introduce _RobotPerf_, an open-source and community-driven benchmarking tool designed to assess the performance of robotic computing systems in a standardized, architecture-neutral, and reproducible way, accommodating the various combinations of hardware and software in different robotic platforms (see Figure 1). RobotPerf focuses on evaluating robotic workloads in the form of ROS 2 computational graphs on a wide array of hardware setups, encompassing a complete robotics pipeline and emphasizing
Fig. 1: A high level overview of RobotPerf. It targets industry-grade real-time systems with complex and extensible computation graphs using the Robot Operating System (ROS 2) as its common baseline. Emphasizing adaptability, portability, and a community-driven approach, RobotPerf aims to provide fair comparisons of ROS 2 computational graphs across CPUs, GPUs, FPGAs and other accelerators.
real-time critical metrics. The framework incorporates two distinct benchmarking methodologies that utilize various forms of instrumentation and ROS _nodes_ to capture critical metrics in robotic systems. These approaches are: black-box testing, which measures performance by eliminating upper layers and replacing them with a test application, and grey-box testing, an application-specific measure that observes internal system states with minimal interference. The framework is user-friendly, easily extendable for evaluating custom ROS 2 computational graphs, and collaborates with major hardware acceleration vendors for a standardized benchmarking approach. It aims to foster research and innovation as an open-source project. We validate the framework's capabilities by conducting benchmarks on diverse hardware platforms, including CPUs, GPUs, and FPGAs, thereby showcasing RobotPerf's utility in drawing valuable performance insights.
RobotPerf's source code and documentation are available at [https://github.com/robotperf/benchmarks](https://github.com/robotperf/benchmarks) and its methodologies are currently being used in industry to benchmark industry-strength, production-grade systems.
## II Background & Related Work
### _The Robot Operating System (ROS and ROS 2)_
ROS [10] is a widely-used middleware for robot development that serves as a _structured communications layer_ and offers a comprehensive suite of additional functionalities including: open-source packages and drivers for various tasks, sensors, and actuators, as well as a collection of tools that simplify development, deployment, and debugging processes. ROS enables the creation of computational graphs (see Figure 1) that connect software processes, known as nodes, through topics, facilitating the development of end-to-end robotic systems. Within this framework, nodes can publish to or subscribe from topics, enhancing the modularity of robotic systems.
ROS 2 builds upon ROS and addresses many of its key limitations. Constructed to be industry-grade, ROS 2 adheres to industry Data Distribution Service (DDS) and Real-Time Publish Subscribe (RTPS) standards [29]. Based on the Data Distribution Service (DDS) standard, it enables fine-grained, direct, inter- and intra-node communication, enhancing performance, reducing latency, and improving scalability. Importantly, these improvements are also designed to support hardware acceleration [30, 5]. Over 600 companies have adopted ROS 2 and its predecessor ROS in their production environments, underscoring its significance and widespread adoption in the industry [11].
ROS 2 also provides standardized APIs to connect user code through language-specific client libraries, _rclcpp_ and _rclpy_, which handle the scheduling and invocation of callbacks such as timers, subscriptions, and services. Without a ROS Master, ROS 2 creates a decentralized framework where nodes discover each other and manage their own parameters.
### _Robotics Benchmarks_
There has been much recent development of open-source robotics libraries and associated benchmarks demonstrating their performance as well as a plethora of workshops and tutorials focusing on benchmarking robotics applications [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. However, most of these robotics benchmarks focus on algorithm correctness (_functional_ testing) in the context of domain specific problems, as well as end-to-end latency on CPUs [31, 32, 33, 34, 37, 35, 38, 36, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. A few works also analyze some _non-functional_ metrics, such as CPU performance benchmarks, to explore bottleneck behaviors in selected workloads [23, 49, 24].
Recent work has also explored the implications of operating systems and task schedulers on ROS 2 computational graph performance through benchmarking [50, 51, 52, 53, 54] as well as by optimizing the scheduling and communication layers of ROS and ROS 2 themselves [55, 56, 57, 58, 59, 60, 61, 62]. These works often focused on a specific context or (set of) performance counter(s).
Finally, previous work has leveraged hardware acceleration for select ROS Nodes and adaptive computing to optimize the ROS computational graphs [63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79]. However, these works do not provide comprehensive frameworks to quickly analyze and evaluate new heterogeneous computational graphs except for two works that are limited to the context of UAVs [25, 28].
Research efforts most closely related to our work include ros2_tracing [80] and RobotCore [5]. ros2_tracing provided instrumentation that demonstrated integration with the low-overhead LTTng tracer into ROS 2, while RobotCore illuminates the advantages of using vendor-specific tracing to complement ros2_tracing to assess the performance of hardware-accelerated ROS 2 Nodes. Building on these
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & & & & & & \\ \hline OMPL. Benchmark [31] & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ MotionBenchMaker [32] & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ \\ OpenCollBench [33] & ✗ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ \\ BARN [34] & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ \\ DynaBRAN [35] & ✓ & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ \\ MAVBench [25] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ \\ Bench-MR [36] & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ RTRBench [23] & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline
**RobotPerf (ours)** & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparative evaluation of representative existing robotics benchmarks with RobotPerf across essential characteristics for robotic systems.
two specific foundational contributions, RobotPerf offers a comprehensive set of ROS 2 kernels spanning the robotics pipeline and evaluates them on diverse hardware.
Table I summarizes our unique contributions. It includes a selection of representative benchmarks from above and provides an evaluation of these benchmarks against RobotPerf, focusing on essential characteristics vital for robotic systems. We note that while our current approach focuses only on non-functional performance benchmarking tests, RobotPerf's architecture and methodology can be extended to also measure functional metrics.
## III RobotPerf: Principles & Methodology
RobotPerf is an open-source, industry-strength robotics benchmark for portability across heterogeneous hardware platforms. This section outlines the important design principles and describes the implementation methodology.
### _Non-Functional Performance Testing_
Currently, RobotPerf specializes in non-functional performance testing, evaluating the efficiency and operational characteristics of robotic systems. Non-functional performance testing measures those aspects not belonging to the system's functions, such as computational latency, memory consumption, and CPU usage. In contrast, traditional functional performance testing looks into the system's specific tasks and function, verifying its effectiveness in its primary goals, like the accuracy of the control algorithm in following a planned robot's path. While functional testing confirms a system performs its designated tasks correctly, non-functional testing ensures it operates efficiently and reliably.
### _ROS 2 Integration & Adaptability_
RobotPerf is designed specifically to evaluate ROS 2 computational graphs, rather than focusing on independent robotic algorithms. We emphasize benchmarking _ROS 2 workloads_ because the use of ROS 2 as middleware allows for the easy composition of complex robotic systems. This makes the benchmark versatile and well-suited for a wide range of robotic applications and enables industry, which is widely using ROS, to rapidly adopt RobotPerf.
### _Platform Independence & Portability_
RobotPerf allows for the evaluation of benchmarks on a variety of hardware platforms, including general-purpose CPUs and GPUs, reconfigurable FPGAs, and specialized accelerators (e.g., ray tracing accelerators [81]). Benchmarking robotic workloads on heterogeneous platforms is vital to evaluate their respective capabilities and limitations. This facilitates optimizations for efficiency, speed, and adaptability, as well as fine-tuning of resource allocations, ensuring robust and responsive operation across diverse contexts.
### _Flexible Methodology_
We offer grey-box and black-box testing methods to suit different needs. Black-box testing provides a quick-to-enable external perspective and measures performance by eliminating the layers above the layer-of-interest and replacing those with a specific test application. Grey-box testing provides more granularity and dives deeper into the internal workings of ROS 2, allowing users to generate more accurate measurements at the cost of increased engineering effort. As such, each method has its trade-offs, and providing both options enables users flexibility. We describe each method in more detail below and highlight takeaways in Table II.
#### Iii-B1 Grey-Box Testing
Grey-box testing enables precise probe placement within a robot's computational graph, generating a chronologically ordered log of critical events using a tracer that could be proprietary or open source, such as LTTng [82]. As this approach is fully integrated with standard ROS 2 layers and tools through ros2_tracing, it incurs a minimal average latency of only 3.3 \(\upmu\)s [80], making it well-suited for real-time systems. With this approach, optionally, RobotPerf offers specialized input and output nodes that are positioned outside the nodes of interest to avoid the need to instrument them. These nodes generate the message tracepoints upon publish and subscribe events which are processed to calculate end-to-end latency.
#### Iii-B2 Black-Box Testing
The black-box methodology utilizes a user-level node called the MonitorNode to evaluate the performance of a ROS 2 node. The MonitorNode subscribes to the target node, recording the timestamp when each message is received. By accessing the propagated ID, the MonitorNode determines the end-to-end latency by comparing its timestamp against the PlaybackNode's recorded timestamp for each message. While this approach does not need extra instrumentation, and is easier to implement, it offers a less detailed analysis and alters the computational graph by introducing new nodes and dataflow.
### _Opaque Performance Tests_
The requirement for packages to be instrumented directly within the source code poses a challenge to many benchmarking efforts. To overcome this hurdle, for most benchmarks, we refrain from altering the workloads of interest and, instead, utilize specialized input and output nodes positioned outside the primary nodes of concern. This setup allows for benchmarking without the need for direct instrumentation of
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} \hline \hline Criteria & **Grey-Box** & **Black-Box** \\ \hline Precision & Utilizes tracers from encode instrumentation. Low overhead. Driven by kernelspace. & Limited to ROS 2 message subscriptions. Restricted to ROS 2 message callbacks. Recorded by userspace processes. Limited to message subscriptions in current implementation. & Limited to ROS 2 message subscriptions. \\ \multirow{2}{*}{Performance} & \multirow{2}{*}{Multiple event types.} & \multirow{2}{*}{Restricted to ROS 2 messages callbacks. Recorded by userspace processes.} & Restricted to ROS 2 message callbacks. Recorded by userspace processes. Limited to message subscriptions in current implementation. \\ \multirow{2}{*}{Portability} & \multirow{2}{*}{Regures a valid tracer. Standard format (CTF).} & \multirow{2}{*}{Standard ROS 2 APIs. Custom JSON format.} & \multirow{2}{*}{Standard ROS 2 APIs. Custom JSON format.} \\ \multirow{2}{*}{Ease of use} & \multirow{2}{*}{Regures code modifications and data postprocessing.} & \multirow{2}{*}{Tests unmodified software with minor node additions.} \\ \multirow{2}{*}{Real-Robots} & \multirow{2}{*}{Does not modify the computational graph.} & \multirow{2}{*}{Modifies the computational graph adding extra dataflow.} \\ \hline \hline \end{tabular}
\end{table} TABLE II: Grey-box vs. black-box benchmarking trade-offs.
the target layer. We term this methodology "opaque tests," a concept that RobotPerf adheres to when possible.
### _Reproducibility & Consistency_
To ensure consistent and reproducible evaluations, RobotPerf adheres to specific common robotic dataformats. In particular, it uses ROS 2 rosbags, including our own available at [https://github.com/robotperf/rosbags](https://github.com/robotperf/rosbags), as well third-party bags (e.g., the r2b dataset [99]).
To ensure consistent data loading and finer control over message delivery rates, we drew inspiration from [100]. Our computational graphs incorporate _modified and improved_ DataLoaderNode and PlaybackNode implementations, which can be accessed at [https://github.com/robotperf/ros2_benchmark](https://github.com/robotperf/ros2_benchmark). These enhanced nodes offer improvements that report worst-case latency and enable the reporting of maximum latency, introduce the ability to profile power consumption and so forth.
### _Metrics_
We focus on three key metrics: latency, throughput and power consumption including energy efficiency. Latency measures the time between the start and the completion of a task. Throughput measures the total amount of work done in a given time for a task. Power measures the electrical energy per unit of time consumed while executing a given task. Measuring energy efficiency (or performance-per-Watt) captures the total amount of work (relative to either throughput or latency) that can be delivered for every watt of power consumed and is directly related to the runtime of battery powered robots [25].
### _Current Benchmarks and Categories_
RobotPerf beta[98] introduces benchmarks that cover the robotics pipeline from perception, to localization, to control, as well as dedicated benchmarks for manipulation. The full list of benchmarks in the beta release can be found in Table III. Aligned with our principles defined above, each benchmark is a self-contained ROS 2 package which describes all dependencies (generally other ROS packages). To facilitate reproducibility, all benchmarks are designed to be built and run using the common ROS 2 development flows (ament build tools, colon meta-build tools, etc.). Finally, so that the benchmarks can be easily consumed by other tools, a description of each benchmark, as well as its results, is defined in a machine-readable format. As such, accompanying the package.xml and CMakeLists.txt files required for all ROS packages, a YAML file named benchmark.yaml is in the root of each benchmark which describes the benchmark and includes accepted results.
### _Run Rules_
To ensure the reliability and reproducibility of the performance data, we adhere to a stringent set of run rules. First, tests are performed in a controlled environment to ensure that performance data is not compromised by fluctuating external parameters. As per best practices recommended by ros2_tracing[80], we record and report settings like clock frequency and core count. Second, we look forward to the possibility of RobotPerf being embraced by the community and have results undergo peer review, which can contribute to enhancing reproducibility and accuracy. Finally, we aim to avoid overfitting to specific hardware setups or software configurations by encompassing a broad spectrum of test scenarios.
## IV Evaluation
We conduct comprehensive benchmarking using RobotPerf to evaluate its capabilities on three key aspects vital for a robotics-focused computing benchmark. First, we validate
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Category** & **Benchmark Name** & **Description** \\ \hline \multirow{8}{*}{**Perception**} & a1\_perception\_2nodes & Graph with 2 components: rectify and resize[83, 84]. \\ & a2\_rectify & rectify component[83, 84]. \\ & a3\_stereo\_image\_proc & Computes disparity map from left and right images[85]. \\ & a4\_depth\_image\_proc & Computes point cloud from rectified depth and color images[86]. \\ & a5\_resize & resize component[83, 84]. \\ \hline \multirow{2}{*}{**Localization**} & b1\_visual\_slam & Visual SLAM component[87]. \\ & b2\_map\_localization & Map localization component[88]. \\ & b3\_apriltag\_detection & Apriltag detection component[89]. \\ \hline \multirow{4}{*}{**Control**} & c1\_rbot\_joint\_trajectory\_controller & Joint trajectory controller[90]. \\ & c2\_diffbot\_diff\_driver\_controller & Differential driver controller[91]. \\ & c3\_rbot\_forward\_command\_controller\_position & Position-based forward command controller[92]. \\ & c4\_rbot\_forward\_command\_controller\_velocity & Velocity-based forward command controller[92]. \\ & c5\_rbot\_forward\_command\_controller\_acceleration & Acceleration-based forward command controller[92]. \\ \hline \multirow{4}{*}{**Manipulation**} & d1\_xarm6\_planning\_and\_traj\_execution & Manipulator planning and trajectory execution[93]. \\ & d2\_collision\_checking\_fcl & Collision check: manipulator and box (FCL[94]). \\ \cline{1-1} & d3\_collision\_checking\_bullet & Collision check: manipulator and box (Bullet[95]). \\ \cline{1-1} & d4\_inverse\_kinematics\_kdl & Inverse kinematics (KDL plugin[96]). \\ \cline{1-1} & d5\_inverse\_kinematics\_lma & Inverse kinematics (LMA plugin[97]). \\ \cline{1-1} & d6\_direct\_kinematics & Direct kinematics for manipulator[93]. \\ \hline \hline \end{tabular}
\end{table} TABLE III: RobotPerf beta Benchmarks (see [98]).
the framework's capacity to provide comparative insights across divergent heterogeneous platforms from edge devices to server-class hardware. Second, we analyze the results to understand RobotPerf's ability to guide selection of the optimal hardware solution tailored to particular robotic workloads. Finally, we assess how effectively RobotPerf reveals the advantages conferred by hardware and software acceleration techniques relative to general-purpose alternatives. All of our results and source code can be found open-source at: [https://github.com/robotperf/benchmarks](https://github.com/robotperf/benchmarks).
### _Fair and Representative Assessment of Heterogeneity_
Assessing hardware heterogeneity in robotic applications is imperative in the ever-evolving field of robotics. Different robotic workloads demand varying computational resources and efficiency levels. Therefore, comprehensively evaluating performance across diverse hardware platforms is crucial.
We evaluated the RobotPerf benchmarks over a wide list of hardware platforms, including general-purpose CPUs on edge devices (e.g., Qualcomm RB5), server-class CPUs (e.g., Intel i7-8700), and specialized hardware accelerators (e.g., AMD Kria KR260). Figure 3 illustrates benchmark performance in robotics per category of workload (perception, localization, control, and manipulation) using radar plots, wherein the different hardware solutions are depicted together alongside different robotic workloads per category. Each hardware solution is presented with a different color, with smaller values and areas representing better performance in the respective category. Given our ability to benchmark 18 platforms (bottom of Figure 3), RobotPerf is capable of benchmarking heterogeneous hardware platforms and workloads, paving the way for community-driven co-design and optimization of hardware and software.
### _Quantitative Approach to Hardware Selection_
The rapid evolution and diversity of tasks in robotics means we need to have a meticulous and context-specific approach to computing hardware selection and optimization. A "one-size-fits-all" hardware strategy would be an easy default selection, but it fails to capitalize on the nuanced differences in workload demands across diverse facets like perception, localization, control, and manipulation, each exhibiting distinctive sensitivities to hardware capabilities. Therefore, a rigorous analysis, guided by tools like RobotPerf, becomes essential to pinpoint the most effective hardware configurations that align well with individual workload requirements.
The results in Figure 3 demonstrate the fallacy of a "one-size-fits-all" solution. For example, focusing in on the latency radar plot for control from Figure 3 (col 3, row 1), we see that the i7-12700H (I7H) outperforms the NVIDIA AGX Orin Dev. Kit (NO) on benchmarks C1, C3, C4, and C5, but is \(6.5\times\) slower on benchmark C2. As such, by analyzing data from the RobotPerf benchmarks, roboticists can better determine which hardware option best suits their needs given their specific workloads and performance requirements.
One general lesson learned while evaluating the data is that each workload is unique, making it hard to generalize across both benchmarks and categories. To that end, RobotPerf results help us understand how the use of various hardware solutions and dedicated domain-specific hardware accelerators significantly improves the performance.
### _Rigorous Assessment of Acceleration Benefits_
In the rapidly advancing field of computing hardware, the optimization of algorithm implementations is a crucial factor in determining the success and efficiency of robotic applications. The need for an analytical tool, like RobotPerf, that facilitates the comparison of various algorithmic implementations on uniform hardware setups becomes important.
Figure 2 is a simplified version of Figure 3, depicting AMD's Kria KR260 hardware solution in two forms: the usual hardware and a variant that leverages a domain-specific hardware accelerator (ROBOTCORE Perception, a soft-core running in the FPGA for accelerating perception robotic computations). The figure demonstrates that hardware acceleration can enable performance gains of as much as 11.5\(\times\) (from 173 ms down to 15 ms for benchmark a5). We stress that the results obtained here should be interpreted according to each end application and do not represent a generic recommendation on which hardware should be used. Other factors, including availability, the form factor, and community support, are relevant aspects to consider when selecting a hardware solution.
## V Conclusion and Future Work
RobotPerf represents an important step towards standardized benchmarking in robotics. With its comprehensive evaluation across the hardware/software stack and focus on industry-grade ROS 2 deployments, RobotPerf can pave the way for rigorous co-design of robotic hardware and algorithms. As RobotPerf matures with community involvement, we expect it to compare CPU, GPU and FPGA, exploring their power consumption and flexibility in augmenting real-world robotic computations. With a standardized robotics benchmark as a focal point, the field can make rapid progress in delivering real-time capable systems that will unlock the true potential of robotics in real-world applications.
Fig. 2: Benchmark comparison of perception latency (ms) on AMD’s Kria KR260 with and without the ROBOTCORE Perception accelerator. The benchmarks used are al, a2, and a5 as defined in Table III. We find that hardware acceleration can enable performance gains of as much as 11.5\(\times\).
## V Conclusion
Fig. 3: Benchmarking results on diverse hardware platforms across perception, localization, control, and manipulation workloads defined in RobotPerf beta Benchmarks. Radar plots illustrate the latency, throughput, and power consumption for each hardware solution and workload, with reported values representing the maximum across a series of runs. The labels of vertices represent the workloads defined in Table III. Each hardware platform and performance testing procedure is delineated by a separate color, with darker colors representing Black-box testing and lighter colors Grey-box testing. In the figure’s key, the hardware platforms are categorized into four specific types: general-purpose hardware, heterogeneous hardware, reconfigurable hardware, and accelerator hardware. Within each category, the platforms are ranked based on their Thermal Design Power (TDP), which indicates the maximum power they can draw under load. The throughput values for manipulation tasks and power values for localization tasks have not been incorporated into the beta version of RobotPerf. As RobotPerf continues to evolve, more results will be added in subsequent iterations. |
2309.06121 | Online Name-Based Navigation for Software Meta-languages | Software language design and implementation often involve specifications
written in various esoteric meta-languages. Language workbenches generally
include support for precise name-based navigation when browsing language
specifications locally, but such support is lacking when browsing the same
specifications online in code repositories.
This paper presents a technique to support precise name-based navigation of
language specifications in online repositories using ordinary web browsers. The
idea is to generate hyperlinked twins: websites where verbatim copies of
specification text are enhanced with hyperlinks between name references and
declarations. By generating hyperlinks directly from the name binding analysis
used internally in a language workbench, online navigation in hyperlinked twins
is automatically consistent with local navigation.
The presented technique has been implemented for the Spoofax language
workbench, and used to generate hyperlinked twin websites from various language
specifications in Spoofax meta-languages. However, the applicability of the
technique is not limited to Spoofax, and developers of other language
workbenches could presumably implement similar tooling, to make their language
specifications more accessible to those who do not have the workbench
installed. | Peter D. Mosses | 2023-09-12T10:44:01Z | http://arxiv.org/abs/2309.06121v1 | # Online Name-Based Navigation for
###### Abstract.
Software language design and implementation often involve specifications written in various esoteric meta-languages. Language workbenches generally include support for precise name-based navigation when browsing language specifications _locally_, but such support is lacking when browsing the same specifications _online_ in code repositories.
This paper presents a technique to support precise name-based navigation of language specifications in online repositories using ordinary web browsers. The idea is to generate _hyperlinked twins_: websites where _verbatim copies_ of specification text are enhanced with hyperlinks between name references and declarations. By generating hyperlinks directly from the name binding analysis used internally in a language workbench, online navigation in hyperlinked twins is automatically consistent with local navigation.
The presented technique has been implemented for the Spoofax language workbench, and used to generate hyperlinked twin websites from various language specifications in Spoofax meta-languages. However, the applicability of the technique is not limited to Spoofax, and developers of other language workbenches could presumably implement similar tooling, to make their language specifications more accessible to those who do not have the workbench installed.
code navigation, hyperlinked twins, language specifications, meta-languages, language workbenches +
Footnote †: _E 23, October 23–24, 2023, Cascais, Portugal_ © 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0396-6/23/10.
[https://doi.org/10.1145/3623476.3623528](https://doi.org/10.1145/3623476.3623528)
P.D.Mosses, 2023
## 1. Introduction
Name-based navigation is a significant aspect of software language engineering. IDEs generally include support for precise name-based navigation when browsing code _locally_, but such support is lacking _online_ when using ordinary webbrowsers on code repositories.
Here, we suggest to generate _hyperlinked twin websites_ from code repositories. The code on the website should look the same as it does in an IDE, and the hyperlinks should support the same name-based navigation as the IDE.
Software _meta-languages_ are a particularly important special case of software languages, and language workbenches implement name-based navigation for the meta-languages that they use. Moreover, a language workbench is likely to provide an API to access ASTs and name binding analyses, facilitating generation of hyperlinked twin websites.
To illustrate the suggested technique, the Spoofax language workbench (Spoofax, 2017) has been used to generate hyperlinked twins from various language specifications in Spoofax meta-languages.1 This involved writing only a small amount of code in the Spoofax meta-language Stratego. The code uses generic AST traversal to generate HTML from parsed and analysed specifications, and a simple API for accessing name binding information. The code is available on GitHub.2
Footnote 1: [https://pdmosses.github.io/hyperlinked-twins/](https://pdmosses.github.io/hyperlinked-twins/)
Footnote 2: [https://github.com/pdmosses/sdf/tree/master/org.metaborg.meta.lang](https://github.com/pdmosses/sdf/tree/master/org.metaborg.meta.lang). template/trans/generation/docs/
The rest of this section expands on the above points. Section 2 then explains the main steps of the generation process, which may be of interest to developers of other language workbenches. Section 3 briefly mentions some details specific to the use of Spoofax. Section 4 concludes, and discusses future work. Appendix A shows how a fragment of a language specification looks in Spoofax, in a GitHub repository, and in the hyperlinked twin generated from that repository.
### Name-Based Code Navigation
Software languages generally include _declarations_ that bind names to entities, and _references_ to those entities using the declared names. Name-based navigation between declarations and references is essential for browsing and exploring code in software languages.
Manual name-based navigation can be tedious and error-prone: it may require scrolling, or entering text in search boxes. It becomes significantly more difficult when declarations can be in different files from references to them - particularly when code is divided into hundreds of files, perhaps with a complicated import relationship.
Integrated software development environments (IDEs) support name-based navigation when locally browsing or editing code. When a reference to a name is selected, the IDE allows navigation directly to the relevant declaration(s). When a declaration is selected, the IDE may also support navigation directly to some or all the references to it.
Often, a name can be used in more than one declaration in the same project - either in different namespaces (e.g., types and constructors) or in different parts of the project. Support for name-based navigation using simple textual search may then be significantly inferior to precise navigation using name binding analysis, due to false positives in search results.
Support for name-based navigation is often weak in online code repositories when using ordinary web browsers. GitHub repositories currently support search-based code navigation in about a dozen mainstream programming languages (Bowards et al., 2017), but precise name-based navigation in only one language (Bowards et al., 2017): Python. GitHub's implementation of precise online name-based navigation requires specifying the name binding analysis of the language in terms of stack graphs (Bowards et al., 2017). Apart from the significant amount of expertise and effort required for that, a potential drawback of GitHub's approach may be the difficulty of validating that the navigation in the repository accurately reflects the name-binding analysis implemented in compilers. In any case, precise navigation on GitHub seems likely to be limited to a few major programming languages, despite the possibility for language developers to contribute support for further languages (Bowards et al., 2018).
### Software Meta-languages
A _meta-language_ is a language for specifying languages (primarily their syntax and semantics). A _software meta-language_ is simply a meta-language for specifying software languages. Specifications of major software languages can be large, and difficult to navigate. Moreover, unfamiliarity with a particular software meta-language can hinder manual name-based navigation in language specifications - especially when name binding in the meta-language differs significantly from that in conventional programming languages.
Development and validation of software language specifications is supported by software language workbenches, which generally implement precise name-based navigation. However, that navigation is not generally available for such language specifications when browsing them in online repositories using ordinary web browsers. To browse a language specification with precise name-based navigation, users then need to install a workbench locally and download a copy of the repository.
### Prior Examples of Hyperlinked Twins
The reference manuals of most current programming languages are available online in HTML or PDF, and can be browsed using ordinary browsers. There, hyperlinks already support name-based navigation in grammars that specify language syntax. When the hyperlinks are generated from repositories containing the plain text of the grammars, the reference manuals may then be regarded as hyperlinked twins.
The author has previously developed support for precise name-based navigation of language specifications online: the CBS-beta website,3 which was generated from CBS specifications whose syntax and name binding were specified in Spoofax meta-languages. In (Bowards et al., 2017) he speculated that the approach used to generate the CBS-beta website might be applicable to other software meta-languages; the present paper confirms that, but it turned out not to be possible to reuse the implementation of the generation process directly: the code involved case analysis on the constructs of CBS, and would need to be almost completely reimplemented for each meta-language.
Footnote 3: [https://plancomps.github.io/CBS-beta/](https://plancomps.github.io/CBS-beta/)
Various other specification frameworks provide tool support for generating hyperlinked websites from specifications. For example, the web version of an online book (Bowards et al., 2017) includes hyperlinked pages generated from (literate) Agda source code. If web versions of source code in other specification languages can be generated using the same tool support, it would be interesting to compare the generation process with that outlined here.
## 2. Generating Hyperlinked Twin Websites
The aim is tool support for online name-based navigation of language specifications in ordinary web browsers. The main idea is to generate web pages where verbatim copies of the specifications are enhanced with hyperlinks between name references and declarations. By generating the hyperlinks directly from analyses used internally in language workbenches, online navigation in language specifications is automatically consistent with local navigation.
The proposed technique has been implemented in the Spoofax language workbench, with only modest effort, as outlined in Section 3; it might be possible to implement it in other language workbenches in much the same way.
Suppose that some language workbench is to generate a hyperlinked website from the plain code of a language specification found online in some GitHub repository. The suggested technique is to proceed as follows.
RequirementsThe language workbench needs to parse and analyse the plain language specification. Unless the workbench can directly access the repository online, a local clone is required; and to add the source files for the generated website to the repository using pull-requests, the clone will need to be published as a fork of the repository.
If the language specification is in meta-languages supported by the workbench, it can already parse and analyse them. However, the results also need to be accessible for transformation to HTML. (That should always be possible when the implementation of the meta-languages in the workbench is bootstrapped.) If the specification uses external meta-languages, those languages need to be loaded into the workbench before proceeding.
The following steps are to be applied to a complete language specification project.
Creating ASTsTo support generation steps that involve tree traversal, the first step is to parse the language specification files and create corresponding abstract syntax trees (ASTs). The generation process is to be completely independent of the detailed structure of the ASTs (and hence of the meta-language used for specification). The ASTs might correspond closely to parse trees, or they could be 'de-sugared' to remove semantically-irrelevant structure such as white space, line breaks, and literal terminal symbols (depending on the language).
However, the ASTs must support the addition of name binding information to nodes that correspond to declarations and references. Such nodes also need to reveal the start and end positions of their source text.
The language workbench may automatically parse files and generate their ASTs, otherwise this step needs to be explicitly executed.
Adding name binding analysisBased on the relevant name binding analysis for the meta-language, this step should ensure that all declarations and references can be detected when traversing the ASTs. Each declaration node needs to provide the source text of the declared name; each reference node needs to provide not only the name, but also the declaration(s) to which the reference has been resolved.
In general, a reference may resolve to a declaration in a different file; and a declaration of a single name may be spread across multiple files.
As with generating ASTs, a language workbench may automatically analyse files and add the resulting information to their ASTs, otherwise this step needs to be explicitly executed. The remaining steps are specific to the generation of hyperlinked websites, but could also be made automatic.
Generating plain HTMLThe obvious way to generate HTML that renders exactly as some plain source text is to enclose the text in <pre><code>...</code></pre> tags. In general, this preserves the white space (i.e., indentation and line breaks) of the source text - assuming that the rendering uses a fixed-width font.
The source text might also contain the characters '<', '>', and '&', which are all treated specially in HTML. These need to be replaced by the corresponding HTML entities '<', '>', and '&', respectively.
Subsequent steps are to enclose parts of the source text in tags for hyperlinks and highlighting. To avoid the need for obtaining the source text of all nodes in an analysed AST, plain HTML can be generated gradually, by copying characters from the source file to the generated file while traversing the AST (top down, left to right).
Generating hyperlinksTo generate hyperlinks between declarations and references, the relevant tags can be inserted whenever the traversal reaches the corresponding node.
When the node is a declaration of name \(N\) at position \(P\), the element <span id="\(N\_P\)">N</span> provides a unique target for references that resolve to this declaration of \(N\). The inclusion of the position \(P\) ensures that the ID of the tag is unique in the generated file
Similarly, when a reference to name \(N\) resolves to a single declaration of \(N\) at position \(P\) in file \(F\), the anchor element <a href="\(F\#N\_P\)">N</a> renders as the desired hyperlink to the declaration.
In general, a reference to a single name may resolve (unambiguously) to multiple declarations, possibly located in multiple files. Similarly, multiple references may resolve to the same declaration(s). Such information can be added to HTML elements as a title attribute, which is usually displayed by HTML browsers as a tooltip while hovering over the element. (Pop-ups or modals could support links to multiple targets, but might be too distracting due to the high density of names in language specifications.)
Generating highlightingIndependently of name-based navigation, language workbenches use syntax highlighting to enhance code readability. To make code rendered on the generated website look the same as in a workbench, the website needs to replicate the colours and fonts that it uses.
Websites often highlight code in many software languages automatically. For example, GitHub highlights code in its repositories for hundreds of languages, using Tree-sitter4 parsing and context-aware token scanning to recognise different kinds of language construct - also coping gracefully with incomplete or syntactically ill-formed code.
Footnote 4: [https://tree-sitter.github.io/tree-sitter/](https://tree-sitter.github.io/tree-sitter/)
When a code editor of a language workbench supports the same automatic highlighting framework as a website, it might seem attractive to exploit it, and avoid the need for
adding highlighting markup when generating web pages. However, this seems incompatible with the simple approach adopted here for generating hyperlinks in HTML. In any case, websites seldom support automatic highlighting for software _meta_-languages.
So here, highlighting is added to generated HTML using tags of the form <span class="C">...</span>, where \(C\) indicates the (syntactic or lexical) sort of the enclosed text. The rendering of the text - font colour, style, and weight - can then be specified in CSS (generated from data in the language workbench).
Generating a websiteWhen generating a website from code in a repository, it is natural to generate a separate web page for each code file, and copy the directory structure. The website navigation panel can then display the directory structure as a tree, with links to the individual pages as leaves. The detailed rendering of the navigation panel on the website is not so important, because name-based navigation reduces (or even eliminates) the need for drilling down through the directory structure of a code project when browsing or exploring code online.
Static site generators (SSGs) such as MkDocs5 and Jekyll6 can generate websites automatically from HTML files. Metadata can be prefixed to the HTML content as so-called front matter, e.g., specified in YAML. HTML can also be embedded directly in Markdown, which facilitates the inclusion of headings and links in the generated source files for the website. An important advantage of relying on an SSG to generate web pages from Markdown is that the resulting HTML can be expected to render properly in any (modern) web browser, on mobile devices as well as desktop and laptop computers.
Footnote 5: [https://www.mkdocs.org](https://www.mkdocs.org)
Footnote 6: [https://jekyllrb.com](https://jekyllrb.com)
Figure 1 illustrates the form of the generated HTML. It is a single line from a source file for a hyperlinked twin website (here wrapped to fit the page width).
## 3. Using Spoofax
The Spoofax Language Workbench7 currently uses three main meta-languages: SDF3 for syntax, Statix for name binding, and Stratego for transformation. The meta-languages are themselves specified using Spoofax meta-languages (including the now-deprecated SDF2, NaBL, and NaBL2). A further meta-language is ESV, for specifying editor services, including syntax highlighting details. The specifications of all the meta-languages are available as Spoofax language projects on GitHub in repositories of the MetaBorg organisation.8
Footnote 7: [https://spoofax.dev/references/](https://spoofax.dev/references/)
The Spoofax language workbench is implemented as an Eclipse plugin. To implement generation of hyperlinked websites for an external language specified using Spoofax meta-languages, it is possible to add the required code to the language specification using the plugin. (That is how the CBS-beta website was generated, based on the specifications of the CBS meta-language in SDF3 and NaBL2.)
To add the required code to a Spoofax meta-language such as SDF3, however, it is necessary to build the complete baseline version for bootstrapping Spoofax-2, following the steps explained in the documentation on Spoofax Development.9 By adjusting the version number in the dependency specification of the relevant meta-language, Spoofax can be used to parse, analyse, and transform its own specifications.
Footnote 9: [https://spoofax.dev/howtos/development/](https://spoofax.dev/howtos/development/)
Spoofax provides a Stratego API for reading text from a file, and for parsing it to produce an AST. The parser is generated automatically from the SDF3 specification of the language when the language project is built. The API also supports analysing the name binding of all the files in an Eclipse project, and adding the analysis as annotations on the AST nodes, which can also be accessed using Stratego. And it supports accessing the source text of nodes in the AST, which is based on origin-tracking. The same API includes strategies for obtaining the character positions of name declarations and references.
The generation of a web page with hyperlinks from each source file in a project is specified as a generic traversal in Stratego, independently of the syntax of the language.
For example, Figure 2 shows the Stratego code for generating HTML from references.
Currently, there is no Stratego API for accessing the kinds of individual lexical tokens determined by parsing. As a workaround, highlighting markup is added using pattern matches on the source text (expressed by Stratego strategy combinators) and rendered using CSS generated from an ESV specification. The result corresponds closely to the highlighting in Spoofax.
The documentation site theme used for the main Spoofax documentation website (Material for MkDocs10) automatically generates a navigation panel with the same structure as the source project, with language-independent configuration. However, the underlying MkDocs SSG transforms directory names; a plugin11 is required to ensure that the rendered links in the navigation panel show the untransformed names.
Footnote 10: [https://squidfunk.github.io/mkdocs-material/](https://squidfunk.github.io/mkdocs-material/)
It is straightforward to deploy the generated web pages to GitHub Pages using Actions. Versioned web pages could also be deployed for different releases or branches.12 |
2301.13468 | Complete identification of spin-wave eigenmodes excited by parametric
pumping in YIG microdisks | We present the parametric excitation of spin-wave modes in YIG micro-disks
via parallel pumping. Their spectroscopy is performed using magnetic resonance
force microscopy (MRFM), while their spatial profiles are determined by
micro-focus Brillouin light scattering (BLS). We observe that almost all the
fundamental eigenmodes of an in-plane magnetized YIG micro-disk, calculated
using a micromagnetic eigenmode solver, can be excited using the parallel
pumping scheme, as opposed to the transverse one. The comparison between the
MRFM and BLS data on one side, and the simulations on the other side, provides
the complete spectroscopic labeling of over 40 parametrically excited modes.
Our findings could be promising for spin-wave-based computation schemes, in
which the amplitudes of a large number of spin-wave modes have to be
controlled. | Titiksha Srivastava, Hugo Merbouche, Igor Ngouagnia Yemeli, Nathan Beaulieu, Jamal Ben Youssef, Manuel Munoz, Ping Che, Paolo Bortolotti, Vincent Cros, Olivier Klein, Soraya Sangiao, Jose Maria De Teresa, Sergej Demokritov, Vladislav Demidov, Abdelmadjid Anane, Claudio Serpico, Massimiliano d'Aquino, Gregoire de Loubens | 2023-01-31T08:14:54Z | http://arxiv.org/abs/2301.13468v1 | # Complete identification of spin-wave eigenmodes excited by parametric pumping in YIG microdisks
###### Abstract
We present the parametric excitation of spin-wave modes in YIG microdisks via parallel pumping. Their spectroscopy is performed using magnetic resonance force microscopy (MRFM), while their spatial profiles are determined by micro-focus Brillouin light scattering (BLS). We observe that almost all the fundamental eigenmodes of an in-plane magnetized YIG microdisk, calculated using a micromagnetic eigenmode solver, can be excited using the parallel pumping scheme, as opposed to the transverse one. The comparison between the MRFM and BLS data on one side, and the simulations on the other side, provides the complete spectroscopic labeling of over 40 parametrically excited modes. Our findings could be promising for spin-wave-based computation schemes, in which the amplitudes of a large number of spin-wave modes have to be controlled.
## I Introduction
Novel proposals for spin-wave-based computing schemes necessitate the generation and control of multiple spin-wave (SW) modes [1; 2; 3; 4; 5]. The most standard way to excite SW modes in a magnetic microstructure is by direct inductive coupling. There, the quasi-uniform microwave field, produced on the magnetic volume by an rf antenna, couples to the transverse dynamical component of the magnetization associated with the SW mode, with a maximal efficiency when the applied rf frequency coincides with the eigenfrequency of the mode. However, this method is not adapted to excite modes with anti-symmetric spatial profiles, as their overlap integral with the excitation field is zero [6], nor short-wavelength modes, as their excitation efficiency quickly decreases with their wavevector. Yet, these two categories of modes make up a significant part of the SW k-space. In order to excite a large number of modes irrespective of their spatial profiles, parametric parallel pumping, which does not suffer from these limitations, becomes the ideal choice [7]. In this case, the microwave magnetic field created by the rf antenna is aligned parallel to the static field. As a result, it does not couple to the SW modes directly. Instead, it interacts with the dynamic component of magnetization oscillating at \(2\omega\) in the static field direction, which arises due to the elliptical trajectory of magnetization precession at \(\omega\). An rf field at \(2\omega\) can therefore excite SW modes at \(\omega\). A quantum mechanical picture of this process is a photon generating two magnons of opposite momenta at half its frequency [8]. Since this is a nonlinear process, SWs are excited only if the amplitude of the excitation field exceeds a parametric threshold, which depends on the mode relaxation, and on the mode ellipticity. The threshold power is lower for lower relaxation rates and higher ellipticities.
Parallel pumping has been employed to generate SW modes in extended films [9; 10; 11; 12; 13] and micro- and nano-waveguides [14; 15] of yttrium iron garnet (YIG), as well as in magnetic nanocontacts [16], magnetic tunnel junctions [17], and micro- and nano-dots of Permalloy [18; 19; 20]. It has also been used for SW amplification [21]. All these studies have been limited to a handful of modes. The excitation and identification of many modes in an adequate system would pave the way towards simultaneous control and manipulation of a large number of SW modes for different applications in magnonics.
In this study, we present the excitation and identification of multiple SW modes in YIG microdisks via parametric pumping. The scheme of the experiments is shown in Fig. 1. The SW modes are excited in YIG disks of diameters 1 \(\upmu\)m, 3 \(\upmu\)m and 5 \(\upmu\)m through an integrated rf antenna and detected using a magnetic resonance force microscope (MRFM). Their spatial profiles can also be recorded using micro-focus Brillouin light scattering spectroscopy (\(\upmu\)-BLS). We observe that almost all the SW eigenmodes are accessible by parametric pumping. As expected, these eigenmodes become fewer in number as the size of the disk decreases. For the 3 \(\upmu\)m disk, we label over 40 eigenmodes by comparing its MRFM parametric spectroscopy to micromagnetic simulations, and confirm the identification of as many as 10 of them through their profiles thanks to \(\upmu\)-BLS. Our results could be instrumental in designing basic units for unconventional computing schemes like neuromorphic computing using hyperconnected populations of a large number of eigen-excitations in a single microstructure.
## II Results
### Sample
We use 50 nm thick YIG grown on 0.5 mm thick GGG substrate by liquid phase epitaxy [22]. The characteristics of |
2307.16846 | Phase transitions of McKean-Vlasov SDEs in Multi-well Landscapes | Phase transitions and critical behaviour of a class of MV-SDEs, whose
concomitant non-local Fokker-Planck equation includes the Granular Media
equation with quadratic interaction potential as a special case, is studied. By
careful analysis of an implicit auxiliary integral equation, it is shown for a
wide class of potentials that below a certain `critical threshold' there are
exactly as many stationary measures as extrema of the potential, while above
another the stationary measure is unique, and consequently phase transition(s)
between. For symmetric bistable potentials, these critical thresholds are
proven to be equal and a strictly increasing function of the aggregation
parameter. Additionally, a simple condition is provided for symmetric
multi-well potentials with an arbitrary number of extrema to demonstrate
analogous behaviour. This answers, with considerably more generality, a
conjecture of Tugaut [Stochastics, 86:2, 257-284]. To the best of our knowledge
many of these results are novel. Others simplify the proofs of known results
whilst greatly increasing their applicability. | Alexander Alecio | 2023-07-31T17:10:09Z | http://arxiv.org/abs/2307.16846v2 | # Phase transitions of McKean-Vlasov SDEs in multi-well landscapes
###### Abstract.
Phase transitions and critical behaviour of a class of MV-SDEs, whose concomitant non-local Fokker-Planck equation includes the Granular Media equation with quadratic interaction potential as a special case, is studied. By careful analysis of an implicit auxiliary integral equation, it is shown for a wide class of potentials that below a certain 'critical threshold' there are exactly as many stationary measures as extrema of the potential, while above another the stationary measure is unique, proving the existence of a phase transition. For symmetric bistable potentials, these critical thresholds are proven to be equal and a strictly increasing function of the aggregation parameter, answering a conjecture of Tugaut [Stochastics, 86:2, 257-284]. Further, a simple condition on symmetric multi-well potentials is provided such that the upper critical transition is similarly strictly increasing. To the best of our knowledge many of these results are novel. Others simplify the proofs of known results whilst greatly increasing their applicability.
The general form of McKean-Vlasov SDEs (MV-SDEs) in one dimension is
\[dX_{t}=b(X_{t},\mu_{t})dt+\sigma(X_{t},\mu_{t})dW_{t},\qquad\mu_{t}=\text{Law}( X_{t}),\,t>0\]
In this work we focus on one-dimensional MV-SDEs with separable drifts that depend on \(\mu_{t}\) via an integral functional, \(b_{\theta}(x,\mu)=f_{1}(x)+\theta\mathbb{E}_{\mu}(f_{2}(x))\) and diffusion \(\sigma(x)=\sigma k(x)\), where aggregation strength (\(\theta\)) and diffusion strength (\(\sigma\)) parameter are strictly positive constants. This leads to MV-SDE
\[dX_{t}=\big{(}-V^{{}^{\prime}}(X_{t})-\theta(P^{{}^{\prime}}(X_{t})-\mathbb{E} _{\rho_{t}}[P^{{}^{\prime}}(x)]\big{)}dt+\sigma k(X_{t})dB_{t},\qquad\rho_{t}= \text{Law}(X_{t}),\,t>0 \tag{1}\]
where \(k>\epsilon\). MV-SDEs of this form have been used in numerous applications, of which we mention systemic risk [9] and global optimisation [14].
The concomitant Fokker-Planck Equation of (1) is of non-linear, non-local type,
\[\frac{\partial}{\partial t}\rho=\frac{\partial}{\partial x}\Big{(}(-V^{{}^{ \prime}}(x)-\theta(P^{{}^{\prime}}(x)-\mathbb{E}_{\rho}[P^{{}^{\prime}}(x)])) \rho+\frac{\sigma}{2}\frac{\partial}{\partial x}k^{2}(x)\rho\Big{)} \tag{2}\]
The particular form of the drift with respect to aggregation parameter was chosen so that, when \(P^{{}^{\prime}}=x\) and \(k=1\), the granular media equation is recovered
\[\frac{\partial}{\partial t}\rho=\frac{\partial}{\partial x}(\rho\frac{\partial} {\partial x}(\log(\rho)+V+F*\rho)) \tag{3}\]
with \(F=\frac{1}{2}x^{2}\), [5]. Although we do not use this approach, it is of note that this is a gradient flow, with respect to the Wasserstein metric, of the free energy functional
\[\mathcal{F}[\rho]=\sigma^{2}\int\rho\ln\rho dx+\int V\rho dx+\int\int F(x-y) \rho(x)\rho(y)dxdy\]
see [18, 19].
Traditionally, solutions to MV-SDE (1) are derived as the hydrodynamic limit of a system of SDEs driven by independent Wiener processes, with the same diffusion and drift as (1) but with their empirical measure replacing \(\mu\). This is called the 'Propogation of Chaos' which is described by a number of papers with differing conditions on the drift, for instance [13, 17]. A sufficient setting for the results of this paper is existence and uniqueness of solutions to the Fokker-Planck equation (2), where positivity of solutions yield a priori bounds on functional \(\mathbb{E}(P^{{}^{\prime}})\).
As with the linear stationary Fokker-Planck equation, direct integration of the stationary form of (2) yields the form of the stationary measure
\[\rho_{0}(\sigma,x,m)=\exp\Big{(}-\frac{2}{\sigma^{2}}\int^{x}\frac{V^{{}^{ \prime}}+\sigma^{2}kk^{{}^{\prime}}+\theta(P^{{}^{\prime}}(x)-m)}{k^{2}}\Big{)} \tag{4}\]
with the important caveat that \(\rho(m)\) is an admissible stationary measure if and only if \(m\) is a solution to the auxiliary equation \(m=\int-V^{{}^{\prime}}\rho_{0}(m)dx\), better known as the self-consistency function.
There are well known examples of MV-SDE (1) ([6, 16] for instance), whose self-consistency equation can have a single or multiple solutions depending on parameters choice, thus multiple stationary measures and a richness of long time behaviour. Casting \(\sigma\) as the control parameter, admissible stationary solutions can be viewed as phases, whose characteristic property \(\mathbb{E}_{\rho_{0}(m)}(P^{{}^{\prime}}(x))\) (solutions of the self-consistency function) plays the role of order parameter. Bifurcations of the order parameter as a function of the control parameter are then continuous phase transitions. For a comprehensive tutorial introduction, how this is compatible with Landau theory and the connexion to self-organisation and synergetics, consult [8].
The purpose of this work is to better understand this critical behaviour which, given much of the attraction of MV-SDE models is in this critical behaviour and multitude of long time dynamics, would seem a timely contribution. Particularly, we study the number of stationary solutions and their location, along with phase transition points (or critical transitions) and their dependence on aggregation parameter.
The approach taken in this work is centered on the self-consistency equation. In fact, many of our results take advantage of a useful equivalence, the genesis of which arose from numerical studies simulating SDE (1) without a free energy functional. Earlier simulation approaches would expand \(P^{{}^{\prime}}\), \(V^{{}^{\prime}}\) as a truncated power series \(a^{n},b^{n}\) and recast the SDE as a denumerable system of ODEs,
\[\dot{m}_{i}=\mathbb{E}(dX_{t}^{i})=i\mathbb{E}[a^{n}(X_{t})X_{t}^{i-1}dt]+ \sigma\frac{i(i-1)}{2}\mathbb{E}[b^{n}(X_{t})X_{t}^{i-2}dt]\]
allied with a method of closure; [3, 4].
These moment truncation schemes, were surprisingly effective in representing critical behaviour with only a few retained moments and the simplest closure schemes. This was surprising, as it was not immediate in what manner the self-consistency equation manifested in the moment evolution equations. That the number of retained moments could be taken so low suggested it was the lowest order moment equation that encapsulated the self-consistency equation. This motivated further work, eventually showing the first moment evolution equation and self-consistency equation were equivalent. This has interesting ramifications for the applicability of moment truncation schemes to SDE (1) and, by extension cumulant truncations; [1]
Working with the first moment evolution equation leads to robust results with proofs that are intuitive and without much technical obfuscation. Further they can yield quantitative estimates of parameters if required. Motivated by applications, the approach taken has been to not be overly prescriptive in assumptions of \(V^{{}^{\prime}}\) to maintain flexibility.
### Outline and relation to other works
This paper naturally divides into two halves, the first deals in problems with fairly general conditions, while the second involves results that require some symmetry.
The main results are:
* For suitably smooth multi-well potentials with unimodal \(\rho\) (specifically that \(\bar{V}^{{}^{\prime}}\) is a diffeomorphism, subsequently weakened to a homeomorphism).
Below the lower critical threshold, \(\sigma<\sigma_{c}^{l}\), there are exactly as many stationary measures as simple roots of \(\bar{V}^{{}^{\prime}}\) - Proposition 1.2 and Corollary 1.3 * Above the upper critical threshold, \(\sigma_{c}^{u}<\sigma\), there is only one stationary measure - Proposition 1.8 * Multimodal \(\rho\) is considered as a special case and a counter-example given where the number of stationary measures is less than the number of roots of \(V^{{}^{\prime}}\) - Remark 1.6
* For antisymmetric \(V^{{}^{\prime}}\), \(P^{{}^{\prime}},k\), but otherwise looser assumptions,
* With a symmetric bistable potential \(\sigma_{c}^{l}=\sigma_{c}^{u}\) and \(\sigma_{c}^{u}(\theta)\) is an increasing function - Proposition 2.2 and 2.8.
* We define a class of symmetric multi-well potentials that behave similarly to the bistable potential, particularly that \(\sigma_{c}^{u}(\theta)\) remains an increasing function - Proposition 2.15 and Corollary 2.16
There has been quite a lot of sustained activity related to these questions. Recently, there has been a growing literature based on modern variational methods for the granular media equation, [20]. Less recently [16] was amongst the first to study the symmetric polynomial bistable problem, studying convergence properties variationally. However, a result substantially like Proposition 2.2 is given, through a study of the self-consistency equation. The proof relies heavily on the the GHS inequality, which is resistant to generalisation. For a similar, entirely non-variational study, see [6]
In a comprehensive series of papers, author Tugaut has studied many problems related to this one. The closest in scope and applicability to this paper is [21], which employs a self-consistency equation centered method to study polynomial bistable potentials and positive interactions of the form \(x^{2n}*\rho,\,n\geq 2\). The first half of [21] is dedicated to quadratic interactions, corresponding to the Granular Media equation with interaction kernel \(F=\frac{x^{2}}{2}\) which, as already discussed, our interactions incorporate. In this overlap our results strengthen his with more general assumptions, in the process simplifying the proofs. For instance, Proposition 1.2/ Corollary 1.3 of [21] demonstrates that, for sufficiently small \(\sigma\), there are at least as many stationary measures as roots \(V^{{}^{\prime}}\) which is improved to equality by our Proposition 1.2. Theorem 2.1 of [21] establishes there are exactly three solutions for bistable potentials of the form \(\sum_{j=2}^{N}|a_{j}|x^{2j}-x^{2}\). Our Proposition 2.2 simplifies the argumentation and extends the result to a large class of bistable potential. Further, for such polynomial bistable potentials, on p.270 it is conjectured \(\sigma_{c}^{u}(\theta)\) is increasing, which
we answer in Proposition 2.8 and extend to symmetric multi-well potentials in Proposition 2.15. Although we do not study interactions of the form \(x^{2n}*\rho,\,n\geqslant 2\), see the discussion in the conclusion.
### Set-up
We formalise the remarks made in the introduction, and introduce definitions needed throughout the rest of this work.
**Definition 0.1**.: _(Self-Consistency Function) The self-consistency function is_
\[G_{(\sigma,\theta)}(m)=\int(P^{{}^{\prime}}(x)-m)\rho_{0}(\sigma,x,m)dx \tag{5}\]
_Solutions of the equation \(F_{\sigma}(m)=0\) correspond to admissible stationary measures of (1)_
The following term has become established in related literature, highlighting parallels between (6) and a stationary measure of a Smoluchowski SDE:
**Definition 0.2**.: _(The Effective Potential) \(\bar{V}_{\theta}(x,m)=\int\frac{V^{{}^{\prime}}+\theta(P^{{}^{\prime}}-m)}{k^ {2}}\), \(\theta\in\mathbb{R}^{+}\)_
In this work we actually take the primitive rather than the potential as the fundamental object of study.
It is useful to distinguish the exponential 'Gibbs' measure' part of the stationary measure
\[\rho_{0}(\sigma,x,m)=\frac{1}{k^{2}}\exp\Big{(}-\frac{2}{\sigma^{2}}\bar{V}_{ \theta}(x,m)\Big{)}:=\frac{1}{k^{2}}\rho(\sigma,x,m) \tag{6}\]
**Proposition 0.3**.: _The self-consistency equation is equivalent to to the first moment evolution equation_
\[F_{(\sigma,\theta)}(m)=\frac{1}{\theta}\int-\frac{V^{{}^{\prime}}}{k^{2}}\rho (\sigma,x,m)dx \tag{7}\]
Proof.: Working from Definition 5
\[G_{\sigma}(m)=\int(P^{{}^{\prime}}(x)-m)\rho_{0}(\sigma,x,m)dx= \int(P^{{}^{\prime}}(x)-m)\frac{1}{k^{2}}\exp(\bar{V}_{\theta}(x,m))dx\\ =\int\partial_{x}\big{(}\exp(-\frac{2}{\sigma^{2}}\int\frac{P^{{ }^{\prime}}-m}{k^{2}})\big{)}k^{2}\exp(-\frac{2}{\sigma^{2}}\int\frac{V^{{}^{ \prime}}}{k^{2}})dx \tag{8}\]
Assuming sufficient regularity to ignore the boundary terms, the result follows from an integration by parts.
Throughout this work \(J\) always denotes a connected compact set. Conditions introduced in each section will always ensure \(\rho\) is normalisable and \(F_{\sigma}\) exists for all \(m,\sigma,\theta\) in their respective domains. We take \(\rho\) normalised in Section 1 and unnormalised in Section 2. We will frequently suppress \(\theta\) dependence when context allows.
In terms of \(F_{\sigma}\), a critical transition is any value of \(\sigma\) where the number of roots changes. Assuming they exist, the upper (lower) critical transition is the largest (smallest) such value.
## 1. Generic Multi-well Potentials
This section is devoted to MV-SDE (1) with a large class of polynomial bounded multi-well potentials and strictly convex (unimodal)effective potentials, subsequently loosened to convex. The non-convex (or multimodal) problem is considered as a series of convex ones.
### Definitions and Assumptions
We introduce the following definitions for this section. First is the inverse of a function that maps \(m\) to the mode of (6)
**Definition 1.1**.: _(Modal dependence) \(x^{*-1}=\frac{1}{\theta}(V^{{}^{\prime}}+\theta P^{{}^{\prime}})\)._
The second is the convenient short-hand \(a=\int_{0}^{x}\frac{1}{k^{2}}\).
Next, we introduce a series of conditions for the results of this section. They are phrased in terms of \(V^{{}^{\prime}}\) and \(\bar{V}\), which was seen to be their most natural form given the proof of Proposition 1.2 and others.
1. \(V^{{}^{\prime}}(x,0)\), \(P^{{}^{\prime}}\in C^{2}(\mathbb{R})\), \(k\in C^{1}(\mathbb{R})\).
2. \(\|a^{{}^{\prime}}\|_{\infty}=\|\frac{1}{k^{2}}\|_{\infty}<\infty\)
3. \(\lim\limits_{|x|\to\infty}\frac{\bar{V}(x,0)}{x^{2}}>0\)
4. \(\lim\limits_{x\to\pm\infty}\bar{V}^{{}^{\prime}}(x,0)=\lim\limits_{x\to\pm \infty}V^{{}^{\prime}}=\pm\infty\)
5. \(|V^{{}^{\prime}}|<K(1+x^{2N})\)
6. \((x^{*-1})^{{}^{\prime}}>0\)
Assumptions 1, 3, 4 & 5 provide useful bounds ensuring, amongst other things that \(F_{\sigma}(m)\) exists for all \(m\), there is than sufficient regularity for the integration by parts and the boundary terms are null. Assumption 4 ensures \(F_{\sigma}(m)\) is bounded away from zero far away from the origin. It is possible to weaken the last condition, which we consider as special cases of the above. See Corollary 1.3 and the Remark 1.6.
**Proposition 1.2** (Convergence of \(F_{\sigma}(m)\) (5)).: _Under Assumptions (1)-(6), \(x^{*}\) exists and, as \(\sigma\downarrow 0\), \(F_{\sigma}(m)\) converges to \(-\frac{V^{{}^{\prime}}}{k^{2}}\circ x^{*}(m)\) uniformly on compact sets._
_Moreover, let \(J\) denote some connected compact set containing \(N<\infty\) zeros of \(V^{{}^{\prime}}\) strictly in its interior. If all these zeros are simple, then there exists a critical threshold \(\sigma_{c}\) such that for \(\sigma<\sigma_{c}\), \(F_{\sigma}(m)|_{\bar{x}^{*-1}(J)}\) has precisely \(N\) zeros._
Proof.: That \(x^{*}(m)\) exists and its range is \(\mathbb{R}\) is an immediate consequence of Condition 4 and 6.
By Laplace's Method,
\[\lim_{\sigma\downarrow 0}\int_{\mathbb{R}}f(x)\frac{\exp(-\frac{2}{\sigma^{2}}( \bar{V}+ma))}{\sqrt{\frac{\pi}{V^{{}^{\prime\prime}}(x^{*})}}\exp(-\frac{2}{ \sigma^{2}}\bar{V}(x^{*}))}=f(x^{*}) \tag{10}\]
where we have suppressed \(m\) dependence in \(x^{*}\).
On compact sets, it follows from our assumptions that \(V^{{}^{\prime}}\), \(P^{{}^{\prime}}\), \(\|\frac{V^{{}^{\prime}}}{k^{2}}\exp(-(\int\frac{\bar{V}-\theta ma}{k^{2}})\|_ {1}\) are all bounded and \(V^{{}^{\prime\prime}}(x^{*})=\frac{(x^{*-1})^{{}^{\prime}}}{k^{2}}(x^{*})> \epsilon>0\). The standard proof of the Laplace method shows that convergence is uniform on compact sets when \(f=-\frac{V^{{}^{\prime}}(x)}{k^{2}}\)1 or a constant.
Footnote 1: Strictly speaking, a constant may need to be added to \(-V^{{}^{\prime}}\) to ensure it is of one sign on the chosen compact set before applying the Laplace method. This is then subtracted off the limit to get the result
As limit (10) with \(f=1\) is bounded away from \(0\), the limit of the reciprocal exists and must be uniformly convergent also. \(\lim_{\sigma\downarrow 0}F_{\sigma}(m)\) is the product of this reciprocal and (10) with \(-\frac{V^{{}^{\prime}}(x)}{k^{2}}\), it is also uniformly converging too, given the denominators from (10) cancel each other.
To establish the second part of the Proposition, it's sufficient to show \(\frac{\partial F_{\sigma}}{\partial m}\) has the sign of \(-V^{{}^{\prime\prime}}(x_{0})\) in some neighbourhood of a simple root \(x_{0}\), for all \(\sigma\) arbitrarily small. Such a neighbourhood can be made arbitrarily small, by the uniform convergence of \(F_{\sigma}\). Given there must be at least one root in the neighbourhood by the intermediate value theorem, and that \(F_{\sigma}\) is there strictly increasing/decreasing, we can conclude there is one root for sufficiently small \(\sigma\).
All that remains is to establish that there is a some compact interval \(A\) on which \(\frac{\partial F_{\sigma}}{\partial m}\), has the sign of \(-V^{{}^{\prime\prime}}(x_{0})\), for all \(x^{*}\in A\). The crux of the method is to identify and exploit the covariance structure of \(\frac{\partial}{\partial m}F_{\sigma}\):
\[\frac{\partial}{\partial m}F_{\sigma}=\frac{2}{\sigma^{2}}\Big{(}\int_{ \mathbb{R}}-aV^{{}^{\prime}}\rho_{0}dx-\int_{\mathbb{R}}a\rho_{0}dx\int_{ \mathbb{R}}-V^{{}^{\prime}}\rho_{0}dx\Big{)}=\frac{2}{\sigma^{2}}\text{Cov}_{ x^{*}}(a,-V^{{}^{\prime}})\]
with \(a=\int^{x}\frac{1}{k^{2}}dx\). Particularly useful is the alternate form \(\operatorname{Cov}_{m}(a,-V)\),
\[\iint_{\mathbb{R}^{2}}\big{(}a(x)-a(y)\big{)}\big{(}-V^{{}^{\prime}}(x)-(-V^{{} ^{\prime}}(y))\big{)}\frac{\rho(x,x^{*})}{k^{2}(x)}\frac{\rho(y,x^{*})}{k^{2}(y )}dxdy \tag{11}\]
where we have used (6). Assuming \(x,y\) are sufficiently close to \(x_{0}\) that \(V^{{}^{\prime}}(x)\) is suitably well approximated by its Taylor expansion, and recalling \(a(x)\) is strictly increasing, it can be seen that the integrand \((a(x)-a(y))(-V^{{}^{\prime}}(x)-(-V^{{}^{\prime}}(y)))\) is positive or negative depending on the sign of \(-V^{{}^{\prime\prime}}(x_{0})\), by taking the cases \(x<y\), \(y<x\). By bounding the integrand, we will demonstrate rigorously that \(\rho\) weights this region sufficiently that the integral has the same sign.
Without loss of generality, we assume \(-V^{{}^{\prime\prime}}(x_{0})\geq 0\). As before, we replace \(\mathcal{Z}\) with \(\bar{\mathcal{Z}}\)
\[\bar{\mathcal{Z}}=\sqrt{\pi}\sigma\exp(-\frac{2}{\sigma^{2}}\bar{V}(x^{*}))\]
incorporating \(\bar{V}(x^{*})\) from \(\bar{\mathcal{Z}}\) into the exponent part of \(\rho\). We proceed by splitting the domain of integration into two parts, \(R_{1}(x^{*})=B_{R}(x^{*},\,x^{*})\) for \(x^{*}\) in some set \(A\), and its complement \(R_{2}\).
To define these regions, we consider the following factors. By Assumption 1 there exists an interval around root \(x_{0}\),
\[I=[x_{0}-2\delta,x_{0}+2\delta]\]
on which \(-V^{{}^{\prime\prime}}>-\underline{V}^{{}^{\prime\prime}}>0\). Further on \(I\) we can bound \(a^{{}^{\prime}}=\frac{1}{k^{2}}\) below by \(\frac{1}{k^{2}}>0\) by Assumption (1) and the Extremal Value theorem applied to \(k\). Then with the mean value theorem,
\[(a(x)-a(y))(-V^{{}^{\prime}}(x)-(-V^{{}^{\prime}}(y)))\geq-\underline{V}^{{}^ {\prime\prime}}\frac{1}{\underline{k}^{2}}(x-y)^{2}>0\]
for \(x,y\in I\times I\).
We bound \(\bar{V}(x)-\bar{V}(x^{*})\) above by \(\alpha(x-x^{*})^{2}\), \(\alpha>0\) for \([x,x^{*}]\subset I\) using the Taylor remainder theorem and Assumption 1.
Consequently we identify
\[A=[x_{0}-\delta,x_{0}+\delta]\]
\[R(x^{*})=\bar{B}_{\delta}(x^{*},x^{*}),\qquad R=\delta,\quad x^{*}\in A\]
By construction, \(R(x^{*})\subset I\times I\) when \(x^{*}\in A\), so the previous establish bounds are still applicable.
We bound (11) below on \(R_{1}\) by
\[\frac{2}{\sigma^{2}}\iint_{R_{1}}\frac{-\underline{V}^{{}^{\prime\prime}}}{ \underline{k}^{2}}\big{(}(x-x^{*})-(y-y^{*})\big{)}^{2}\frac{\exp\Big{(}-\frac{2 \alpha}{\sigma^{2}}\big{(}(x-x^{*})^{2}+(y-x^{*})^{2}\big{)}\Big{)}}{ \underline{k}^{4}2\pi\sigma^{2}}\]
where we have dropped the dependency of \(R\) on \(x^{*}\), as our bounds are uniform in \(x^{*}\). Transforming to polar coordinates - \(r\cos(\theta)=(x-x^{*})\), \(r\sin(\theta)=(y-x^{*})\) - we have
\[\frac{2}{2\pi\sigma^{4}}\int_{0}^{2\pi}(1-\sin(2\theta))d\theta\,\frac{- \underline{V}^{{}^{\prime\prime}}}{\underline{k}^{6}}\int_{R_{1}}r^{2+1}\exp( -\frac{2}{\sigma^{2}}\alpha r^{2})dr\quad\varpropto\quad\frac{1}{\sigma^{4}} (\sigma^{4}-\exp(-\frac{2}{\sigma^{2}})(\dots))\]
where \((\dots)\) is polynomial in \(\frac{1}{\sigma^{2}}\). Consequently the second, negative, term can be made arbitrarily small.
On \(R_{2}\), our argument is similar. We can bound the integrand \(|(V^{{}^{\prime}}(x)-V^{{}^{\prime}}(y))(a(x)-a(y))|<2K(2+x^{2N}+y^{2N})|x|\) by Assumption 2 and 5. Moreover, independently of bounded \(x^{*}\), as \(\bar{V}-\bar{V}(x^{*})\) is (super-) quadratic outside some finite radius (Assumption 3) and \(x^{*}\) is the sole minimum, we can bound \(\bar{V}(x)-\bar{V}(x^{*})>\beta(x-x^{*})^{2}\), \(\beta>0\), \(x^{*}\in A\).
Putting these bounds together and transferring to polar coordinates once again, we have the following lower bound on \(R_{2}\)
\[-K\int\frac{r^{2N+2}}{\tilde{k}^{6}}\exp(-\frac{2\beta}{\sigma^{2}}r^{2})dr\]
which decays \(\exp(-\frac{2}{\sigma^{2}})\). Adding these two bounds we see the integral has the sign of \(-V^{{}^{\prime\prime}}(x_{0})\) for all \(x^{*}\in A\), and the result follows.
Assumption 6 is by far the most onerous restriction needed for Proposition 1.2, requiring \(\bar{V}^{{}^{\prime}}\) to be a diffeomorphism. This can be loosened to a homeomorphism with the following assumptions.
* \(\bar{V}^{{}^{\prime\prime}}(x,0)\geq 0\), where the lower bound is attained at a finite number of isolated points \(\{\tilde{x}_{i}\}_{i}^{n}\) which are still global maxima of \(\rho\) when \(x^{*}=\tilde{x}\) and
* \(V^{{}^{\prime\prime}}(\tilde{x},0)\neq 0\), \(\forall\tilde{x}\in\{\tilde{x}_{i}\}_{i}^{n}\)
**Corollary 1.3**.: _With Assumption 6 replaced by 7 & 8, Proposition 1.2 still holds in its entirety._
Proof.: If Assumption 7 holds, we can apply Proposition 1.2 to all but a finite number of open intervals containing a \(\tilde{x}\) which can be made arbitrarily small with \(\sigma\). In these intervals,
an argument very similar to Proposition 1.2 will be made. We outline this, carefully noting any divergences.
As \(\tilde{x}\) is the unique maxima of \(\rho(x,\tilde{x})\), it is still possible to define \(x^{*}\), which is continuous. Moreover, the Laplace method can be adapted to the lowest order non-null derivative which must be of even order and positive. Applying the method to both the integral and \(\mathcal{Z}\) in the same way as the first part of the above formulation, we see the limit is as described.
The second part (bounding the derivative at roots of \(V^{{}^{\prime}}\) away from \(0\)) is almost entirely applicable, except we caveat that the bound \(\bar{V}-\bar{V}(x^{*})>\beta(x-x^{*})^{2}\) only holds outside an arbitrarily small interval around \(x^{*}\). As this bound need only hold on \(R_{2}\), this is not an obstacle.2
Footnote 2: The derivative at \(\tilde{x}\) may blow up as product of \(\sigma^{-2}\) and an integral with a non-zero \(\sigma^{2-\delta}\) term. As \(F_{\sigma}\) is totally bounded this does not affect the proof
To show convergence is uniform, we invoke Assumption 8. With this, we know \(V^{{}^{\prime\prime}}(x)\) is of one sign in some closed interval centered on \(\tilde{x}\). To that interval, we apply the method of the proof of the second part of Proposition 1.2 to demonstrate the derivative cannot be null (with the same caveats as described on the last paragraph), and conclude \(F_{\sigma}\) is the limit of strictly/increasing decreasing functions on this region. As \(F_{\sigma}\) is bounded on such an interval, convergence is uniform by corollary of the Helly Selection theorem, Exercise 7.13 (b) of [15]. As there are at most a finite number of such intervals, we conclude convergence is uniform on \(J\).
We furnish these propositions with a few remarks and examples.
_Remark 1.4_ (Examples).: Consider the simplest polynomial bistable potential \(V^{{}^{\prime}}=x^{3}-x\) with quadratic interaction \(P^{{}^{\prime}}=x\). Then \(\bar{V}^{{}^{\prime}}=x^{3}+(\theta-1)x\). There are three cases \(\theta<1,\theta=1,\theta>1\).
* \(\theta>1\), \(\bar{V}^{{}^{\prime}}\) is strictly increasing, so Proposition 1.2 can be applied.
* \(\theta<1\), \(\bar{V}^{{}^{\prime}}\) is not strictly increasing and cannot be tackled with neither Proposition 1.2 of Corollary 1.3, though it can be with the results of section 2.1
* \(\theta=1\) is the transition between the two regimes, which manifests as a point of inflexion at \(0\). However \(-V^{{}^{\prime\prime}}(0)=1\) so Corollary 1.3 applies.
[2] considers the related problem of \(\bar{V}^{{}^{\prime}}=\frac{x^{3}+(\theta-1)x}{1+x^{2}}\), where the same parameter regime holds.
_Remark 1.5_.: (Simple zeros) To understand the restriction to simple zeros, consider \(V^{{}^{\prime}}=x^{3}\), \(P^{{}^{\prime}}=x^{2}\). \(V^{{}^{\prime}}\) has a double root at \(0\), but for sufficiently small \(\sigma\), \(F_{\sigma}>0\) near zero (see Remark 2.6). Only at \(\sigma=0\) can there be a zero of \(F_{\sigma}\) at \(0=\bar{x}^{*-1}(0)\).
_Remark 1.6_.: (Multimodal \(\rho\)) In requiring \(\bar{V}^{{}^{\prime\prime}}>0\), Proposition 1.2 is limited to unimodal \(\rho\). This can be generalised to \(\rho\) possessing a finite number of modes, where Laplace's method applies to the largest of them.
The general form of \(\rho\) in (6) means \(\rho\) can still parameterised by \(m\) as \(x^{*-1}(m):=\arg\max_{x}\rho(m,x)\) is an increasing, piecewise continuous function. Discontinuities correspond to multiple modes of equal height, where it is still possible to define the limit \(F_{\sigma}\). For example, the bistable potential, \(P^{{}^{\prime}}=x\) with \(\theta<1\), \(\bar{x}^{*-1}(m)\) will be discontinuous at \(m=0\) but \(F_{\sigma}(0)\) is the average of the left and right limit, which is itself zero.
Discontinuities may restrict the zeros of \(F_{\sigma}\). Consider \(V^{{}^{\prime}}=x(x^{2}-1)(x^{2}-4)\), \(P^{{}^{\prime}}=x\) and \(\theta=1\). \(V^{{}^{\prime}}\) has a root at \(1\), but \(x^{*}(m)\) cannot be unity because of discontinuity.
If \(P^{{}^{\prime}}\) is increasing, it is worthy of note for the sequel that any problem will eventually be unimodal for sufficiently large \(\theta\) as \(|V^{{}^{\prime}}|\) is eventually unbounded, Assumption 4
**Lemma 1.7**.: _Suppose \(P^{{}^{\prime\prime}}\) has a positive lower bound. Then there exists \(\theta^{*}\), such that for all \(\theta>\theta^{*}\)\(\bar{V}^{{}^{\prime}}_{\theta}(x,0)\) is strictly increasing._
It is considerably easier to demonstrate the existence of \(\sigma^{u}_{c}\) than \(\sigma^{l}_{c}\). The following Proposition uses a very straightforward bounding argument with no extra assumptions to Proposition 1.2
**Proposition 1.8**.: _There exists an Upper Critical threshold \(\sigma^{u}_{c}\), such that for \(\sigma>\sigma^{u}_{c}\), \(F_{\sigma}(m)\) has only one root._
Proof.: By Assumption 2 and 4 there exists an \(R\) such that \(|\frac{V^{{}^{\prime}}}{k^{2}}|>c\), on \(\mathbb{R}\backslash B_{0}(R)\).
\[\frac{\partial F}{\partial m}=\int_{\mathbb{R}}a(-V^{{}^{\prime}})\rho(\sigma,x,m)dx\]
Splitting the integral over two domains and using that \(a\) is increasing with \(a(-R)<0<a(R)\) repeatedly:
\[\int_{-R}^{R}a(-V^{{}^{\prime}})\rho(\sigma,x,m)dx <C\int_{-R}^{R}\rho(\sigma,x,0)\exp(\frac{2}{\sigma^{2}}am)dx \tag{13}\] \[<C_{1}\big{(}\mathbb{I}_{m\geq 0}\exp(\frac{2a(R)}{\sigma^{2}}m)+ \mathbb{I}_{m<0}\exp(\frac{2a(-R)}{\sigma^{2}}m)\big{)} \tag{12}\]
with \(C=\sup_{[-R,R]}|a(-V^{{}^{\prime}})|\) and \(C_{1}=2RC\). Similarly,
\[\int_{\mathbb{R}\setminus B_{0}(R)}\frac{aV^{{}^{\prime}}}{k^{2} }\rho(\sigma,x,m)dx >C_{2}\int_{R}^{\infty}\rho(\sigma,x,0)\exp(\frac{2}{\sigma^{2}}am)dx \tag{15}\] \[>2C_{2}I_{\sigma}\Big{(}\mathbb{I}_{m\geq 0}\exp(\frac{2a(R)}{ \sigma^{2}}m)+\mathbb{I}_{m<0}\exp(\frac{2a(-R)}{\sigma^{2}}m)\Big{)} \tag{14}\]
where \(C_{2}=c\cdot\min(-a(-R),a(R))\) and \(I_{\sigma}=2\int_{R}^{\infty}\rho(\sigma,x,0)dx\)
The monotone convergence theorem shows \(I_{\sigma}\) is an increasing unbounded function. Therefore it is possible to find \(\sigma_{c}^{u}\) such that, for \(\sigma>\sigma_{c}^{u}\), \(I_{\sigma}>C_{1}+\epsilon\). Then, subtracting (15) from (13), we have \(\frac{\partial F}{\partial m}(m)<-\epsilon<0\). With this bound, and as \(F(m)\) exists for any finite \(m\), \(\lim_{m\to\pm\infty}F(m)=\mp\infty\). As \(F(m)\) can be bounded above anywhere by a decreasing linear function when \(\sigma>\sigma_{c}^{u}\), we conclude that \(F(m)\) has a unique root.
_Remark 1.9_.: This argument can be extended to multimodal \(\rho\).
As \(F_{\sigma}\) is bounded away from zero outside some finite interval by Assumption 4, we can count roots on all \(\mathbb{R}\). We combine this with a restatement of Proposition 1.2 and 1.8, with the same assumptions, in terms of stationary measures of MV-SDE(1).
**Theorem 1.10**.: _Stationary Measures of MV-SDE (1) Suppose \(V^{{}^{\prime}}\) has \(N<\infty\) zeros, \(\{x_{i}\}_{i}^{N}\), all of which are simple. Then there exists critical transition thresholds \(\sigma_{c}^{l},\,\sigma_{c}^{u}\) such that \(\sigma>\sigma_{c}^{u}\) MV-SDE (1) has one stationary measure while \(\sigma<\sigma_{c}^{l}\), it possesses exactly \(N\) stationary measures._
Proof.: Proposition 1.8 can be applied verbatim. By Assumption 4, there exists a finite interval \(J\) such that \(\{x_{i}\}_{i}^{N}\in J^{o}\) and \(|\frac{V^{{}^{\prime}}}{k^{2}}|>\delta\), \(x\in J^{c}\). Applying Proposition 1.2 to \(J\), noting \(F_{\sigma}\) must be bounded away from \(0\) for sufficiently small \(\sigma\), and the bijection between zeros and stationary measures, we can conclude the result.
Briefly, we compare MV-SDE (1) with \(\theta>0\) to \(\theta=0\). Below \(\sigma_{c}^{l}\), every stationary measure is a unimodal distribution whose mean (and mode) can be made arbitrarily close to any extremal point (maxima and minima) of \(V\). When \(\theta=0\), where this is one stationary
measure whose modes coincide with the minima of \(V\). Above \(\sigma_{c}^{u}\), MV-SDE (1) becomes ergodic but is still unimodal.
_Remark 1.11_ (More Examples).: Section 4 of [10] introduces simple quadratic and polynomial bistable potential perturbed by separable and non-separable fluctuations and quadratic interaction.
[10] approaches the problem by defining the potential \(V_{0}=\int_{x}\frac{V^{{}^{\prime}}}{k^{2}}\). Adding to it the separable perturbation, \(\delta\cos(\frac{x}{\epsilon})\), Proposition 1.2 easily applies with \(\theta>\frac{\delta}{\epsilon^{2}}\).
The non-separable perturbation \(\delta\mathbb{I}_{[-a,a]}x^{2}\cos(\frac{x}{\epsilon})\) requires a little more work as it is non-differentiable. Reading the remarks in [10], the indicator function is producted into the definition to control the fluctuations as \(|x|\to\infty\) to later apply the homogenisation theorem, so \(a\) should be read as 'large'. In light of this, it is reasonable to only consider \(a\) very close to a zero of \(cos(\frac{x}{\epsilon})=0\), and interpolate between the two with quadratic interpolants. This is in the unbounded region, so again Proposition 1.2 can be applied for suitably large \(\theta\)
## 2. Symmetric Effective Potentials
This section is concerned with SDE (1) with a symmetric effective potential (antisymmetric \(P^{{}^{\prime}}\) and \(-V^{{}^{\prime}}\)). Whilst a few results could be deduced from Section 1, they would have narrower applicability and most, such as those on the Critical Transition and its dependence on \(\theta\), cannot.
This is because, in return for this symmetry, some of the most onerous restrictions of Section 1 can be relaxed, such as the prohibition on double roots in \(-V^{{}^{\prime}}\) and non-increasing \(\bar{V}\) can be lifted. Amongst other interesting possibilities, this allows repulsive-attractive \(P^{{}^{\prime}}\).
### Bistable Potential
This section is concerned with SDE (1) with a antisymmetric \(P^{{}^{\prime}}\) and \(-V^{{}^{\prime}}\), which is bistable, i.e possessing three roots at \(-x^{*},0,x^{*}\). The polynomial bistable potential is well studied, [6, 7, 12, 16].
The approach here is still rooted in a study of the self-consistency equation \(F_{\sigma}\). After establishing some key propositions about the number of roots of \(F_{\sigma}\), critical transition points are studied in detail.
The most basic assumptions are
1. \(-V^{{}^{\prime}}\), \(P^{{}^{\prime}}\) and \(k^{{}^{\prime}}\) are antisymmetric and \(C(\mathbb{R})\).
2. \(-V^{{}^{\prime}}\) has three roots at \(\{0,\pm|x^{*}|\}\), and \(-V^{{}^{\prime}}(x)<0\,x>x^{*}\)
3. \(\lim_{|x|\to\infty}\frac{\bar{V}(x,0)}{x^{2}}>0\)
4. \(|V^{{}^{\prime}}|<K(1+x^{2N})\)
Assumption 1 and 3 ensures enough regularity for the integration by parts in section 0.2 and that the boundary terms can be discarded. The polynomial bounds in the fourth ensure integrability and the second simply outlines a bistable potential. Further assumptions (5-8) will be introduced when the critical transition point is studied.
Parity restrictions (Assumption 1) are manifested in \(F_{\sigma}\) according to the following lemma
**Lemma 2.1**.: _F(m) is antisymmetric_
Proof.: Assumption 1 implies \(\rho(x,m)=\rho(-x,-m)\), where \(\rho(x,m)=\exp(-\frac{2}{\sigma^{2}}(\bar{V}(x,m)))\). Then
\[F(-m)=\int_{-\infty}^{\infty}-V^{{}^{\prime}}(x)\rho(-m,x)dx\overset{ y=-x}{=}-\int_{\infty}^{-\infty}(-V^{{}^{\prime}}(y))\rho(-m,-y)dy=\] \[\int_{-\infty}^{\infty}V^{{}^{\prime}}(y)\rho(m,y)dy=-F(m)\]
holding regardless of \(\sigma\) or \(\theta\)
The following proposition characterises the roots of \(F_{\sigma}\). For constant diffusion, compare with (3.21) of [16], Theorem 3.31 of [6] and, particularly, Theorem 2.1 of [21]
**Proposition 2.2** (Properties of \(F(m)\) ).:
1. _F has a root at 0._
2. _There is at most one strictly positive (negative) root of_ \(F_{\sigma}\)_._
3. _These additional roots exist iff_ \(\left.\frac{\partial F}{\partial m}\right|_{m=0}>0\)__
Proof.: Part \((i)\) follows immediately from the antisymmetry of \(F_{\sigma}\), Lemma 2.1.
For Parts \((ii)\) & \((iii)\), we derive the series representation of \(F_{\sigma}\) by substituting \(\exp(\frac{2}{\sigma^{2}}mx)\) with its series representation,
\[F(m)=\int-V^{{}^{\prime}}(x)\rho(x,0)\sum_{n}\frac{(\frac{2}{\sigma^{2}}ma)^{ n}}{n!}dx\]
Given the antisymmetry of \(F_{\sigma}\), we focus on \(m\in\mathbb{R}^{+}\) The strategy for the proof is to show the coefficients of the series are at most change sign once, from positive to negative. It is then straightforward to demonstrate that \(F(m)\) must also have at most one root in \(\mathbb{R}^{+}\)
With the polynomial growth conditions 3 & 4, \(F(m)\) can be bounded above as moments of a Gaussian distribution. Exchange of the summation and integral
\[F(m)=\sum_{n}\frac{(\frac{2}{\sigma^{2}}m)^{n}}{n!}\int-V^{{}^{\prime}}a^{n}\rho( x,0)dx:=\sum_{n}\frac{(\frac{2}{\sigma^{2}}m)^{n}}{n!}I(n) \tag{16}\]
can then be justified with an application of the Tonelli-Fubini Theorem.
From the antisymmetry of the integrand, \(I(2n)=0\). Reciprocally, by the symmetry of the integrand of \(I(2n-1)\)
\[I(2n-1)=2\int_{0}^{\infty}-V^{{}^{\prime}}a^{2n-1}\rho(x,0)dx \tag{17}\]
By assumption on \(k\), \(a\) must be strictly increasing. Consequently,
\[\begin{split}&\big{(}\frac{a}{a(x^{*})}\big{)}^{2n-1}<1,\,x<x^{*} \\ &\big{(}\frac{a}{a(x^{*})}\big{)}^{2n-1}\geqslant 1,\,x\geqslant x^{*} \end{split} \tag{18}\]
With these facts, and introducing the rescaled coefficient
\[\tilde{I}(2n-1,[0,\infty))=2\int_{0}^{\infty}-V^{{}^{\prime}}\Big{(}\frac{a}{ a(x^{*})}\Big{)}^{2n-1}\rho(x,0)dx \tag{19}\]
with
\[\frac{a}{a(x^{*})}=\tilde{a}\]
it can be seen both
\[\begin{split} 0<&\tilde{I}(2n-3,[0,x^{*}])< \tilde{I}(2n-1,[0,x^{*}])\\ &\tilde{I}(2n-3,[x^{*},\infty))<\tilde{I}(2n-1,[x^{*},\infty))<0 \end{split} \tag{20}\]
by the monotonicity of the integral. Adding these two monotonically decreasing inequalities, we see \(\tilde{I}(2n-1,[0,\infty))\) is also strictly decreasing,
\[\tilde{I}(2n-1)<\tilde{I}(2n-3) \tag{21}\]
Applying the Dominated convergence theorem, \(\lim_{n\to\infty}\tilde{I}(2n-1,[0,x^{*}])\to 0\). Therefore there must exist a threshold \(n_{t}\) such that \(|\tilde{I}(2n_{t}-1,[0,x^{*}])|<|\tilde{I}(1,[x^{*},\infty))|\) so, combined with monotonicity (21), we conclude \(\tilde{I}(2n-1,[0,\infty))\) must eventually become negative.
As a positive multiple of \(\tilde{I}(2n-1,[0,\infty))\), \(I(2n-1)\), while not necessarily monotonic, inherits the crucial property that there exists a threshold \(n_{c}<\infty\) such that \(I(2n-1)\geqslant 0\) iff \(n\leqslant n_{c}\).
If all the \(I(2n-1)\) are negative we set \(n_{c}=0\), and clearly \(F(m)<0,^{\prime},m>0\)
For \(n_{c}\neq 0\), splitting the series by sign at \(n_{c}\) and factorising
\[F(m)=m^{2n_{c}-1}\Big{(}\sum_{1}^{n_{c}}\frac{(\frac{2}{\sigma^{2}})^{2n-1}}{2n-1!}I_{2n-1}\frac{1}{m^{2(n_{c}-n)}}+\sum_{n_{c}+1}^{\infty}\frac{(\frac{2}{ \sigma^{2}})^{2n-1}}{2n-1!}I_{2n-1}m^{2(n-n_{c})}\Big{)} \tag{22}\]
Inside the parenthesis is the difference of a strictly decreasing and increasing functions, where the limit as \(m\downarrow 0\) is \(\infty\) and \(m\uparrow\infty\) is \(-\infty\). We conclude \(F_{\sigma}\) has at most one root in \(\mathbb{R}^{+}\), where the sign must change from positive to negative.
As has just been outlined, a root can only occur if \(n_{c}\neq 0\). By the monotonicity of \(\tilde{I}(2n-1)\), \(n_{c}\neq 0\) iff \(I(1)>0\). Then \(I(1)=\int-V^{{}^{\prime}}a\rho=\frac{\partial F}{\partial m}\) which is part \((iii)\)
The rest of this section is devoted to studying critical transitions in \(\sigma\): whether and when the one solution regime turns to the three, and/or vice versa.
This can be restated in terms of the number of roots of \(F_{\sigma}(m)\), section 0.2 and in the bistable case, this reduces by Proposition 2.2\((iii)\) to studying the sign of \(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)\), i.e
**Lemma 2.3**.: _The critical transition(s) \(\sigma_{c}\) corresponds to roots of \(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)\)._
The forthcoming Proposition entirely characterises the possible number of roots and their existence. For this the following assumptions are needed, and additionally for all remaining results of this section.
\(\begin{array}{l}\text{\rm(5)}\ \sup\limits_{0\leqslant x\leqslant x^{*}} \bar{V}(x,0)=\bar{V}(x^{*},0)>0\\ \text{\rm(6)}\ \inf\limits_{x\geqslant x^{*}}\bar{V}(x,0)=\bar{V}(x^{*},0)\end{array}\)
_Remark 2.4_.: These assumptions generalise \(\bar{V}(x,0)\) being strictly increasing
**Proposition 2.5**.: _(The Roots of \(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)\))_
\(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)\) _can have at most one root. The exact number of roots can be characterised as follows:_
* _There are no roots iff_ \(\lim\limits_{\sigma\downarrow 0}\left.\frac{\partial F}{\partial m}\right|_{m=0}( \sigma)\leqslant 0\)_._
* _There is exactly one root iff_ \(\lim\limits_{\sigma\downarrow 0}\left.\frac{\partial F}{\partial m}\right|_{m=0}( \sigma)>0\)__
Proof.: Suppose there is a root of \(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)\) at \(\sigma_{c}\). Differentiating,
\[\left.\frac{\partial^{2}F}{\partial\sigma\partial m}\right|_{m=0}=-\left. \frac{2}{\sigma}\frac{\partial F}{\partial m}\right|_{m=0}+\left.\frac{8}{ \theta\sigma^{5}}\int-V^{{}^{\prime}}a\bar{V}(x,0)\rho(x,0)dx\]
At \(\sigma_{c}\), the first term is \(0\) by assumption. For the second, set \(\tilde{V}(x,0)=\frac{1}{k_{\bar{V}}}\bar{V}(x,0)\) with \(k_{\bar{V}}=\bar{V}(x^{*},0)>0\). Then as \(0\leq\tilde{V}(x,0)\leq 1\), \(-V^{{}^{\prime}}a\tilde{V}(x,0)<-V^{{}^{\prime}}a\) so at \(\sigma_{c}\) as
\[\int-V^{{}^{\prime}}a\tilde{V}(x,0)\rho(\sigma_{c},x,0)dx<\int-V^{{}^{\prime}} a\rho(\sigma_{c},x,0)=k\frac{\partial F}{\partial m}\bigg{|}_{m=0}\left(\sigma_{c} \right)=0 \tag{23}\]
where the strict inequality follows from \(\rho>0\). By inequality (23), the gradient at any root \(\sigma_{c}\) must be negative, so if a root exists it must be unique. Reciprocally, \(\frac{\partial F}{\partial m}\big{|}_{m=0}\left(\sigma\right)\leq 0\) cannot have any roots and must be of constant sign.
If \(\lim\limits_{\sigma\downarrow 0}\frac{\partial F}{\partial m}\big{|}_{m=0} \left(\sigma\right)>0\) there must be a root as \(\lim\limits_{\sigma\uparrow\infty}\frac{\partial F}{\partial m}\big{|}_{m=0}<0\). This can be seen by factoring the integral
\[\exp(-\frac{2}{\sigma^{2}}\bar{V})\int\tilde{a}(-V^{{}^{\prime}})\exp(-\frac{ 2}{\sigma^{2}}(\bar{V}(x,0)-\bar{V})) \tag{24}\]
where \(\bar{V}=\min_{x}V(x,0)\) must exist by Assumption 1.
In \([0,x^{*}]\) the integrand is positive and bounded, so can be bounded above by \(\int_{0}^{x^{*}}a(-V^{{}^{\prime}})dx\). Outside, it is negative and unbounded, so there exists some compact set \(K\in[x^{*},\infty)\) such that \(\int_{0}^{x^{*}}a(-V^{{}^{\prime}})dx<\int_{K}|a(-V^{{}^{\prime}})|dx\).
Using \(\lim\limits_{\sigma\uparrow\infty}\exp(-\frac{2}{\sigma^{2}}(\bar{V}-\bar{V})=1\) and applying the dominated convergence theorem, integral (24) must eventually be negative. It follows that \(\lim\limits_{\sigma\downarrow 0}\frac{\partial F}{\partial m}\big{|}_{m=0} \left(\sigma\right)>0\) there must be a root, and it is unique by the first claim in this Proposition.
In other words if \(\lim\limits_{\sigma\downarrow 0}\frac{\partial F}{\partial m}\big{|}_{m=0} \left(\sigma\right)<0\) the only stationary measure possible is the symmetric one. Otherwise there are three stationary measures below \(\sigma_{c}\), and only the one (symmetric) above. The uniqueness of this point implies \(\sigma_{c}^{u}=\sigma_{c}^{u}:=\sigma_{c}\). This rigidity in bifurcation pattern is striking and further investigated in [2].
_Remark 2.6_.: An upcoming assumption, 7, implies \(\rho\) must have a maximum in \([0,x^{*}]\) and that \(\lim\limits_{\sigma\downarrow 0}\frac{\partial F}{\partial m}\big{|}_{m=0} \left(\sigma\right)>0\) by the Laplace theorem, guaranteeing \(\sigma_{c}\)'s existence
If the maxima coincides with a root of \(x(-V^{{}^{\prime}})\) the limit will be \(0\). However \(x(-V^{{}^{\prime}})\geq 0\) in a neighbourhood of that root the limit will be approached from above because of the assumed polynomial bounds on \(-V^{{}^{\prime}}\) and more careful estimates in the style of Theorem 1.2, so there must be a root, strictly greater than \(0\).
The second part of the outlined programme - a study of the dependence of \(\sigma_{c}\) on \(\theta\) - is made possible by Proposition 2.5. As a corollary, the mapping between \(\theta\) and \(\sigma_{c}\) is a well defined function, permitting study in the sequel.
**Lemma 2.7** (The Critical Transition function).: \(\sigma^{*}:\theta\rightarrow\sigma_{c}\)__
\[\sigma^{*}(\theta)=\begin{cases}0&\theta=0\\ \sigma_{c}&\frac{\partial F}{\partial m}\big{|}_{\theta,m=0}\left(\sigma\right)> 0\end{cases} \tag{25}\]
_is well-defined._
Proof.: The necessary properties were all established in Proposition 2.5.
It was conjectured and numerically demonstrated in [21] that \(\sigma^{*}(\theta)\) is an increasing function, for \(P^{{}^{\prime}}\) strictly increasing and \(\theta>0\). Using the results above, this conjecture can be proven under weaker conditions. This final result requires
\[\sup_{0\leqslant x\leqslant x^{*}}\int^{x}\frac{P^{{}^{\prime}}}{k^{2}}=\int^ {x^{*}}\frac{P^{{}^{\prime}}}{k^{2}}>0 \tag{7}\]
\[\inf_{x\geqslant x^{*}}\int^{x}\frac{P^{{}^{\prime}}}{k^{2}}=\int^{x^{*}} \frac{P^{{}^{\prime}}}{k^{2}}>0 \tag{8}\]
Additionally it is assumed that Assumptions 1-8 hold on \(\theta\in J\subseteq\mathbb{R}^{+}\). Then
**Proposition 2.8**.: \(\sigma^{*}(\theta)\) _is an increasing function on \(J\)._
Proof.: Assumption 5 and 7 imply \(P^{{}^{\prime}}(x^{*})>0\) which further implies \(-\bar{V}(x,0)\) has a maxima in \([0,x^{*})\). As explained in Remark 2.6, with the Laplace theorem, \(\lim_{\sigma\downarrow 0}\left.\frac{\partial F}{\partial m}\big{|}_{m=0} \left(\sigma\right)>0\right.\) (or \(0\) but approaching from above) and so \(\sigma^{*}(\theta)>0\) (i.e a root of \(\left.\frac{\partial F}{\partial m}\big{|}_{m=0}\left(\sigma\right)\right)\) by Proposition 2.5.
Differentiating with respect to \(\theta\),
\[\left.\frac{\partial^{2}F}{\partial\theta\partial m}\right|_{m=0}=-\left. \frac{1}{\theta}\frac{\partial F}{\partial m}\right|_{m=0}-\frac{4}{\theta \sigma^{4}}\int-V^{{}^{\prime}}a\int^{x}\frac{P^{{}^{\prime}}}{k^{2}}\rho(x,0 )dx\]
With Assumptions 7 and 8, at \(\sigma^{*}(\theta)=\sigma_{c}\)
\[-\int-V^{{}^{\prime}}a\tilde{P}\rho(\sigma^{*},x,0)dx>-\int-V^{{}^{\prime}}a \rho(\sigma^{*},x,0)=k\frac{\partial F}{\partial m}\bigg{|}_{m=0}\left(\sigma _{c}\right)=0 \tag{26}\]
with \(\tilde{P}=\int\frac{P^{{}^{\prime}}}{k_{P}k^{2}}\) with \(k_{P}=\int^{1}\frac{P^{{}^{\prime}}}{k^{2}}dx\). With identical reasoning to Proposition 2.5 it can be deduced that the derivative with respect to \(\theta\) at any root \(\sigma_{c}\) must be positive.
By the chain rule,
\[\frac{d\sigma^{*}}{\partial\theta}(\theta)=-\frac{\left.\frac{\partial^{2}F}{ \partial\theta\partial m}\right|_{m=0}}{\left.\frac{\partial^{2}F}{\partial \theta\partial m}\right|_{m=0}} \tag{27}\]
the sign of which must be positive by inequality (23) and (26). So \(\sigma_{c}(\theta)\) is an increasing function, and the claim proven.
### Multi-Well Potential
This section is concerned with the obvious generalisation of the double well potential: a potential which possesses multiple extrema in some finite interval. With more degrees of freedom, a greater multiplicity of behaviours can be exhibited. Here, the primary interest is in finding criterion such that the symmetric multi-well case of self-consistency equation (5) behaves like the symmetric bistable case of section 2.1.
Heuristically, it seems reasonable to suppose that, for a multi-well potential \(-V^{{}^{\prime}}\) that is'minimally' negative in \([0,x^{*}]\), the self-consistency equation would exhibit 'close to bistable' behaviour. The non-contiguousness of the positive regions of \(-V^{{}^{\prime}}\) present an immediate barrier to applying the results of Section 2.1 complicating this intuitive picture.
This section will overcome these issues, developing attractively conjunct criteria on \(-V^{{}^{\prime}}\) to satisfy this programme, avoiding clumsy, overly prescriptive assumptions on the location of roots or extrema.
Indeed, the assumptions needed for this section are Assumptions 1-8 from Section 2.1, lightly modified. As a corollary of Assumption 1, specifically the anti-symmetry of \(-V^{{}^{\prime}}\), there is of course a root at 0. No further assumptions are made on the location or number of the roots.
Having already been presented, these modifications are listed here rather than being introduced throughout this section, as before.
* \(x^{*}\) is the root of \(-V^{{}^{\prime}}\) farthest from the origin, and \(-V^{{}^{\prime}}(x)<0\), \(x>x^{*}\)
* \(\sup\limits_{0\leq x\leq x^{*}}\bar{V}_{D}(x,0)=\bar{V}_{D}(x^{*},0)>0\)
* \(\inf\limits_{x\geq x^{*}}\bar{V}_{D}(x,0)=\bar{V}_{D}(x^{*},0)\)
where \(\bar{V}_{D}(s,0)=\mathbb{I}_{0}^{s}\frac{V_{D}^{{}^{\prime}}+P^{{}^{\prime}}}{ k^{2}}\) and \(-V_{D}^{{}^{\prime}}:=(-V^{{}^{\prime}})_{+}-\mathbb{I}_{[x^{*},\infty)}(-V^{{} ^{\prime}})_{-}\)3
Footnote 3: \((f)_{-}=\max(0,-f)\)
Assumption 2 has been loosened to admit multi-well potentials. Assumptions 5-6 have have been modified to apply to a dominating bistable potential \(-V_{D}^{{}^{\prime}}\geq-V^{{}^{\prime}}\) for Corollary 2.11. When Assumption 2/5/6 is referred to in the following, it should be read as \(2^{*}/5^{*}/6^{*}\). All the results in this section require Assumptions 1-8, unless otherwise stated.
Intuitively, as \(\sigma\) increases, the contribution to \(F_{\sigma}\) of the integrand over \([x^{*},\infty)\) will dominate that over \([0,x^{*}]\), just as in the bistable case. This first result established a
threshold \(\sigma_{r}\) above which \([x^{*},\infty)\) will dominate the negative parts of the integrand in \([0,x^{*}]\), which is crucial to the sequel
**Proposition 2.9**.: \(\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{\prime}})_{-}\rho_{\sigma}dx\) _has exactly one root, \(\sigma_{r}\). Moreover \(\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{\prime}})_{-}\rho_{\sigma}dx>0\), \(\sigma>\sigma_{r}\)._
Proof.: To highlight the similarities with Proposition 2.5, define \(G(\sigma):=-\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{\prime}})_{-}\rho_{\sigma}dx\). Now the proof runs identically: bound \(\frac{dG}{d\sigma}\) above by \(G\) using Assumptions 5 & 6 to show the gradient at any root must be negative. Conclude there can only be at most one root, that exist iff \(\lim\limits_{\sigma\downarrow 0}G(\sigma)>0\)
There are then two cases:
* There are no roots iff \(\lim\limits_{\sigma\downarrow 0}\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{ \prime}})_{-}\rho_{\sigma}dx\geq 0\).
* There is exactly one root iff \(\lim\limits_{\sigma\downarrow 0}\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{ \prime}})_{-}\rho_{\sigma}dx<0\)
However there must be a maxima for \(\rho\) in \([0,x^{*}]\) by Assumptions 7-8, so \(\lim\limits_{\sigma\downarrow 0}\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{ \prime}})_{-}\rho_{\sigma}dx<0\) by the Laplace method4. Therefore \(\sigma_{r}\) must exist.
Footnote 4: Even if the limit is \(0\), it will be approached from below, Remark 2.6
That \(\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{\prime}})_{-}\rho_{\sigma}dx>0\) for \(\sigma>\sigma_{r}\) follows from establishing the limit \(\sigma\uparrow\infty\) is negative in the same manner as Proposition 2.5.
The importance of \(\sigma_{r}\) becomes apparent in this next Proposition, as the point where the series expansion for \(F_{\sigma}\) behaves much like Proposition 2.2, with monotonically decreasing (scaled) coefficients.
**Proposition 2.10**.: _For \(\sigma>\sigma_{r}\), \(F_{\sigma}(m)\) has at most one strictly positive (negative) root of \(F_{\sigma}\), which exist iff \(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)>0\)_
Proof.: We will demonstrate that if \(\int(\tilde{a}^{3}-\tilde{a})(-V^{{}^{\prime}})_{-}\rho_{\sigma}dx>0\) then \(\{\tilde{I}_{\sigma}(2n-1)\}_{n}\) is decreasing. The full result follows identically to Proposition 2.2.
As the positive and negative regions of \((-V^{{}^{\prime}})\) in \([0,x^{*}]\) are not contiguous, to easily represent them the following are introduced:
(28) \[\begin{split} a_{n}&=\int_{0}^{x^{*}\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
**Corollary 2.11**.: _There exists an upper critical threshold \(\sigma_{c}^{u}\) such that, \(\sigma>\sigma_{c}^{u}\), \(F_{\sigma}(m)<0,\,m>0\)._
Proof.: By identical reasoning to Proposition 2.2, \(F_{\sigma}(m)\) can be represented by power series 16. Then, for \(\sigma>\sigma_{r}\) the coefficients \(I_{\sigma}(n)\) are negative iff \(I_{\sigma}(1)<0\).
Recalling the related dominating bistable potential
\[-V_{D}^{{}^{\prime}}:=\mathbb{I}_{[0,x*]}(-V^{{}^{\prime}})_{+}-\mathbb{I}_{[ x*,\infty)}(-V^{{}^{\prime}})_{-}\]
it can be seen that
\[\tilde{I}_{\sigma}(1)=a_{1}-b_{1}-c_{1}<a_{1}+0-c_{1}=\int-V_{D}^{{}^{\prime}} \tilde{a}\rho_{\sigma}dx:=\tilde{I}_{\sigma}^{D}(1) \tag{32}\]
where the notation established in (28) has been used.
\(-V_{D}^{{}^{\prime}}\) is a bistable potential satisfying all the salient original assumptions, so Proposition 2.5 is applicable, whence there exists some \(\sigma_{c}^{D}\) such that \(\tilde{I}_{\sigma}^{D}(1)<0\), \(\sigma>\sigma_{c}^{D}\) Inequality (32) implies \(\tilde{I}_{\sigma}(1)\) must be negative above the same threshold, so \(\sigma_{c}^{u}\) must exist and
\[\sigma_{c}^{u}\leq\max(\sigma_{c}^{D},\sigma_{r})\]
Given its existence, we set \(\sigma_{c}^{u}\) to be the smallest \(\sigma\) above which \(F_{\sigma}(m)\) has no positive (negative) roots
**Definition 2.12**.: _The Critical Transition function \(\sigma_{c}(\theta)=\inf\{\sigma:F_{(s,\theta)}(m)<0,\,\forall m>0,\,\forall s>\sigma\}\) is well-defined._
Proof.: This set is non-empty by Corollary 2.5 (\(\sigma_{c}<\sigma_{c}^{u}\)), and bounded below above 0.
Again, in terms of stationary measures of MV-SDE (1):
**Theorem 2.13** (Stationary Measures of MV-SDE (1)).: _There exists an upper critical threshold \(\sigma_{c}^{u}\) such that, \(\sigma>\sigma_{c}^{u}\), MV-SDE (1) has only one stationary measure._
In the multi-well case the upper and lower critical thresholds will not necessarily be equal, and it is the upper threshold that is the true analogue of the critical threshold of Section 2.1. The rest of this section is dedicated to a study of the dependence of the upper critical transition dependence on \(\theta\).
_Remark 2.14_.: The approach of Corollary 2.11, applying the results of the previous section to the dominating bistable process would show that the critical transition of the multi-well process is bounded above by an increasing function.
In section 2.1 this relied upon the bijection between critical points and roots of \(\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma)\), which in turn relies on ordered, decreasing coefficients \(I(2n-1)\) of the power series expansion of \(F_{\sigma}(m)\). This is no longer globally true, however if \(I_{\sigma_{r}}(1)>0\) then \(\sigma_{c}^{u}>\sigma_{r}\), where the coefficients are ordered and much of the machinery developed for the bistable potential can be repurposed.
The next Proposition translates the result of 2.1 as directly as possible. As with Proposition 2.8, let Assumptions 1-8 hold for \(\theta\in J\).
**Proposition 2.15**.: _Suppose,_
\[\left.\frac{\partial F}{\partial m}\right|_{m=0}(\sigma_{r})=\tilde{I}_{ \sigma_{r}}(1)>0 \tag{33}\]
_and_
\[\int_{0}^{\infty}\tilde{a}(-V^{{}^{\prime}})(1-\tilde{V})\rho dx>0 \tag{35}\] \[\int_{0}^{\infty}\tilde{a}(-V^{{}^{\prime}})(1-\tilde{P})\rho dx>0 \tag{34}\]
_for \(\sigma\in\mathbb{R}^{+}\) and \(\theta\in J\subseteq\mathbb{R}^{+}\)_
_Then \(\sigma_{c}^{u}>\sigma_{r}\) and \(\sigma_{c}^{u}(\theta)\) is an increasing function on \(J\)._
Proof.: With (33), \(\sigma_{c}^{u}\) must necessarily be greater than \(\sigma_{r}\) Inequality (34) implies (23) so by the same process as Proposition 2.5, we know \(\tilde{I}_{\sigma}(1)\) has one root, \(\sigma>\sigma_{r}\). By the monotonicity of the \(\tilde{I}(2n-1)\) from Proposition 2.10, \(\sigma_{c}\) will coincide with the root of \(\tilde{I}_{\sigma}(1)\).
Inequality (35) implies (26), so we conclude the result with the chain rule (27), identically to Proposition 2.8.
In this form, the proposition is cumbersome in use. It requires the calculation of \(\sigma_{r}\) rather than simply relying on its existence, while global inequalities (34) and (35) (crucial to demonstrating \(\sigma_{c}(\theta)\) is a decreasing function) depend on \((\sigma,\,\theta)\) through \(\rho\).
In the bistable case, inequalities (34) and (35) were implied by pointwise conditions on \(\bar{V}^{{}^{\prime}}\) and \(P^{{}^{\prime}}\), unmoderated by \(\rho\). Whilst it is reasonable to expect global conditions in the more general multi-well case may be necessary, there is good reason to suppose that conditions
on \(-V^{{}^{\prime}}\) alone ensuring that it's 'close to bistable' would be sufficient, see the remarks that head this section.
Indeed, if \(\bar{V}^{{}^{\prime}}(x,0)\) and \(P^{{}^{\prime}}\) are strictly increasing, it is possible to find intuitive integral inequalities on \((-V^{{}^{\prime}})_{+}\) versus \((-V^{{}^{\prime}})_{-}\) at a small cost, see Remark 2.17. Further, a similar inequality can be derived on the potential to replace the need for the precise location of \(\sigma_{r}\), with no further penalty.
The following corollary exemplifies the above discussion. Although it deals specifically with the quadratic interaction it can be generalised to \(\bar{V}^{{}^{\prime}}(x,0)\) and \(P^{{}^{\prime}}\) strictly increasing and \(k\) such that \(\lim_{x\uparrow\infty}\tilde{a}(x)>\sqrt{2}\) for \(\theta\in J\) with no change in argumentation.
**Corollary 2.16** (Multi-Well Potential with Quadratic Interaction).:
_Consider SDE (1) with \(P^{{}^{\prime}}=x\), \(k=1\) and \(-V^{{}^{\prime}}\) a multi-well potential satisfying Assumptions 1-4. Suppose \(-V^{{}^{\prime}}\) additionally satisfies Inequalities_
\[\int_{0}^{t}x(1-x)(-V^{{}^{\prime}})_{+}-x(-V^{{}^{\prime}})_{-}dx>0,\;\;\; \forall t<x^{*} \tag{36}\]
\[\int_{0}^{t}x\big{(}(-V^{{}^{\prime}})_{+}-2(-V^{{}^{\prime}})_{-}\big{)}dx>0, \;\;\;\forall t<\sqrt{2}x^{*} \tag{37}\]
_Then there exists \(\theta^{*}\) such that the upper critical temperature function is decreasing on \([\theta^{*},\infty)\)_
Proof.: A bounded below \(P^{{}^{\prime\prime}}\) implies above some \(\theta^{*}\), \(\bar{V}^{{}^{\prime\prime}}(x,0)>0\) by Lemma 1.7. This implies Assumptions 5-6 and further implies \(P^{{}^{\prime}}\) strictly increasing, which in turn implies Assumptions 7-8 on \([x^{*},\infty)\).
To reduce the need for tildes, we rescale the above inequalities in order to be able to take \(x^{*}=1\). For inequality (34) it is sufficient to prove
\[\int_{0}^{1}x(-V^{{}^{\prime}})(1-\tilde{V})\rho dx>0,\,\forall\sigma>0 \tag{38}\]
as the contribution to the integral over \([1,\infty)\) is positive. This inequality implies that \((-V^{{}^{\prime}})_{+}\) dominates \((-V^{{}^{\prime}})_{-}\) in \([0,1]\), which is in line with the approach annunciated at the beginning of this section. 5 From Remark (2.17), \(-xV^{{}^{\prime}}(0)\geq 0\) in a neighbourhood of \(0\). Therefore (33) must hold
\(\tilde{V}\) is convex by the second derivative test, and given \(\tilde{V}(0)=0\) and \(\tilde{V}(1)=1\) by definition, \(1-x<1-\tilde{V}<1\). Consequently (38) can be bounded below with
\[\int_{0}^{1}x\big{(}(-V^{{}^{\prime}})_{+}-(-V^{{}^{\prime}})_{-}\big{)}(1- \tilde{V})\rho dx>\int_{0}^{1}x(1-x)(-V^{{}^{\prime}})_{+}\rho dx-\int_{0}^{1} x(-V^{{}^{\prime}})_{-}\rho dx\]
so it is suffices that
\[\int_{0}^{1}x(1-x)(-V^{{}^{\prime}})_{+}\rho_{\sigma}dx>\int_{0}^{1}x(-V^{{}^{ \prime}})_{-}\rho_{\sigma}dx \tag{39}\]
for all \(\sigma\in\mathbb{R}^{+}\).
The second inequality needed is (26). In general an approach like that just performed is needed, however with this choice of \(P^{{}^{\prime}}\) and \(k\), it actually collapses to (40), which is of course true for \(\sigma>\sigma^{*}\) by Proposition 2.9.
Finally, to ensure \(\sigma^{*}<\sigma_{c}\), it suffices to find conditions that imply \(I_{\sigma}^{*}(1)>0\). Again, inequalities are derived that imply this, however they cannot be as blunt as in the previous. Namely, an inequality on the sign \(I_{\sigma}(1)\) cannot be global in \(\sigma\), because it must be unbounded below as \(\sigma\uparrow\infty\), as explained in Corollary 2.11
For \(\sigma\leqslant\sigma^{*}\),
\[\int_{0}^{1}(x-x^{3})(-V^{{}^{\prime}})_{-}\rho dx\geqslant\int_{1}^{\infty} (x^{3}-x)(-V^{{}^{\prime}})_{-}\rho dx \tag{40}\]
by Proposition 2.9. Then
\[\int_{1}^{\infty}x(-V^{{}^{\prime}})_{-}\rho dx<\int_{1}^{\sqrt{2}}x(-V^{{}^{ \prime}})_{-}\rho dx+\int_{0}^{1}(x-x^{3})(-V^{{}^{\prime}})_{-}\rho dx<\int_ {0}^{\sqrt{2}}x(-V^{{}^{\prime}})_{-}\rho dx\]
where the first inequality comes from simple bounding of the polynomial terms, while the second from inequality (40).
Inserting this into \(\tilde{I}_{\sigma}(1)\),
\[\tilde{I}_{\sigma}(1)=\int_{0}^{1}x(-V^{{}^{\prime}})_{+}\rho dx -\int_{0}^{1}x(-V^{{}^{\prime}})_{-}\rho dx-\int_{1}^{\infty}x(-V^{{}^{\prime }})_{-}\rho dx \tag{42}\] \[>\int_{0}^{1}x(-V^{{}^{\prime}})_{+}\rho dx-2\int_{0}^{\sqrt{2}} x(-V^{{}^{\prime}})_{-}\rho dx \tag{41}\]
So, if \(\forall\sigma\in\mathbb{R}^{+}\)
\[\int_{0}^{1}x(-V^{{}^{\prime}})_{+}\rho dx>2\int_{0}^{\sqrt{2}}x(-V^{{}^{ \prime}})_{-}\rho dx \tag{43}\]
then \(\tilde{I}_{\sigma}(1)>0\) when \(\sigma\leq\sigma^{*}\)
As \(\rho\) is strictly decreasing, inequality conditions (39) and (43) are implied by
\[\int_{0}^{t}x(1-x)(-V^{{}^{\prime}})_{+}-x(-V^{{}^{\prime}})_{-}dx >0, \forall t<1 \tag{45}\] \[\int_{0}^{t}x\big{(}(-V^{{}^{\prime}})_{+}-2(-V^{{}^{\prime}})_{- }\big{)}dx >0, \forall t<\sqrt{2} \tag{44}\]
by the second mean value theorem theorem for integrals.
Therefore the assumptions of this corollary imply those of Proposition 2.15, which proves the result.
_Remark 2.17_.: With strictly increasing \(\bar{V}(x,0)\), \(\rho\) must have global maxima at \(0\). It is possible to chose \(\epsilon\) such that \(x(-V^{{}^{\prime}})\geq 0\) or \(\leq 0\) on \([-\epsilon,\epsilon]\). The limit of integrals (34) and (35) as \(\sigma\downarrow 0\) is, applying the Laplace method, \(0\) and the inequalities imply the limit is approached from above. So, by Remark 2.6, \(-xV^{{}^{\prime}}\geq 0\) for arbitrarily small \(x\) implying \(-V^{{}^{\prime}}(x)>0\) necessarily, for some small interval \(0<x<\epsilon\)
This can be remedied by finding some new lower bound for \(\sigma\) that those inequalities hold. However that would prevent further simplification by application of the second mean value theorem for integrals.
It is possible to construct an admissible \(-V^{{}^{\prime}}\) with an arbitrary number of roots. The simplest way is to start with a multi-well potential with the required number of roots that is positive in \([0,x_{1}]\) and \([x_{2},x^{*}]\) and satisfies Assumptions 1 to 4. It is possible to find coefficients \(\alpha_{i}\)6 such that \(\alpha_{1}\int_{0}^{x_{1}}x(1-x)(-V^{{}^{\prime}})\rho dx>\int_{x_{1}}^{x_{2}} x(-V^{{}^{\prime}})_{-}\rho dx\) and \(\alpha_{2}\int_{x_{2}}^{x^{*}}x(-V^{{}^{\prime}})\rho dx>2\int_{x^{*}}^{\sqrt {2}}x(-V^{{}^{\prime}})_{-}\rho dx\). Then for \(\alpha_{1}\mathbb{I}_{[0,x_{1}]}(-V^{{}^{\prime}})+\mathbb{I}_{[x_{1},x_{2}] \cup[1,\infty)}(-V^{{}^{\prime}})+\alpha_{2}\mathbb{I}_{[0,x_{1}]}(-V^{{}^{ \prime}})\) satisfies inequalities (36 & 37).
Footnote 6: Assumptions 1-4 are not affected by such a scaling, although \(\theta^{*}\) will increase
## 3. conclusion
In this work, we have studied the possible phases and their transition points for MV-SDE (1) We have shown for sufficiently small \(\sigma\) there are exactly as many stationary measures as roots of \(V^{{}^{\prime}}\) and sufficiently large there is only one. In the case of symmetrical potentials we have gone further and additionally demonstrated the (upper) critical transition is a strictly increasing function of the aggregation parameter.
The approach utilised is direct, relying upon the first MEE, and robust enough to generate quantitative estimates. In addition to entirely novel results, where similar results have presented before, their proofs have been simplified and their applicability greatly increased. A choice was made to keep assumptions as general as possible, and the results can yield more when more is known of \(V^{{}^{\prime}}\), \(P^{{}^{\prime}}\)
For related future work, two ideas present themselves. A recent problem to which this machinery can be brought to bear is MV-SDE (1) with coloured noise. [11] studied phase transitions using a small parameter expansion approach, to which our methodology can be employed to fully understand the individual correction terms.
Contrastingly, another open problem would be to extend the methodology of this paper to MV-SDEs whose Fokker-Planck Equation is the Granular media equation (3) with positive interaction kernel \(x^{2n}\), \(n>1\), non constant diffusion and multi-well potential. The case of constant diffusion and polynomial bistable potential was considered in [21]. In the same spirit as proposition 1.2, it can be shown that the only possible points of accumulation of \(\{\rho_{0}^{\sigma}\}_{\sigma}\) is \(\delta_{x}^{*}\), where \(x^{*}\) is a root of \(V^{{}^{\prime}}\), and such sequences must exist with a compactness argument. Counting the exact number of stationary measures, or even something along the lines of Theorem 1.10 is less straightforward and will be the subject of future work.
|
2309.14525 | Aligning Large Multimodal Models with Factually Augmented RLHF | Large Multimodal Models (LMM) are built across modalities and the
misalignment between two modalities can result in "hallucination", generating
textual outputs that are not grounded by the multimodal information in context.
To address the multimodal misalignment issue, we adapt the Reinforcement
Learning from Human Feedback (RLHF) from the text domain to the task of
vision-language alignment, where human annotators are asked to compare two
responses and pinpoint the more hallucinated one, and the vision-language model
is trained to maximize the simulated human rewards. We propose a new alignment
algorithm called Factually Augmented RLHF that augments the reward model with
additional factual information such as image captions and ground-truth
multi-choice options, which alleviates the reward hacking phenomenon in RLHF
and further improves the performance. We also enhance the GPT-4-generated
training data (for vision instruction tuning) with previously available
human-written image-text pairs to improve the general capabilities of our
model. To evaluate the proposed approach in real-world scenarios, we develop a
new evaluation benchmark MMHAL-BENCH with a special focus on penalizing
hallucinations. As the first LMM trained with RLHF, our approach achieves
remarkable improvement on the LLaVA-Bench dataset with the 94% performance
level of the text-only GPT-4 (while previous best methods can only achieve the
87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We
opensource our code, model, data at https://llava-rlhf.github.io. | Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell | 2023-09-25T20:59:33Z | http://arxiv.org/abs/2309.14525v1 | # Aligning Large Multimodal Models
###### Abstract
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-Bench with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-Bench over other baselines. We opensource our code, model, data at [https://llava-rlhf.github.io](https://llava-rlhf.github.io).
## 1 Introduction
Large Language Models (LLMs; Brown et al. (2020); Chowdhery et al. (2022); OpenAI (2023)) can delve into the multimodal realm either by further pre-training with image-text pairs (Alayrac et al.; Awadalla et al., 2023) or by fine-tuning them with specialized vision instruction tuning datasets (Liu et al., 2023; Zhu et al., 2023), leading to the emergence of powerful Large Multimodal Models (LMMs). Yet, developing LMMs faces challenges, notably the gap between the volume and quality of multimodal data versus text-only datasets. Consider the LLaVA model (Liu et al., 2023), which is initialized from a pre-trained vision encoder (Radford et al., 2021) and an instruction-tuned language model (Chiang et al., 2023). It is trained on just 150K synthetic image-based dialogues, which is much less in comparison to the text-only models (Flan (Longpre et al., 2023) utilizing over 100M examples spanning 1800 tasks. Such limitations in data can lead to misalignment between the vision and language modalities. Consequently, LMMs may produce hallucinated outputs, which are not accurately anchored to the context provided by images.
To mitigate the challenges posed by the scarcity of high-quality visual instruction tuning data for LMM training, we introduce **LLaVA-RLHF**, a vision-language model trained for improved multimodal alignment. One of our key contributions is the adaptation of the Reinforcement Learning from Human Feedback (RLHF) (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022), a general and scalable alignment paradigm that shows great success for text-based AI agents, to the
multimodal alignment for LMMs. By collecting human preferences with an emphasis on detecting hallucinations1, and utilizes those preferences in reinforcement learning for LMM fine-tuning (Ziegler et al., 2019; Stienmon et al., 2020). This approach can improve the multimodal alignment with a relatively low annotation cost, e.g., collecting 10K human preferences for image-based conversations with $3000. To the best of our knowledge, this approach is the first successful adaptation of RLHF to multimodal alignment.
Footnote 1: We instructed crowdworkers to prioritize the responses that exhibit better multimodal alignment and minimize hallucinations. That is, if two responses are free of hallucinations, the crowdworkers were asked to choose/create a more helpful one.
A potential issue with the current RLHF paradigm is called _reward hacking_, which means achieving high scores from the reward model does not necessarily lead to improvement in human judgments. To prevent reward hacking, previous work (Bai et al., 2022; Touvron et al., 2023) proposed to iteratively collect "fresh" human feedback, which tends to be costly and cannot effectively utilize existing human preference data. In this work, we propose a more data-efficient alternative, i.e., we try to make the reward model capable of leveraging existing human-annotated data and knowledge in larger language models. Firstly, we improve the general capabilities of the reward model by using a better vision encoder with higher resolutions and a larger language model. Secondly, we introduce a novel algorithm named **Factually Augmented RLHF (Fact-RLHF)**, which calibrates the reward signals by augmenting them with additional information such as image captions or ground-truth multi-choice option, as illustrated in Fig. 1.
\begin{table}
\begin{tabular}{|p{227.6pt}|} \hline
**Question:** \\ Where is this photo taken? \\
**Ground Truth:** \\ From the slogan “Welcome to Houston, Texas”, we can infer that this photo is taken at an airport in Houston, Texas. \\
**LLaVA:** \\ This photo is taken at an airport. \\
**LLaVA-SFT\({}^{+}\)** : \\ This photo is taken at the Houston airport. \\
**LLaVA-RLHF :** \\ This photo is taken in the baggage claim area of an airport, specifically in the lobby of the George Bush Intercontinental Airport in Houston, Texas. The large sign in the background indicates the airport’s name and location. The presence of multiple people, handbags, and suitcases in the scene further supports the fact that this is indeed an airport setting. \\ \hline
**Question:** \\ What is the gender of the person riding the motorcycle in the image? \\
**Ground Truth:** \\ No one is riding the motorcycle in the image. \\
**LLaVA:** \\ The person riding the motorcycle in the image is a woman. \\
**LLaVA-SFT\({}^{+}\)** : \\ The person riding the motorcycle in the image is a man. \\
**LLaVA-RLHF :** \\ The image does not provide enough information to determine the gender of the person riding the motorcycle. It only shows a motorcycle parked on the side of the road, and there is no indication of the rider’s gender. The rider could be a man or a woman, or even someone who identifies as non-binary or gender non-conforming. It is important to avoid making assumptions about a person’s gender based on visual cues alone, as these assumptions can be inaccurate and potentially harmful. \\ \hline \end{tabular}
\end{table}
Table 1: Qualitative examples to illustrate the effect of RLHF on Large Multimodal Model. LLaVA-RLHF is presented to be more helpful (above) and less hallucinated (bottom).
To improve the general capabilities of LMMs during the Supervised Fine-Tuning (SFT) stage, we further augment the synthetic vision instruction tuning data (Liu et al., 2023) with existing high-quality human-annotated multi-modal data in the conversation format. Specifically, we convert VQA-v2 (Goyal et al., 2017) and A-OKVQA (Schwenk et al., 2022) into a multi-round QA task, and Flickr30k (Young et al., 2014) into a Spotting Captioning task (Chen et al., 2023), and train the \(\textbf{LLAVA-SFT}^{+}\) models based on the new mixture of data.
Lastly, we look into assessing the multimodal alignment of LMMs in real-world generation scenarios, placing particular emphasis on penalizing any hallucinations. We create a set of varied benchmark questions that cover the 12 main object categories in COCO (Lin et al., 2014) and include 8 different task types, leading to MMHal-Bench. Our evaluation indicates that this benchmark dataset aligns well with human evaluations, especially when scores are adjusted for anti-hallucinations. In our experimental evaluation, as the first LMM trained with RLHF, LLaVA-RLHF delivers impressive outcomes. We observed a notable enhancement on LLaVA-Bench, achieving 94%, an improvement by 60% in MMHal-Bench, and established new performance benchmarks for LLaVA with a 52.4% score on MMBench (Liu et al., 2023) and an 82.7% F1 on POPE (Li et al., 2023). We have made our code, model, and data publicly available at [https://llava-rlhf.github.io](https://llava-rlhf.github.io).
Figure 1: Illustration of how hallucination may occur during the Supervised Fine-Tuning (SFT) phase of LMM training and how Factually Augmented RLHF alleviates the issue of limited capacity in the reward model which is initialized from the SFT model.
## 2 Method
### Multimodal RLHF
Reinforcement Learning from Human Feedback (RLHF) (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022) has emerged as a powerful and scalable strategy for aligning Large Language Models (LLMs) with human values. In this work, we use RLHF to align LMMs. The basic pipeline of our multimodal RLHF can be summarized into three stages:
Multimodal Supervised Fine-TuningA vision encoder and a pre-trained LLM are jointly fine-tuned on an instruction-following demonstration dataset using token-level supervision to produce a supervised fine-tuned (SFT) model \(\pi^{\mathrm{SFT}}\).
Multimodal Preference ModelingIn this stage, a reward model, alternatively referred to as a preference model, is trained to give a higher score to the "better" response. The pairwise comparison training data are typically annotated by human annotators. Formally, let the aggregated preference data be represented as \(\mathcal{D}_{\mathrm{RM}}=\{(\mathcal{I},x,y_{0},y_{1},i)\}\), where \(\mathcal{I}\) denotes the image, \(x\) denotes the prompt, \(y_{0}\) and \(y_{1}\) are two associated responses, and \(i\) indicates the index of the preferred response. The reward model employs a cross-entropy loss function:
\[\mathcal{L}(r_{\mathbf{\theta}})=-\mathbf{E}_{(\mathcal{I},x,y_{0},y_{1},i)\sim \mathcal{D}_{\mathrm{RM}}}\left[\log\sigma(r_{\mathbf{\theta}}(\mathcal{I},x,y_{i })-r_{\mathbf{\theta}}(\mathcal{I},x,y_{1-i}))\right]. \tag{1}\]
Reinforcement LearningHere, a policy model, initialized through multimodal supervised fine-tuning (SFT) (Ouyang et al., 2022; Touvron et al., 2023), is trained to generate an appropriate response for each user query by maximizing the reward signal as provided by the reward model. To address potential over-optimization challenges, notably reward hacking, a per-token KL penalty derived from the initial policy model (Ouyang et al., 2022) is sometimes applied. Formally, given the set of collected images and user prompts, \(\mathcal{D}_{\mathrm{RL}}=\{(\mathcal{I},x)\}\), along with the fixed initial policy model \(\pi^{\mathrm{INIT}}\) and the RL-optimized model \(\pi^{\mathrm{RL}}_{\mathbf{\theta}}\), the full optimization loss is articulated as:
\[\mathcal{L}(\pi^{\mathrm{RL}}_{\mathbf{\phi}})=-\mathbf{E}_{(\mathcal{I},x)\in \mathcal{D}_{\mathrm{RL}},y\sim\pi^{\mathrm{RL}}(y|\mathcal{I},x)}\left[r_{ \mathbf{\theta}}(\mathcal{I},x,y)-\beta\cdot\mathbb{D}_{KL}\left(\pi^{\mathrm{ RL}}_{\mathbf{\phi}}(y|\mathcal{I},x)\|\pi^{\mathrm{INIT}}(y|\mathcal{I},x)\right) \right], \tag{2}\]
where \(\beta\) is the hyper-parameter to control the scale of the KL penalty.
\begin{table}
\begin{tabular}{|p{284.5pt}|} \hline
**Instruction** & \\ \hline We have developed an AI assistant adept at facilitating image-based conversations. However, it occasionally generates what we call hallucinations, which are inaccuracies unsupported by the image content or real-world knowledge. \\ In this task, we request that you select the most appropriate response from the AI model based on the conversation context. When making this selection, primarily consider these two factors: \\
**Honesty**: Fundamentally, the AI should provide accurate information and articulate its uncertainty without misleading the user. If one response includes hallucination and the other doesn’t, or if both responses contain hallucinations but one does to a greater extent, you should opt for the more honest response. \\
**Helpfulness**: In scenarios where both responses are free from hallucinations, you should opt for the more helpful one. The AI should attempt to accomplish the task or answer the question posed, provided it’s not harmful, in the most helpful and engaging manner possible. \\
**Annotation Task** & \\ Please select the better response from A and B [IMAGE] & \\
[CONVERSATION CONTEXT] & \\
[RESPONSE A] & \\
[RESPONSE B] & \\
**Question 1:** Which response has fewer hallucinations in terms of the given image? \\
**Question 2:** If you have selected a tie between Response 1 and Response 2 from the previous question, which response would be more helpful or less incorrect? \\ \hline \end{tabular}
\end{table}
Table 2: The instruction to the crowdworkers for human preference collection.
### Augmenting LLaVA with High-Quality Instruction-Tuning
Recent studies (Zhou et al., 2023; Touvron et al., 2023b) show that high-quality instruction tuning data is essential for aligning Large Language Models (LLMs). We find this becomes even more salient for LMMs. As these models traverse vast textual and visual domains, clear tuning instructions are crucial. Correctly aligned data ensures models produce contextually relevant outputs, effectively bridging language and visual gaps.
For example, LLaVA synthesized 150k visual instruction data using the text-only GPT-4, where an image is represented as the associated captions on bounding boxes to prompt GPT-4. Though careful filtering has been applied to improve the quality, the pipeline can occasionally generate visually misaligned instruction data that can not be easily removed with an automatic filtering script, as highlighted in Table 1.
In this work, we consider enhancing LLaVA (98k conversations, after holding out 60k conversations for preference modeling and RL training) with high-quality instruction-tuning data derived from existing human annotations. Specifically, we curated three categories of visual instruction data: "Yes" or "No" queries from VQA-v2 (83k) (Goyal et al., 2017), multiple-choice questions from A-OKVQA (16k) (Marino et al., 2019), and grounded captions from Flickr30k (23k) (Young et al., 2014). Our analysis revealed that this amalgamation of datasets significantly improved LMM capabilities on benchmark tests. Impressively, these results surpassed models (Dai et al., 2023; Li et al., 2023; Laurencon et al., 2023) trained on datasets an order of magnitude larger than ours, as evidenced by Table 7 and 4. For a comprehensive breakdown of each dataset's influence, refer to Section 3.5.
### Hallucination-Aware Human Preference Collection
Inspired by the recent RLHF studies that collect helpfulness and harmlessness preferences (Bai et al., 2022; Touvron et al., 2023) separately, in this study, we decide to differentiate between responses that are merely less helpful and those that are inconsistent with the images (often characterized by multimodal hallucinations). To achieve this, we provide crowdworkers with the template illustrated in Table 2 to guide their annotations when comparing two given responses. With our current template design, we aim to prompt crowdworkers to identify potential hallucinations in the model's responses.
Nonetheless, our training process integrates a single reward model that emphasizes both multimodal alignment and overall helpfulness2. We collect human preferences on 10k hold-out LLaVA data by re-sampling the last response with our SFT model and a temperature of \(0.7\). The reward model is initialized from the SFT model to obtain the basic multimodal capabilities.
Footnote 2: We are considering the development of a distinct Honest reward model, inspired by the approach in Touvron et al. (2023). This introduces the possibility of constructing a piecewise Honesty-prioritized reward model. We earmark this direction for future exploration.
### Factually Augmented RLHF (Fact-RLHF)
We conduct multimodal RLHF on 50k hold-out LLaVA conversations, with additional 12k multi-choice questions from A-OKVQA and 10k yes/no questions subsampled from VQA-v2. Due to the concerns of existing hallucinations in the synthetic multi-round conversation data of LLaVA, we only use the first question in each conversation for RL training, which avoids the pre-existing hallucinations in the conversational context.
Reward Hacking in RLHFIn preliminary multimodal RLHF experiments, we observe that due to the intrinsic multimodal misalignment in the SFT model, the reward model is weak and sometimes cannot effectively detect hallucinations in the RL model's responses. In the text domain, previous work (Bai et al., 2022; Touvron et al., 2023) proposed to iteratively collect "fresh" human feedback. However, this can be quite costly and cannot effectively utilize existing human-annotated data and there is no guarantee that more preference data can significantly improve the discriminative capabilities of the reward model for multimodal problems.
Factual AugmentationTo augment the capability of the reward model, we propose Factually Augmented RLHF (Fact-RLHF), where the reward model has access to additional ground-truth
information such as image captions to calibrate its judgment. In original RLHF (Stiennon et al., 2020; OpenAI, 2022), the reward model needs to judge the quality of the response only based on the user query (i.e., the input image and prompt):
```
Image:[IMAGE] User:[USER PROMPT] Assistant:[RESPONSE] RewardModel:[SCORE]
```
In Factually Augmented RLHF (Fact-RLHF), the reward model has additional information about the textual descriptions of the image:
```
Image:[IMAGE] FactualInformation:[5 COCOIMAGE CAPTIONS/3A-OKVQARATIONALS] User:[USER PROMPT] Assistant:[RESPONSE] AugmentedRewardModel:[SCORE]
```
This prevents the reward model hacked by the policy model when the policy model generates some hallucinations that are clearly not grounded by the image captions. For general questions with COCO images, we concatenate the five COCO captions as the additional factual information, while for A-OKVQA questions, we use the annotated rationals as the factual information.
The factually augmented reward model is trained on the same binary preference data as the vanilla reward model, except that the factual information is provided both during the model fine-tuning and inference.
Symbolic Rewards: Correctness Penalty & Length PenaltyIn some of our RL data, certain questions come with a predetermined ground-truth answer. This includes binary choices (e.g., "Yes/No") in VQA-v2 and multiple-choice options (e.g., "ABCD") in A-OKVQA. These annotations can also be regarded as additional factual information. Therefore, in the Fact-RLHF algorithm, we further introduce a symbolic reward mechanism that penalizes selections that diverge from these ground-truth options.
Furthermore, we observed that RLHF-trained models often produce more verbose outputs, a phenomenon also noted by Dubois et al. (2023). While these verbose outputs might be favored by users or by automated LLM-based evaluation systems (Sun et al., 2023; Zheng et al., 2023), they tend to introduce more hallucinations for LMMs. In this work, we follow Sun et al. (2023) and incorporate the response length, measured in the number of tokens, as an auxiliary penalizing factor.
## 3 Experiments
### Neural Architectures
Base ModelWe adopt the same network architecture as LLaVA (Liu et al., 2023). Our LLM is based on Vicuna (Touvron et al., 2023; Chiang et al., 2023), and we utilize the pre-trained CLIP visual encoder, ViT-L/14 (Radford et al., 2021). We use grid features both before and after the final Transformer layer. To project image features to the word embedding space, we employ a linear layer. It's important to note that we leverage the pre-trained checkpoints of the linear projection matrix from LLaVA, concentrating on the end-to-end fine-tuning phase for multi-modal alignment in our study. For LLaVA-SFT\({}^{+}\)-7b, we use a Vicuna-V1.5-7b LLM and ViT-L/14 with image resolution \(256\times 256\). For LLaVA-SFT\({}^{+}\)-13b, we use a Vicuna-V1.5-13b LLM and ViT-L/14 with image resolution \(336\times 336\).
RL Models: Reward, Policy, and ValueThe architecture of the reward model is the same as the base LLaVA model, except that the embedding output of the last token is linearly projected to a scalar value to indicate the reward of the whole response. Following Dubois et al. (2023), we initialize the value model from the reward model. Therefore, when training an LLaVA-7B-based policy model with an LLaVA-13B-based reward model, the value model is also of 13B size. To fit all the models (i.e., police, reward, value, original policy) into one GPU, we adopt LoRA (Hu et al., 2021) for all the fine-tuning processes in RLHF. We use Proximal Policy Optimization (PPO;
Schulman et al. (2017)) with a KL penalty for the RL training. Without further notice, both LLaVA-RLHF-7b and LLaVA-RLHF-13b are trained with a LLaVA-SFT\({}^{+}\)-13b initialized reward model. More details can be found in Appendix F.
### MMHal-Bench Data Collection
To quantify and evaluate the hallucination in LMM responses, we have created a new benchmark MMHal-Bench. There are two major differences between MMHal-Bench and previous VLM benchmarks: 1) **Speciality**: In contrast to prevalent LMM benchmarks Liu et al. (2023; 2023); Li et al. (2023) that evaluate the response quality in the general sense (e.g., helpfulness, relevance), we focus on determining whether there hallucination exists in the LMM responses. Our evaluation metrics are directly developed on this main criterion. 2) **Practicality**: Some previous LMM benchmarks Li et al. (2023); Rohrbach et al. (2018) also examine hallucination, but they have limited the questions to yes/no questions, which we found the results may sometimes disagree with the detailed description generated by LMM. Instead of over-simplifying the questions, we adopt general, realistic, and open-ended questions in our MMHal-Bench, which can better reflect the response quality in practical user-LMM interactions.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Subsets} & \multirow{2}{*}{Full-Set} \\ \cline{2-2} \cline{4-5} & Conv & Detail & & \\ \hline LLaVA\({}_{\text{TB}}\) & 75.1 & 75.4 & 92.3 & 81.0 \\ VIGC\({}_{\text{TB}}\) & 83.3 & **80.6** & 93.1 & 85.8 \\
**LLaVA-SFT\({}^{+}\)\({}_{\text{TB}}\)** & 88.8 & 74.6 & 95.0 & 86.3 \\
**LLaVA-RLHF\({}_{\text{TB}}\)** & **93.0** & 79.0 & **109.5** & **94.1** \\ \hline LLaVA\({}_{\text{13Bx336}}\) & 87.2 & 74.3 & 92.9 & 84.9 \\ VIGC\({}_{\text{13Bx336}}\) & 88.9 & 77.4 & 93.5 & 86.8 \\
**LLaVA-SFT\({}^{+}\)\({}_{\text{13Bx336}}\)** & 85.8 & 75.5 & 93.9 & 85.2 \\
**LLaVA-RLHF\({}_{\text{13Bx336}}\)** & **93.9** & **82.5** & **110.1** & **95.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic evaluation of LLaVA-RLHF on the LLaVA-Bench Evaluation. GPT-4 compares the answers from the VLM model outputs with the answers by GPT-4 (text-only) and gives a rating. We report the relative scores (Liu et al., 2023) of VLM models compared to GPT-4 (text-only).
Figure 2: Detailed performance of different models on the eight categories in MMHal-Bench, where “Overall” indicates the averaged performance across all categories. The questions are collected by adversarially filtering on the original LLaVA\({}_{\text{13Bx336}}\) model.
In MMHal-Bench, we have meticulously designed 96 image-question pairs, ranging in 8 question categories \(\times\) 12 object topics. More specifically, we have observed that LMM often make false claims about the image contents when answering some types of questions, and thus design our questions according to these types:
* Object attribute: LMMs incorrectly describe the visual attributes of invididual objects, such as color and shape.
* Adversarial object: LMMs answers questions involving something that does not exist in the image, instead of pointing out that the referred object cannot be found.
* Comparison: LMMs incorrectly compare the attributes of multiple objects.
* Counting: LMMs fail to count the number of the named objects.
* Spatial relation: LMMs fail to understand the spatial relations between multiple objects in the response.
* Environment: LMMs make wrong inference about the environment of the given image.
* Holistic description: LMMs make false claims about contents in the given image when giving a comprehensive and detailed description of the whole image.
* Others: LMMs fail to recognize the text or icons, or incorrectly reason based on the observed visual information.
We create and filter the questions in an adversarial manner. More specifically, we design the image-question pairs to ensure that the original LLaVA\({}_{13\text{Bx}336}\) model hallucinates when answering these questions. While these questions are initially tailored based on LLaVA\({}_{13\text{Bx}336}\)'s behavior, we have observed that they also have a broader applicability, causing other LMMs to hallucinate as well.
To avoid data leakage or evaluation on data that LMMs have observed during training, we select images from the validation and test sets of OpenImages (Kuznetsova et al., 2020) and design all brand-new questions. Our image-question pairs cover 12 common object meta-categories from COCO (Lin et al., 2014), including "accessory", "animal", "appliance", "electronic", "food", "furniture", "indoor", "kitchen", "outdoor", "person", "sports", and "vehicle".
When evaluating LMMs on MMHal-Bench, we employ the powerful GPT-4 model (OpenAI, 2023) to analyze and rate the responses. Currently, the publically available GPT-4 API only supports text input, so it cannot judge directly based on the image contents. Therefore, to aid GPT-4's assessment, we also provide category names of the image content, and a standard human-generated answer in the prompt, in addition to the question and LMM response pair. Consequently, GPT-4 can determine whether hallucination exists in the LMM response by comparing it against the image content and the thorough human-generated answer. When provided with adequate information from MMHal-Bench, GPT-4 can make reasonable decisions aligned with human judgments. For example, when deciding whether hallucination exists in responses from LLaVA\({}_{13\text{Bx}336}\) and IDEFICS\({}_{80\text{B}}\), GPT-4 agrees with human judgments in **94%** of the cases. Please see the Appendix for the example image-question pairs and GPT-4 prompts we used for MMHal-Bench evaluation.
### Results
We use LLaVA-Bench (Liu et al., 2023) and our MMHal-Bench as our main evaluation metrics for their high alignment with human preferences. In addition, we conducted tests on widely-recognized Large Multimodal Model benchmarks. We employed MMBench (Liu et al., 2023), a multi-modal benchmark offering an objective evaluation framework comprising 2,974 multiple-choice questions spanning 20 ability dimensions. This benchmark utilizes ChatGPT to juxtapose model predictions against desired choices, ensuring an equitable assessment of VLMs across varying instruction-following proficiencies. Furthermore, we incorporated POPE (Li et al., 2023), a polling-based query technique, to offer an evaluation of Large Multimodal Model object perception tendencies.
High-quality SFT data is crucial for capability benchmarks.By delving into the specific performances for the capability benchmarks (i.e., MMBench and POPE), we observe a notable improvement in capabilities brought by high-quality instruction-tuning data (LLaVA-SFT\({}^{+}\)) in Tables 4 and 7. LLaVA-SFT\({}^{+}\)\({}_{7\text{B}}\) model exemplifies this with an impressive performance of 52.1% on MMBench and an 82.7% F1 score on POPE, marking an improvement over the original LLaVA by margins of 13.4% and 6.7% respectively. However, it's worth noting that LLaVA-SFT\({}^{+}\) does
trail behind models like Kosmos and Shikra. Despite this, LLaVA-SFT\({}^{+}\) stands out in terms of sample efficiency, utilizing only 280k fine-tuning data--a 5% fraction of what's employed by the aforementioned models. Furthermore, this enhancement isn't confined to just one model size. When scaled up, LLaVA-SFT\({}^{+}\)\({}_{138\times 336}\) achieves commendable results, attaining 57.5% on MMBench and 82.9% on POPE. Comparatively, the effect of RLHF on the capability benchmarks is more mixed. LLaVA-RLHF shows subtle degradations at the 7b scale, but the 13b LLaVA-RLHF improves over LLaVA-SFT\({}^{+}\) by 3% on MMBench. This phenomenon is similar to the **Alignment Tax** observed in previous work (Bai et al., 2022). Nonetheless, with our current empirical scaling law of LLaVA-RLHF, we believe RLHF alignment would not damage in general capabilities of LMMs for models of larger scales.
RLHF improves human alignment benchmarks further.From another angle, even though high-quality instruction data demonstrates large gains in capability assessment, it does not improve much on human-alignment benchmarks including LLaVA-Bench and MMMhal-Bench, which is also evident in recent LLM studies (Wang et al., 2023). LLaVA-RLHF show a significant improvement in aligning with human values. It attains scores of 2.05 (7b) and 2.53 (13b) on MMMhal-Bench and improves LLaVA-SFT\({}^{+}\) by over 10% on LLaVA-Bench. We also presented qualitative examples in Table 1, which shows LLaVA-RLHF produces more reliable and helpful outputs.
### Ablation Analysis
We conduct ablation studies on LLaVA\({}_{7\text{B}}\) and evaluate over the four aforementioned benchmarks.
### Ablation on High-Quality Instruction-Tuning Data
In Table 5, we evaluate the impact of individual instruction-tuning datasets. For the sake of simplicity, we did not adjust the mixture rate, earmarking that consideration for future research. Our findings indicate that A-OKVQA (Schwenk et al., 2022) contributes significantly to performance enhancements, boosting results by +9.8% on MMBench and a more modest +3.8% on POPE. In contrast, VQA-v2 (Goyal et al., 2017) is particularly influential on POPE, where it leads to a 6% improvement, while only having a slight impact on MMBench. This differential can possibly be attributed to the overlapping "Yes/No" format in VQA and the multiple-choice structure of A-OKVQA. Flickr30k notably enhances the performance in LLaVA-Bench and MMMhal-Bench -- a
\begin{table}
\begin{tabular}{l|c|c|c c c c c c} \hline \hline
**LLM** & **Data** & **Overall** & **LR** & **AR** & **RR** & **FP-S** & **FP-C** & **CP** \\ \hline OpenFlamingo\({}_{9\text{B}}\) & - & 6.6 & 4.2 & 15.4 & 0.9 & 8.1 & 1.4 & 5.0 \\ MiniGPT-4\({}_{7\text{B}}\) & 5k & 24.3 & 7.5 & 31.3 & 4.3 & 30.3 & 9.0 & 35.6 \\ LLaMA-Adapter\({}_{7\text{B}}\) & 52k & 41.2 & 11.7 & 35.3 & 29.6 & 47.5 & 38.6 & 56.4 \\ Otter-I\({}_{9\text{B}}\) & 2.8M & 51.4 & 32.5 & 56.7 & 53.9 & 46.8 & 38.6 & 65.4 \\ Shikra\({}_{7\text{B}}\) & 5.5M & 58.8 & 25.8 & 56.7 & **58.3** & 57.2 & **57.9** & **75.8** \\ Kosmos-2 & 14M & 59.2 & **46.7** & 55.7 & 43.5 & 64.3 & 49.0 & 72.5 \\ InstructBLIP\({}_{7\text{B}}\) & 1.2M & 36.0 & 14.2 & 46.3 & 22.6 & 37.0 & 21.4 & 49.0 \\ IDEFICS\({}_{9\text{B}}\) & 1M & 48.2 & 20.8 & 54.2 & 33.0 & 47.8 & 36.6 & 67.1 \\ IDEFICS\({}_{80\text{B}}\) & 1M & 54.6 & 29.0 & **67.8** & 46.5 & 56.0 & 48.0 & 61.9 \\ InstructBLIP\({}_{138}\) & 1.2M & 44.0 & 19.1 & 54.2 & 34.8 & 47.8 & 24.8 & 56.4 \\ \hline LLaVA\({}_{7\text{B}}\) & 158k & 38.7 & 16.7 & 48.3 & 30.4 & 45.5 & 32.4 & 40.6 \\
**LLaVA-SFT\({}^{+}\)\({}_{7\text{B}}\)** & 220k & 52.1 & 28.3 & 63.2 & 37.4 & 53.2 & 35.9 & 66.8 \\
**LLaVA-RLHF\({}_{7\text{B}}\)** & 280k & 51.4 & 24.2 & 63.2 & 39.1 & 50.2 & 40.0 & 66.1 \\ LLaVA\({}_{138\times 336}\) & 158k & 47.5 & 23.3 & 59.7 & 31.3 & 41.4 & 38.6 & 65.8 \\
**LLaVA-SFT\({}^{+}\)\({}_{138\times 336}\)** & 220k & 57.5 & 25.8 & 65.7 & 54.8 & 57.9 & 51.0 & 68.5 \\
**LLaVA-RLHF\({}_{138\times 336}\)** & 280k & **60.1** & 29.2 & 67.2 & 56.5 & **60.9** & 53.8 & 71.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: CircularEval multi-choice accuracy results on MMBench dev set. We adopt the following abbreviations: LR for Logical Reasoning; AR for Attribute Reasoning; RR for Relation Reasoning; FP-C for Fine-grained Perception (Cross Instance); FP-S for Fine-grained Perception (Single Instance); CP for Coarse Perception. Baseline results are taken from Liu et al. (2023).
likely consequence of the inherently grounded nature of the task. Furthermore, amalgamating these three datasets results in compounded performance gains across various capability benchmarks.
### Ablation on Fact-Augmented RLHF
We compare the performance of Fact-Augmented RLHF (Fact-RLHF) with standard RLHF in Table 5. Our findings indicate that while the conventional RLHF exhibits improvement on LLaVA-Bench, it underperforms on MMHAL-Bench. This can be attributed to the model's tendency, during PPO, to manipulate the naive RLHF reward model by producing lengthier responses rather than ones that are less prone to hallucinations. On the other hand, our Fact-RLHF demonstrates enhancements on both LLaVA-Bench and MMHAL-Bench. This suggests that Fact-RLHF not only better aligns with human preferences but also effectively minimizes hallucinated outputs.
### Data Filtering v.s. RLHF
In our preliminary tests, we employed the Fact-RLHF reward model to filter out 70%, 50%, and 30% of LLaVA data. Subsequently, we finetuned an LLaVA model on this filtered data, yielding scores of 81.2, 81.5, and 81.8 on LLaVA-Bench. However, performance on MMHAL-Bench, POPE, and MMBench remained largely unchanged. We believe this stagnation can be attributed to two factors: the absence of a negative feedback mechanism preventing the model from identifying hallucinations in its output, and the potential limitations of our Fact-RLHF reward model, especially when compared against the high-capacity oracle models in previous successful studies (Touvron et al., 2023b).
## 4 Related Work
Large Multimodal ModelsRecent success in Large Language Models (LLMs) such as GPTs (Brown et al., 2020; OpenAI, 2023), PaLM (Chowdhery et al., 2022; Anil et al., 2023), BLOOM (Scao et al., 2022; Muennighoff et al., 2022), LLaMA (Touvron et al., 2023; 20), Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) has spurred significant improvements in multi-modal models. Flamingo (Alayrac et al.) pioneered integrating LLMs into vision-language pretraining, utilizing gated cross-attention dense blocks to adapt to visual features; its open-source variant is OpenFlamingo (Avadalla et al., 2023) and IDEFICS (Laurencon et al., 2023). PaLI (Chen et al., 2022; 2023b) studies the scaling factor of V&L components across a wide range of tasks. PaLM-E(Driess et al., 2023) further extends LMM to the embodied domain. BLIP-2 (Li et al., 2023c) introduced the Querying Transformer (Q-former) to bridge the gap between image and language encoders, which was further improved by InstructBLIP (Dai et al., 2023). Otter (Li et al., 2023; 2023b) focuses on enhancing OpenFlamingo's instruction-following capability. MiniGPT-4(Zhu et al., 2023) suggests GPT4's prowess is due to sophisticated LLMs and recommends using a single project layer to align visual and linguistic models. It showcases abilities akin to GPT4 but is computationally efficient. mPLUG-Owl (Ye et al., 2023) offers a new approach: initially aligning
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{3}{*}{Method} & \multirow{3}{*}{PM} & \multirow{3}{*}{RM} & \multicolumn{6}{c}{**SFT Data**} & \multirow{3}{*}{**MMBench**} & \multirow{3}{*}{**POPE**} & \multirow{3}{*}{**LLaVA-B**} & \multirow{3}{*}{**MMHAL-B**} \\ \cline{3-3} \cline{5-10} & & & & & & & & & & \\ \hline SFT & 7b & - & ✗ & ✗ & ✗ & 38.7 & 76.0 & 81.0 & 1.3 \\ SFT & 7b & - & ✓ & ✗ & ✗ & 42.9 & 82.0 & 30.4 & 2.0 \\ SFT & 7b & - & ✗ & ✓ & ✗ & 48.5 & 79.8 & 34.7 & 1.1 \\ SFT & 7b & - & ✗ & ✗ & ✓ & 37.8 & 77.6 & 46.6 & 1.5 \\ SFT & 7b & - & ✓ & ✓ & ✓ & **52.1** & **82.7** & 86.3 & 1.8 \\ \hline RLHF & 7b & 7b & ✗ & ✗ & ✗ & 40.0 & 78.2 & 85.4 & 1.4 \\ RLHF & 7b & 7b & ✓ & ✓ & ✓ & 50.8 & **82.7** & 87.8 & 1.8 \\ RLHF & 7b & 13b & ✓ & ✓ & ✓ & 48.9 & **82.7** & 93.4 & 1.8 \\ Fact-RLHF & 7b & 13b & ✓ & ✓ & ✓ & 51.4 & 81.5 & **94.1** & **2.1** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Abalation studies on methodologies (SFT, RLHF, and Fact-RLHF), data mixtures (LLaVa with additional datasets), and model sizes of the policy model (PM) and the reward model (RM).
visual features and then fine-tuning the language model using LoRA (Hu et al., 2021). Recently, QWen-VL (Bai et al., 2023) scales the pre-training of LMM to 1.4B data and achieves impressive results across benchmarks. Among them, LLaVA (Liu et al., 2023; Lu et al., 2023) pioneered LMM work by harnessing GPT4 (OpenAI, 2023) for generating Vision-language tuning datasets similar to text instruction efforts (Wei et al., 2021; Chung et al., 2022; Longpre et al., 2023; Sanh et al., 2021; Mukherjee et al., 2023; Taori et al., 2023; Kopf et al., 2023). However, due to the syntactic nature of these generated datasets, misalignments between image and text modalities are prevalent. Our research is the first to address this misalignment through RLHF.
HallucinationPrior to the advent of LLMs, the NLP community primarily defined "hallucination" as the generation of nonsensical content or content that deviates from its source (Ji et al., 2023). The introduction of versatile LLMs has expanded this definition, as outlined by (Zhang et al., 2023) into: 1) Input-conflicting hallucination, which veers away from user-given input, exemplified in machine translation (Lee et al., 2018; Zhou et al., 2020); 2) Context-conflicting hallucination where output contradicts prior LLM-generated information (Shi et al., 2023); and 3) Fact-conflicting hallucination, where content misaligns with established knowledge (Lin et al., 2021). Within the LMM realm, "object hallucination" is well-documented (Rohrbach et al., 2018; MacLeod et al., 2017; Li et al., 2023; Biten et al., 2022), referring to models producing descriptions or captions including objects that don't match or are missing from the target image. We expand on this, encompassing any LMM-generated description unfaithful to image aspects, including relations, attributes, environments, and so on. Consequently, we present MMHal-Bench, aiming to holistically pinpoint and measure hallucinations in LMMs.
## 5 Discussions & Limitations
Hallucination phenomena are observed in both Large Language Models (LLMs) and Large Multimodal Models (LMMs). The potential reasons are two-fold. Firstly, a salient factor contributing to this issue is the low quality of instruction tuning data for current LMMs, as they are typically synthesized by more powerful LLMs such as GPT-4. We expect our proposed high-quality vision instruction-tuning data and future efforts on manually curating high-quality vision instruction tuning data can alleviate this problem.
Secondly, the adoption of behavior cloning training in instruction-tuned LMMs emerges as another fundamental cause (Schulman, 2023). Since the instruction data labelers lack insight into the LMM's visual perception of an image, such training inadvertently conditions LMMs to speculate on uncertain content. To circumvent this pitfall, the implementation of reinforcement learning-based training provides a promising avenue, guiding the model to articulate uncertainties more effectively (Lin et al., 2022; Kadavath et al., 2022). Our work demonstrates a pioneering effort in this direction. Figure 3 illustrates the two sources of hallucination in current behavior cloning training of LLMs.
However, while LLaVA-RLHF enhances human alignment, reduces hallucination, and encourages truthfulness and calibration, applying RLHF can inadvertently dampen the performance of small-sized LMMs. Balancing alignment enhancements without compromising the capability of LMM and LLM is still an unresolved challenge. Furthermore, though we've demonstrated the effective use of linear projection in LLaVA with top-tier instruction data, determining an optimal mixture and scaling it to bigger models remains intricate. Our research primarily delves into the fine-tuning phase of VLMs, leaving the issues of misalignment in other modalities and during pre-training yet to be explored.
Finally, while MMHal-Bench emphasizes the evaluation of LMMs with an aim to curtail hallucinations, it is noteworthy that short or evasive responses can inadvertently attain high scores on MMHal-Bench. This underlines an intrinsic trade-off between honesty and helpfulness (Bai et al., 2022). Consequently, for a more comprehensive assessment of alignment with human preferences, we advocate for the evaluation of prospective LMMs using both MMHal-Bench and LLaVA-Bench.
## 6 Conclusion
We proposed several strategies to tackle the multimodal misalignment problems, particularly for vision language models (VLMs), which often produce text inconsistent with the associated images. First, we enrich GPT-4 generated vision instruction tuning data from LLaVA with existing human-authored image-text pairs. Next, we adopt the Reinforcement Learning from Human Feedback (RLHF) algorithm from the text domain to bridge vision-language gaps, wherein human evaluators discern and mark the more hallucinated output. We train the VLM to optimize against simulated human preferences. Moreover, we introduce the Factually Augmented RLHF, leveraging additional factual information such as image captions to enhance the reward model, countering reward hacking in RLHF, and boosting model performance. For tangible real-world impact assessment, we have devised MMHAL-Bench, an evaluation benchmark targeting the penalization of hallucination. Remarkably, LLaVA-RLHF, being the first VLM trained with RLHF, shows a notable surge in performance across benchmarks. We opensource our code, and data and hope our findings could help the future development of more reliable and human-aligned LLMs and LMMs.
|
2309.12632 | Are Deep Learning Classification Results Obtained on CT Scans Fair and
Interpretable? | Following the great success of various deep learning methods in image and
object classification, the biomedical image processing society is also
overwhelmed with their applications to various automatic diagnosis cases.
Unfortunately, most of the deep learning-based classification attempts in the
literature solely focus on the aim of extreme accuracy scores, without
considering interpretability, or patient-wise separation of training and test
data. For example, most lung nodule classification papers using deep learning
randomly shuffle data and split it into training, validation, and test sets,
causing certain images from the CT scan of a person to be in the training set,
while other images of the exact same person to be in the validation or testing
image sets. This can result in reporting misleading accuracy rates and the
learning of irrelevant features, ultimately reducing the real-life usability of
these models. When the deep neural networks trained on the traditional, unfair
data shuffling method are challenged with new patient images, it is observed
that the trained models perform poorly. In contrast, deep neural networks
trained with strict patient-level separation maintain their accuracy rates even
when new patient images are tested. Heat-map visualizations of the activations
of the deep neural networks trained with strict patient-level separation
indicate a higher degree of focus on the relevant nodules. We argue that the
research question posed in the title has a positive answer only if the deep
neural networks are trained with images of patients that are strictly isolated
from the validation and testing patient sets. | Mohamad M. A. Ashames, Ahmet Demir, Omer N. Gerek, Mehmet Fidan, M. Bilginer Gulmezoglu, Semih Ergin, Mehmet Koc, Atalay Barkana, Cuneyt Calisir | 2023-09-22T05:57:25Z | http://arxiv.org/abs/2309.12632v2 | # Are Deep Learning Classification Results Obtained on CT Scans Fair and Interpretable?
###### Abstract
Following the great success of various deep learning methods in image and object classification, the biomedical image processing society is also overwhelmed with their applications to various automatic diagnosis cases. Unfortunately, most of the deep learning-based classification attempts in the literature solely focus on the aim of extreme accuracy scores, without considering interpretability, or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle data and split it into training, validation, and test sets, causing certain images from the CT scan of a person to be in the training set, while other images of the exact same person to be in the validation or testing image sets. This can result in reporting misleading accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When the deep neural networks trained on the traditional, unfair data shuffting method are challenged with new patient images, it is observed that the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested. Heat-map visualizations of the activations of the deep neural networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained with images of patients that are strictly isolated from the validation and testing patient sets.
## 1 Introduction
The society of biomedical image processing has an abundance of image and object classification publications due to the great success of various deep learning methods. The biomedical images in various automatic diagnostic cases may consist of stand-alone image outputs such as X-rays. However, a majority of handled image data contains outputs in terms of batch scans, CT and MRI are typical examples. In a single batch, scans from slightly different offsets are obtained in order to observe the same part of the same person. Deep learning has been shown as a prevalent and effective algorithm in the diagnosis of many medical images [1]. On the other hand, it has also been criticized that deep learning is not reliable because it is not truly explicable [2]. The method may work, yet it may be impossible to understand totally the underlying reason. Consequently, its continuing accuracy for any new diagnosis case is never guaranteed. Furthermore, while the method is slowly learning in the training phase from a new database, a random portion of its learned memories may abruptly fail [2].
Separating data into train-test sets is required to determine the performance of a machine learning (ML) algorithm in the case of supervised and semi-supervised learning. For this purpose, a dataset is taken and divided into two subsets, preferably in a random way. One of these subsets is utilized to adapt the algorithm parameters, and that set is defined as the training set. The features of the exclusive subset (i.e., the set which is not used in the training process) are applied to the algorithm as an input to make a valid success assessment. This excluded subset is defined as the test set. If the sample data is sufficiently large, one can split the whole data into training and test sets and still obtain a large number of samples, both for training and testing. If the data count is relatively small, the remedies include methods such as a modern approach of reinforcement learning, or a more classical approach of cross-folding train/test data.
For example, Goodfellow et al. asserted that the training and testing data could be generated with a probability distribution over datasets which is called the data generation process [3]. As a rule, the independent and identically distributed (i.i.d.) assumption of the data is critical, meaning that the samples in each data set are independent of each other, and the training and test sets are equally distributed.
Unfortunately, many of the ML-based diagnosis attempts in the literature did not handle image datasets that are obtained from batch scans with sufficient care regarding the _independence_ condition as explained above. In a majority of the cases, the test-train separation of images from multiple scans was done randomly, providing images from exactly the same scan to appear in the training as well as the test or validation sets. Since such a situation is a direct violation of the independence requirements, we investigate the effect of such
_unfair_ train-test splitting on the performance of ML methods in terms of detection accuracy and overall algorithmic interpretability. Besides, the efficiency and interpretability improvement under the strict (i.e., patient-wise) separation of train and test (or validation) data splitting case is studied in this work.
As a test case of the careful test-train separation problem, we consider malignancy detection of lung nodules from computed tomography (CT) scans, where the literature is crowded with several deep learning algorithm results.
In a survey paper [4], Gu et al. review available CAD systems applying deep learning to CT scan data for lung nodule detection, segmentation, classification, and retrieval. They argue the advance of deep learning, define various important characteristics of lung nodule CAD systems, and evaluate the performance of certain studies against different databases such as LIDC, LIDC-IDRI, LUNA16, DSB2017, NLST, TianChi, and ELCAP. In the selected classification studies, the accuracy rates range from 75.01% to 98.31%. High accuracy results arise from the inclusion of different CT images belonging to the same patient in both training and test sets. Throughout the paper, we call this the UNFAIR case. In this case, only image-wise cross-fold validation technique is used [1, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. In these studies, LIDC-IDRI database is widely used with various classification methods such as convolutional neural network (CNN) [1], an interpretable and multi-task learning CNN [5], three pre-trained ResNet-50 models [6], multi-view knowledge-based collaborative (MV-KBC) deep model [7], CNN, RNN and softmax [8], forward and backward GAN (F&BGAN) and Multi-scale VGG16 (M-VGG16) network [9], algorithm that fuses the texture, shape and deep model-learned information (Fuset-TSD) [10], multi-crop Convolutional Neural Network (MC-CNN) [11], lightweight and multiple view sampling based Multi-section CNN architecture [12], end-to-end deep multi-view CNN [13], K-Nearest Neighbor (kNN) and Multi-Layer Perceptron (MLP) [14], multi-view CNN (MV-CNN) [15], and obtained accuracy rates vary between 84.15% and 98.31%. Nibali et al. assessed the usefulness of very deep convolutional neural networks in the expert-level classification of malignant lung nodules [16]. Based on the well-known ResNet architecture, they investigated the effect of curriculum learning, transfer learning, and different network depths on malignancy classification accuracy; and obtained an accuracy rate of 89.9% for the LIDC-IDRI database. In [17], only the LIDC database was used with denoising autoencoder (DAE) and 3D Resolved Ambiguity Local Binary Pattern (3D-RALBP) methods, and a maximum accuracy rate of 94.95% was obtained. The Optimal Deep Neural Network (ODNN) was applied to CT images and then, optimized with the Modified Gravitational Search Algorithm (MGSA) to determine the classification of lung cancer, and the accuracy of 94.56% was obtained for ELCAP database [18]. When the CT images taken from the Cancer Imaging Archive were used for lung nodule classification, extreme accuracy rates of 99.51% and 97.14% were obtained using kNN with AlexNet & mRMR feature extractor in [19], and LDA classifier in [20] respectively. In another study, Tran et al. suggested a new 2D architecture for a deep convolutional neural network using focal loss, and they obtained a high accuracy rate of 97.2% for the LUNA16 database [21].
The extreme accuracy rates mentioned above _could_ be attributed to a possible overfit due to an unintentional leak of same-batch image data to both training and test sets. All of these manuscripts report that the whole image lot was shuffled completely, and train-test separation was done randomly. In such a case, it is perfectly possible that some images from the same patient scan may go to the training set, while the rest may go to the test set, making an unfair splitting that is prone to overfitting with too high accuracy results.
Contrary to the above-mentioned unfair splitting, a fair splitting approach is also possible, where one must carefully assign distinct patients' scan images to train and test sets. The literature also contains several papers, where this attention was paid [22, 23, 24, 25, 26]. In these studies, the maximum accuracy rates of 75.01%, 81.47%, 82.1%, 89.45%, and 91.8% were obtained by using deep features extracted from an autoencoder along with a binary decision tree [22] for a part of LIDC database, a 3-D version of the RPN using a modified U-net and a leaky noisy-OR model for DSB2017 database [23], CNN model for LIDC-IDRI database [24], VGG-s CNN models for NLST database [25] and an ensemble of triplet neural networks for LUNA 16 database [26] respectively. In [27] and [28], authors created their own databases, and the accuracy rates of 75.2% and 71.1% were obtained using artificial intelligence (AI) systems obtained from the union of convolutional neural networks (CNN) and 2D deep convolutional neural network architecture respectively.
Although overfitting due to unfair test-train dataset splitting seemingly gives higher accuracy results, the reliability of the results could be questionable from the following aspects:
* Do these trained ML techniques still provide high accuracy for a completely new "challenge" data set?
* Do these trained ML techniques perform classifications by really focusing on the actual nodule positions (marked by radiologist experts)?
* Hence, are these techniques _interpretable_?
A follow-up question automatically arises:
* If we perform strictly fair test-train splitting, does this improve performance on the challenge data set and interpretability?
This study provides experiments comparing the reliability of deep learning algorithms for lung nodule classification by implementing fair and unfair data splitting. Since the datasets from the LIDC-IDRI database have been widely
used for studying nodule detection and classification methods, including various studies relevant to this work, the LIDC-IDRI database is used in the experimental studies of this paper.
The comprehensive review by Loizidou et al. in a different case of detection and classification in mammography clearly points out the problems that arise when strict patient-wise training/validation/test separation is not performed [29]. They propose that images and image labels (i.e., ROIs) of the same patient should be incorporated into the training, validation, or test mammography datasets. They also express concern regarding the high classification accuracy rates reported in various papers that failed to perform this separation, as they render the performances unverifiable for new patient cases in real-life. In this study, we explore this idea in the context of CT scans, demonstrating the invalidity of unfair training accuracy results numerically. Furthermore, we show that deep neural networks trained using unfair random image splitting are incapable of focusing attention on indicator regions of CT images (i.e., nodule regions), which renders the results completely non-interpretable. Several experimental studies related to unfair and fair data splitting cases for lung nodule classification are performed. For this purpose, deep learning neural network with three architectures (mobileNetV2, efficientNet, and VGG16). For all of these deep learning methods, the model evaluation with a new patient dataset demonstrates that data shuffling done inattentively makes the trained model inapplicable in real-life, as well as reducing the learning capability of the model by focusing on irrelevant features in the neural network layers [30]. On the contrary, strict patient isolation between train and test datasets provides significantly better results in real-life challenge datasets containing images from new patients. Besides, this isolation helps deep neural network layers to better focus their attention on the correct nodule locations on the image. This interpretability attribute is visualized with a heat-map technique, which renders higher activation network portions red while rendering the low activation portions blue. Finally, this visualization is further quantized as a numerical value to make an assessment of interpretability using three novel interpretability functions introduced herein.
## 2 Materials and Method
### Dataset
The dataset used in the experimental part of our proposed approach is extracted from the publicly available LIDC/IDRI dataset [31]. National Cancer Institute (NCI) started to create the LIDC database in 2001, and the Foundation for the National Institutes of Health (FNIH) supported it to create a bigger database named LIDC/IDRI in 2004. LIDC was supported by five academic medical centers, and two more centers came with the addition of IDRI.
LIDC/IDRI is one of the largest available databases as it contains 1018 thoracic CT scans taken from 1010 different patients. These scans are acquired by using a number of different scanner devices and acquisition parameters. Each scan in the dataset has an XML (eXtensible Markup Language) file that contains diagnosis and nodule reports created by four experienced radiologists. These reports are created in two phases: blinded and unblinded reading phases. In the blinded reading phase, radiologists independently classify each nodule into three categories (3mm \(\leq\) nodule \(<\) 30mm, nodule \(<\) 3mm, non-nodule \(>\) 3mm) according to the nodule diameters. In the unblinded reading phase, each radiologist sees the blinded phase decisions of the other three radiologists anonymously and specifies his/her final decision about a nodule. Radiologists are not expected to achieve any consensus in this process. Since the probability of being in the malignant class for the nodules having a diameter greater than or equal to 3 mm is higher compared to other nodules, the main goal of the LIDC/IDRI project is to determine the nodules which are in this category. Therefore, radiologists are asked to draw the nodule contours, specify their locations and give malignancy scores only for these nodules. All these data are saved to the XML files of scans and used as ground truths in further studies.
In this study, we utilized the LIDC/IDRI dataset to investigate the malignancy of pulmonary nodules in Computed Tomography (CT) scans. The selection of the CT scans was based on the "LIDC Nodule Size List" document, which was available on the official website of the Cancer Imaging Archive. The document provided information on the number of nodules, the number of radiologists who identified each nodule, and the nodule volumes for each scan. The malignancy scores for each nodule were determined by collecting data from the XML files accompanying each scan in the dataset folder. The nodule ID information was used to identify the malignancy characteristics of a nodule in a scan, as each radiologist gave a different name to the same nodule. To ensure the reliability of the dataset, only nodules that were scored by at least three radiologists were used in the study, and each selected nodule was re-examined and approved by a practicing radiologist. The final malignancy score was determined by averaging the scores assigned by each radiologist. Nodules with an average score of less than or equal to 1.5 were classified as benign, and those with an average score of larger than or equal to 3.5 were classified as malignant. A total of 63 benign nodules and 98 malignant nodules were included in the study. The nodule images were acquired by using the noduleID values of the selected nodules and the "imageZposition" parameter, which determined the slice numbers of each nodule in a scan. The images were in \(512\times 512\) DICOM format, and the MicroDicom software was used to display, analyze, and convert the selected images into \(512\times 512\) PNG format. A total of 303 benign and 919 malignant class images were acquired, and Figure 1 presents a sample of a benign and a malignant lung CT image.
To enhance the generalizability of the trained networks and avoid overfitting, we performed data augmentation on the selected original dataset. Each image in the benign and malignant classes was rotated by +2, -2, +4, and -4 degrees,
resulting in a total of 1515 benign and 4595 malignant images. After augmenting the dataset, we split the data into two categories: unfair data splitting and fair data splitting. The former refers to a dataset where images from the same scan can be used in both the training and testing processes. We randomly divided the augmented dataset into train, validation, and test sets, without applying patient-wise division. The train folder comprised 969 benign and 2940 malignant images, the validation folder contained 410 benign and 1241 malignant images, and the test folder had 136 benign and 414 malignant images. In contrast, the fair dataset was created by implementing patient-wise data splitting. We separated the data into a train-validation folder and a test folder. CT scans belonging to patients in the train-validation folder were solely used for training and validation to prevent any correlation between the images in the train-validation and test folders. Hence, the test dataset did not contain any images that were used to train and validate the models.
### Deep Neural Network Architectures
Nowadays, deep neural networks (DNNs) have become the gold standard in classification problems, and huge portions of these networks are composed of CNNs. In this paper, the classification task is realized by using three well-known DNN architectures, and short explanations of these architectures are given below.
Simonyan and Zisserman studying at the Visual Geometry Group Lab of Oxford University have suggested VGG-16 architecture in 2014 [32]. VGG-16 Network architecture contains 16 groups of layers in total. It takes RGB images with a resolution of 224x224 pixels as input. It has a convolution kernel with the size of 3x3 and a maximum pooling layer with the size of 2x2. It is one of the most widely used architectures in various pattern recognition studies in spite of its comparatively slower training process.
EfficientNet is another DNN architecture that scales some parameters such as depth, width, and resolution with the help of a compound coefficient [33]. EfficientNet differs from the other architectures by uniformly balancing these parameters. It aims to lower the calculation cost by dividing the conventional convolution into two phases. Along with that, it diminishes possible losses resulting from the usage of Rectified Linear Unit (ReLU) by utilizing a linear activation function at its final layer blocks.
MobileNet is a newly invented neural network architecture by a number of Google researchers, and it is adapted mainly to mobile devices [34]. Since many mobile devices have some source limitations, researchers find them attractive due to their fruitful characteristics, such as being small and low-latent. MobileNetV2 is the second version of MobileNet, and some bottleneck layers are used. Also, MobileNetV2 does a filtering operation on the features to overcome the nonlinearity problem.
### Experimental Study
Google Colaboratory or "Colab" was used as an environment for implementing our experiments. The environment provides a tool for writing and executing python code and is especially applicable for machine learning tasks [35]. Keras with Tensorflow backend is used to import the DNN architectures. End-to-end binary classification is carried out by modifying all three ImageNet pre-trained DNN final layers with binary softmax layers and by training them. A simple resizing operation on the dataset images is carried out according to the default input size of the networks before giving them as input to the DNNs.
#### 2.3.1 Training Procedure
Training parameters used in the experiments are given in Table 1. The table shows the starting value of the learning rate, and it is reduced by one-tenth if no validation accuracy improvement is seen for a number of epochs.
In the unfair train-validation process, 70% of the dataset is utilized for training and validation, while the remaining 30% is set aside for testing. This method involves feeding different images from the same CT scan into the input layers of the architectures, resulting in unfair training. Similarly, the images in the test set could also be from the same patients utilized in the train-validation process, resulting in unfair testing. Such train-validation-test sets contain images from all CT scans, producing misleading accuracy values and causing the models to overfit at the early training stages, as demonstrated in Figure 2-a.
In order to avoid overfitting and report reliable accuracy results, CT scans of different patients were divided into separate folders for the second experimental set, which we
\begin{table}
\begin{tabular}{c|c} \hline
**Parameter** & **Value** \\ \hline Learning Rate & 0.0001 \\ Epoch Number & 50-200 \\ Batch Size & 32 \\ Optimizer & ADAM \\ \hline \end{tabular}
\end{table}
Table 1: DNN training parameter settings
Figure 1: Axial CT images of (a) Benign, (b) Malignant pulmonary nodules. Nodules are circled with red color on the images.
call the FAIR training procedure. Monte Carlo Cross Validation (MCCV) [36] was applied to use CT scans belonging to different patients for each training and validation set in the architectures. Images from a random group of patients are used for training, while images from the rest of the patients are used for validation. Furthermore, images from a completely different set of patients are used for testing. The patient-wise train-validation splitting in MCCV is illustrated in Figure 3. The improvement in the learning process and validity of the reported accuracy results are analyzed. Figure 2-b clearly shows that the proposed training process improves in time and no inconsistent overfitting occurs. Furthermore, the resultant networks provide accuracy results that are more reliable, as will be discussed in Sec. 2.3.2.
#### 2.3.2 Classification Results
Three DNN architectures; MobileNetV2, EfficientNetB0, and VGG16, were trained and validated, first through the unfair training-validation separation, and then through fair dataset splitting by MCCV. Table 2 compares the classification accuracies for the unfair and fair experiments of each architecture. As expected, the architectures tend to report misleadingly high accuracies when they're unfairly trained and tested, while they reach lower (but actually correct) accuracy values when patient-wise data splitting is carried out, and different CT scans are used for testing.
In order to assess the correctness and validity of the reported test accuracies, CT images of a completely isolated set of patients (called the challenge set) were applied to the trained networks. The obvious observation is that the reported test accuracies (left-side column) of the unfairly trained network are far from being valid for the challenge set (right-side column), whereas the performance of the fair-trained network is totally consistent with the reported test accuracies. Interestingly, certain networks (i.e., EfficientNet and VGG16) result in an extreme failure in the challenge dataset when they are unfairly trained, giving an impression that overfitting and patient-learning could be a more pronounced issue in these networks. In order to avoid that situation, when the networks are fairly trained, the test and challenge accuracies could become modestly high, consistent, and reliable.
## 3 Interpretability Analysis
The use of heat maps, also referred to as Class Activation Maps (CAMs), is a common technique for visualizing the magnitude of a phenomenon through color-coded representations [37, 38]. In the context of deep neural networks (DNNs), which often operate as black boxes with limited interpretability, visualizing the decision-making process is crucial for assessing the fairness of the model. The creation of heat maps involves several steps, including preprocessing of the input image, prediction of the image class by the trained model, and calculation of gradients using both the output of the last convolutional layer and the output of the deep model. Neuron weights are then acquired via average pooling of these gradients in three axes. The values in each
Figure 3: Monte Carlo Cross-Validation. The diagram shows that patients of the validation set change each time a new epoch starts.
Figure 2: Training and validation accuracies for both (a) unfair and (b) fair train-validation procedures using EfficientNet.
## 5 Conclusion
Figure 4: Heat-Map visualizations (CAMs) for eight randomly selected test images. The first column indicates masks of the nodules, the second column indicates original CT images, the third column indicates fair model CAMs, and the fourth column indicates unfair model CAMs. Nodules are pointed with the red circles in the original CT images.
layer of the last convolutional block are subsequently multiplied by their corresponding neuron weights, and the average and maximum of these values are computed to generate the heat map. The heat map is then normalized, resized to the input image dimension, multiplied by 255, and subjected to color mapping before being combined with the original image. By highlighting the areas of an image that are most influential in the model's prediction, heat maps can provide insight into the internal workings of DNNs and their ability to perform complex tasks.
In order to illustrate the interpretability analysis in detail, a set of 8 images was randomly selected and used to test both fair and unfair models. The aim was to use heat maps to identify the regions in the lungs that the models mainly use to make their final decisions. The results of these tests are presented in Figure 4, with the red color indicating the strongest activations, using standard heat color maps from the OpenCV Python library. Upon reviewing the heat maps, it becomes clear that the unfair model produced malign predictions despite focusing on areas that are not even tumor regions. Conversely, the fair model was able to make malign predictions by focusing exclusively on the tumor regions. This finding demonstrates that the unfair models are not reliable, as they concentrate on areas that are not related to the tumor before making their final decisions. This lack of reliability would likely be amplified if the models were trained and tested on different patient images, as demonstrated by the fair model's test scores.
Figure 5-a shows a malignant CT image which is taken from a scan with an ID of 54 in the LIDC/IDRI dataset (tumor region is indicated). Once this image is tested in a fair and an unfair model, it is predicted correctly as malignant by both models. However, the reliability of these results becomes clear once the activation heat maps are overlaid on the CT image for the tested models. The heat map in Figure 5-b shows that the regional activation (hence visual attention) of the fair model is high at and around the actual tumor region. On the other hand, Figure 5-c, which shows the heat map from the unfair model, indicates no visual focus on the locations around the tumor region. It is argued that the model in Figure 5-c gives a correct decision using an unreliable reasoning as a result of probable overfitting through unfair training.
To extend and generalize the findings from Figure 5, the study employs two prominent interpretability score methodologies. Figure 6 illustrates the general framework for conducting the interpretability analysis.
The first approach for the interpretability assessment focuses on attention heat map values that correspond to the tumor nodule region and compares them to the rest of the image. These values, which indicate higher activation and hence visual attention, can be either averaged inside the nodule regions, or the highest value inside the region can be considered as the attention value. Using the examples of the challenge images in Figure 4, the mean and maximum heat map values inside the nodule regions using the unfair models are provided in Table 3. Clearly, both the maximum and the average heat maps inside the nodule regions using the unfair models are significantly lower than the values obtained using the fair models.
The second approach for the interpretability assessment measures the structural similarity of the nodule regions and compares them to the structure of the shape obtained from the heat map image. It is argued that if these two shapes structurally match, it indicates a high interpretability
\begin{table}
\begin{tabular}{l|c|c c|c c} \hline \hline \multirow{2}{*}{DNN Architecture} & \multirow{2}{*}{Number of Epochs} & \multicolumn{2}{c|}{Fair} & \multicolumn{2}{c}{Unfair} \\ \cline{3-6} & & Test Acc & Challenge Acc & Test Acc & Challenge Acc \\ \hline \multirow{2}{*}{MobileNetV2} & 50 & 0.7365 & 0.7151 & 0.9836 & 0.7402 \\ & 200 & 0.7081 & 0.6702 & 0.9855 & 0.6693 \\ \hline \multirow{2}{*}{EfficientNetB0} & 50 & 0.7035 & 0.6812 & 0.9873 & 0.4331 \\ & 200 & 0.7194 & 0.7188 & 0.9873 & 0.4252 \\ \hline \multirow{2}{*}{VGG16} & 50 & 0.6881 & 0.6432 & 0.9909 & 0.3701 \\ & 200 & 0.7220 & 0.6933 & 0.9873 & 0.3701 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification accuracies obtained by implementing fair and unfair training-testing for both test and challenge datasets.
Figure 5: (a) Example of a CT image with a malignant nodule from LIDC/IDRI (with a yellow arrow showing nodule place); (b) corresponding fair model heat-map output; (c) corresponding unfair model heat-map output.
score, and the DNN focuses on the nodule region with high attention. There are two well-known correlation techniques that measure the pixel layout similarity between two images; the Pearson and the Spearman correlation [39]. Pearson correlation evaluates the linear relationship between two images, whereas Spearman correlation is a more general measure that evaluates the monotonic relationship between two images. These classical correlation values are evaluated with the aim of finding the shape-wise relation between the focus heatmap values and binary morphological shape corresponding to the ground truth nodule label pixels. It is argued that a high correlation (closer to one) would indicate that the heatmap shows a correct focus to the nodule regions, whereas smaller correlation values would mean an incorrect, hence an uninterpretable focus. Table 4 shows Pearson and Spearman correlation values between nodule regions and heat map images for the set that was used in Figure 4.
Similar to the results in Table 4, the stronger correlations between the nodule regions and the provided heat maps for the fair models indicate that the fair model causes a more reliable machine learning process as compared to the unfair model, where these correlation values are visibly lower.
## 4 Discussions and Conclusion
Lung cancer remains to be the leading cause of cancer-related deaths worldwide. Due to the importance of early detection of lung nodules and accurate differentiation between benign and malignant nodules in effective treatment and patient survival, the interest in fast and accurate application of computer-aided diagnosis is overwhelming. In recent years, ML and deep learning techniques have been widely used for the automatic classification of lung nodules in CT scans, providing a promising solution to improve the
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Patient IDs} & \multicolumn{2}{c|}{Unfair} & \multicolumn{2}{c|}{Fair} \\ \cline{2-5} & Pearson Corr & Spearman Corr & Pearson Corr & Spearman Corr \\ \hline
11 & 0.0133 & 0.0286 & 0.0624 & 0.0548 \\ \hline
37 & 0.0306 & 0.0375 & 0.0764 & 0.0668 \\ \hline
47a & 0.0112 & 0.0183 & 0.0563 & 0.0561 \\ \hline
47b & 0.0473 & 0.0462 & 0.0447 & 0.0465 \\ \hline
240a & 0.0053 & 0.0132 & 0.0327 & 0.0353 \\ \hline
240b & 0.0226 & 0.0300 & 0.0671 & 0.0695 \\ \hline \end{tabular}
\end{table}
Table 4: Pearson and Spearman correlations of heatmap and ground truth nodule shapes for Unfair and Fair cases
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Patient IDs} & \multicolumn{2}{c|}{Unfair} & \multicolumn{2}{c|}{Fair} \\ \cline{2-5} & \multicolumn{1}{c|}{Nodule Max} & \multicolumn{1}{c|}{Nodule Mean} & \multicolumn{1}{c|}{Nodule Max} & \multicolumn{1}{c|}{Nodule Mean} \\ \hline
11 & 0 & 0 & 0.8882 & 0.7908 \\ \hline
37 & 0.3824 & 0.2997 & 0.8030 & 0.7076 \\ \hline
47a & 0 & 0 & 0.8677 & 0.8116 \\ \hline
47b & 0.6942 & 0.5901 & 0.7444 & 0.7046 \\ \hline
240a & 0.3762 & 0.1990 & 0.6319 & 0.4167 \\ \hline
240b & 0.6607 & 0.5152 & 0.7529 & 0.6972 \\ \hline \end{tabular}
\end{table}
Table 3: Heatmap values of several patients for Unfair and Fair cases with respect to nodule max and nodule mean criteria
Figure 6: Flow chart of the interpretability analysis
accuracy and efficiency of diagnosis over large volumes of data. The number of published articles regarding the use of ML or DNN-based methods for automatic nodule classification in CT images is well above several hundreds each year. Particularly, DNN-based approaches have shown great potential in the field of computer-aided diagnosis (CAD) for lung nodule classification from CT images. However, most of the deep learning-based classification techniques in the literature only focus on higher _reported_ accuracy results, without considering the true reliability of the eventual system.
Our study has shown that patient-level separation is crucial in the training and testing of deep neural networks for lung nodule classification in CT images. Our findings indicate that careless image splitting without patient-wise separation in training and testing can lead to incorrect and unfair results that cannot be verified in new challenge datasets. On the other hand, patient-wise splitting in the training and testing process provides consistent, correct, and reliable results for accuracy percentages.
Moreover, the experimental results have also shown that patient-wise splitting in training and testing improves the interpretability of the constructed deep neural network by means of showing better attention to the activation values around the correct nodule regions. This improvement in interpretability was demonstrated using two different approaches: analysis of attention heat map values and correlation analysis between heat map images and the nodule regions.
Based on our findings, we recommend the following best practices for deep neural network training and testing for lung nodule classification in CT images:
* Strictly separate the training, validation, and test datasets at the patient level to ensure reliable and interpretable results.
* Verify the interpretability of the trained networks by analyzing attention heat map values and correlation analysis between heat map images and the nodule regions.
* Report accuracy percentages for both overall performance and performance on new patient images to ensure the generalization of the deep neural network to new patients.
* Provide clear documentation of the dataset splitting methodology in any publications related to deep neural network training and testing for lung nodule classification in CT images.
The provided observations indicate that further care must be taken in ML and DNN applications of crucial medical applications such as benign/malignant classifications or diagnosis aid for achieving better reliability and real usability in medicine.
|
2309.10025 | Emergent Chern-Simons Interactions in 3+1 Dimensions | Parity violating superconductors can support a low-dimension local
interaction that becomes, upon condensation, a purely spatial Chern-Simons
term. Solutions to the resulting generalized London equations can be obtained
from solutions of the ordinary London equations with a complex penetration
depth, and suggest several remarkable physical phenomena. The problem of flux
exclusion by a sphere brings in an anapole moment, the problem of
current-carrying wires brings in an azimuthal magnetic field, and the problem
of vortices brings in currents along the vortices. We demonstrate that
interactions of this kind, together with a conceptually related dimensionally
reduced Chern-Simons interaction, can arise from physically plausible
microscopic interactions. | Marcus Stålhammar, Darya Rudneva, Thors Hans Hansson, Frank Wilczek | 2023-09-18T18:00:01Z | http://arxiv.org/abs/2309.10025v2 | # Emergent Chern-Simons Interactions in 3+1 Dimensions
###### Abstract
Parity violating superconductors can support a low-dimension local interaction that becomes, upon condensation, a purely spatial Chern-Simons term. Solutions to the resulting generalized London equations can be obtained from solutions of the ordinary London equations with a complex penetration depth, and suggest several remarkable physical phenomena. The problem of flux exclusion by a sphere brings in an anapole moment; the problem of current-carrying wires brings in an azimuthal magnetic field; the problem of vortices brings in currents along the vortices. We demonstrate that interactions of this kind, together with a conceptually related dimensionally reduced Chern-Simons interaction, can arise from physically plausible microscopic interactions.
pacs: 74.20.De, 74.20.Rp, 03.65.Vf
## I Introduction
The principles of symmetry and locality allow us to survey interactions that are likely to emerge in the description of materials at low energy in a systematic way. This "Ginzburg-Landau" or "effective field theory" approach has proved to be a fruitful guide to low-energy dynamics, phase transitions, and response to external fields in many applications. In this approach, the focus is on possible interactions represented by local operators of low mass dimension. In this paper we shall study terms in the effective Lagrangian, specifically \(\vec{\beta}\cdot\vec{A}\times\vec{E}-A_{0}\vec{\beta}\cdot\vec{B}\) and especially \(\vec{A}\cdot\vec{B}\), that break discrete symmetries. For general reasons related to gauge symmetry it appears that the second of these terms cannot appear in a normal material in its thermodynamic ground state [1]. However, as demonstrated below, it is permitted in superconducting states that can support persistent current. We will provide examples of microscopic models that illustrate both terms. These terms have noteworthy phenomenological implications, which we will explore further and exemplify below.
Interactions mediated by Lagrangian densities of the Chern-Simons form
\[{\cal L}\ \propto\ \epsilon^{\alpha\beta\gamma}A_{\alpha}\partial_{\beta}A_{ \gamma}, \tag{1}\]
involving a gauge field \(A_{\alpha}\), and their multi-field and non-abelian generalizations, have attracted much attention in recent years, mostly in the context of 2+1 dimensional systems, where they can have a topological character. In its most straightforward application, the gauge field \(\vec{A}\) is the gauge field of electromagnetism. Then these terms directly induce, and parameterize, interesting aspects of electromagnetic response that, at a heuristic level, derive from current-field mixing.
Here we will examine a different appearance of interactions of this kind, in 3+1 dimensions, where we take all the indices to be spatial:
\[{\cal L}_{\rm CSt}=-\frac{\beta}{2}\vec{A}\cdot\vec{B}. \tag{2}\]
Terms of this kind are not relativistically invariant, and they also violate parity (but not time reversal). That does not forbid their appearance, since many materials, such as those based on crystals lacking an inversion center, and others described below, violate those symmetries. More seriously, such terms embody only a limited form of gauge symmetry. Under a local gauge transformation \(A_{\alpha}\to A_{\alpha}+\partial_{\alpha}\Lambda\) we have
\[\epsilon^{\alpha\beta\gamma}A_{\alpha}\partial_{\beta}A_{\gamma}\ \to\ \epsilon^{\alpha\beta\gamma}A_{\alpha}\partial_{\beta}A_{\gamma}\ +\ \partial_{\alpha}(\epsilon^{\alpha\beta\gamma}\Lambda \partial_{\beta}A_{\gamma}), \tag{3}\]
so that the change in the bulk interaction can be cast into a surface term; but in possible applications the surface term requires careful consideration. Notably, in the context of the quantum Hall effect it is connected to the existence of edge modes and is cancelled through an anomalous surface theory [2]. In addition to the spatial Chern-Simons term (2), we shall also consider the term
\[{\cal L}_{\rm CSs}=\frac{1}{2}\left(\vec{\beta}\cdot\vec{A}\times\vec{E}-A_{0 }\vec{\beta}\cdot\vec{B}\right). \tag{4}\]
In a superconductor we can generate a spatial Chern-Simons interaction from a conventional, manifestly gauge invariant interaction by condensation, _viz._:
\[{\rm Re}\,\phi^{\dagger}i\vec{D}\phi\cdot\vec{B}\ \to\ qv^{2}\vec{A}\cdot\vec{B}, \tag{5}\]
where \(\vec{D}\equiv\vec{\nabla}-iq\vec{A}\) is the covariant derivative and \(\phi\to\langle\phi\rangle\equiv v\) through condensation. This is similar to how condensation generates a photon mass term \(\propto A^{2}\) from the kinetic energy \(\propto\phi^{\dagger}\vec{D}^{2}\phi\). We can expect such terms to arise even in \(s\)-wave superconductors that violate parity symmetry, for example those based on chiral crystals,
on organic superconductors subject to chiral selection of the base molecules, or generic \(s\)-wave superconductors incorporating chiral dopants. Below, we will also display and analyze a specific microscopic model dynamics that does the job.
Heuristically, one identifies quantities \(\vec{\bar{j}}\) that appear in Lagrangian densities of the form \(\vec{A}\cdot\vec{\bar{j}}\) as effective currents, since they will appear as such in the Maxwell equations. (If \(\vec{\bar{j}}\) depends explicitly on \(\vec{A}\), slight complications ensue.) Famously, the London diamagnetic current \(\vec{j}_{d}\propto\vec{A}\) is characteristic of the superconducting photon mass. Following this heuristic, the spatial Chern-Simons term Eqn. (2) gives us a current \(\vec{j}_{CS}\) proportional to the magnetic field,
\[\vec{\bar{j}}_{CSt}\propto\vec{B}. \tag{6}\]
This yields unusual, interesting and potentially important phenomenological consequences, which is the subject of Section II.
From a broader theoretical perspective, a natural term descending from a Lorentz invariant effective action is \(\epsilon^{\mu\nu\sigma\omega}\beta_{\mu}A_{\nu}F_{\sigma\omega}\), where \(\beta_{\mu}\) is an axial (_i.e._, unnatural) four vector. Constant values of \(\beta\) violate Lorentz invariance, and a constant \(\beta_{0}\) can be powerfully constrained phenomenologically using astronomical data [3]. But interactions of the form Eqn. (2) and (4) arise naturally from the canonical axion coupling to electromagnetic fields
\[{\cal L}\ \propto\ a\epsilon^{\alpha\beta\gamma\delta}\partial_{\alpha}A_{ \beta}\partial_{\gamma}A_{\delta} \tag{7}\]
for the simplest space-time variations of \(a\), corresponding to the axion background \(a=\beta t\) and \(a=\vec{\beta}\cdot\vec{x}\), respectively, where the latter describes an "axion wind" background, _i.e.,_ one that is constant in time but varies linearly in a spatial direction. Of course, both can occur together.
The expression \(\vec{A}\cdot\vec{B}\) appears in many places in the literature on magnetohydrodynamics [4], where it is used to characterize the magnetic field configurations in the plasmas. In particular, Eqn. (6) describes a "force free" field since the Lorentz force on a current parallel with the magnetic field vanishes [5]. We stress, however, that here we are interested in the response of materials where \(\vec{A}\cdot\vec{B}\) is part of the effective action, and thus determines the response to external fields.
## II Phenomena in actively chiral superconductors
For ease of reference, and in view of their connection with chirality and optical activity, we shall refer to superconductors that incorporate a purely spatial Chern-Simons term \({\cal L}_{CSt}\) as _actively chiral_ superconductors. We will work with the Lagrangian density
\[{\cal L}\ =\ \frac{1}{2}E^{2}-\frac{1}{2}B^{2}-\frac{\beta}{2}\vec{A}\cdot \vec{B}-\frac{\gamma}{2}A^{2}. \tag{8}\]
### Plane Waves and Stability
From Eqn. (8) we derive, after fixing the gauge \(A_{0}=0\) and adopting the plane-wave _ansatz_
\[\vec{A}\ =\ \vec{\varepsilon}\exp i(\vec{k}\cdot\vec{x}-\omega t) \tag{9}\]
the equations of motion
\[\vec{k}\cdot\vec{\varepsilon} = 0, \tag{10}\] \[(\omega^{2}-k^{2}-\gamma)\vec{\varepsilon}\ \mp\ \beta\vec{k}\times\vec{\varepsilon} = 0. \tag{11}\]
The eigen-polarizations are transverse and circular. Indeed, with
\[\vec{k} = (0,0,k),\] \[\vec{\varepsilon} \propto (1,\pm i,0), \tag{12}\]
we find the dispersion relations
\[(\omega^{2}-k^{2}-\gamma)\mp\beta k\ =\ 0. \tag{13}\]
The two circular polarizations propagate with different velocities. This gives rise to optical activity, _i.e._, rotation of the plane of linear polarization as the (transverse) wave propagates.
For stability in time we require that for real \(k\) the \(\omega\) that solve the dispersion relation are real. This gives us the stability condition
\[4\gamma\geq\beta^{2}. \tag{14}\]
The same condition also ensures the positivity of the energy. Indeed, since the electric field contribution is manifestly positive, at issue is only the positivity of the magnetic energy
\[{\cal E}\ =\ \frac{1}{2}\,\int B^{2}+\beta\vec{A}\cdot\vec{B}+\gamma A^{2}. \tag{15}\]
We can write this as
\[{\cal E}=\frac{1}{4}\,\int\left(1+\frac{\beta}{2\sqrt{\gamma}} \right)\left(\vec{B}+\sqrt{\gamma}\vec{A}\right)^{2}\,+\] \[\left(1-\frac{\beta}{2\sqrt{\gamma}}\right)\left(\vec{B}-\sqrt{ \gamma}\vec{A}\right)^{2}. \tag{16}\]
When Eqn. (14) is satisfied the coefficients of these two manifestly positive terms will both be non-negative.
The stability condition Eqn. (14) requires, for \(\beta\neq 0\), that \(\gamma>0\). Thus, it requires a non-zero effective photon mass, such as we have in superconductivity. Note that if the condition Eqn. (14) is relaxed, the modes at very low \(k\) will still be stable since the \(A^{2}\) term dominates, and the same will be true for large \(k\) modes where the \(B^{2}\) term dominates. There will however be a region of intermediate \(k\) where the \(\vec{A}\cdot\vec{B}\) term will give an instability. In a more complete theory this instability could be cured by higher order terms and this would open for a non-trivial magnetic structure in the ground state. We will revisit this subject in a slightly different context in the Section III.
### Connection to Optical Activity
The optical activity of a material is usually described in terms of a frequency and momentum dependent dielectric constant and/or magnetic permeability. We shall consider the latter case and write the magnetic energy density as
\[{\cal E}_{m} = \frac{1}{2}B^{i}\mu_{ij}^{-1}(\omega,\vec{\nabla})B^{j} \tag{17}\] \[\rightarrow \frac{1}{2\mu}B^{2}+\frac{\alpha(\omega)}{2}\vec{B}\cdot\vec{ \nabla}\times\vec{B}\,,\]
where in the second line we put \(\mu_{ij}(\omega,\vec{\nabla})=\mu(\delta_{ij}-\alpha(\omega)\epsilon_{ikj} \nabla^{k})\) and expanded to leading order in \(\vec{\nabla}\). The energy corresponding to the second term can be rewritten as
\[E_{c} = \frac{\alpha(\omega)}{2}\int_{V}d^{3}x\,(\vec{\nabla}\times\vec{A })\cdot(\vec{\nabla}\times\vec{B})\] \[= -\frac{\alpha(\omega)}{2}\int_{V}d^{3}x\,\vec{A}\cdot\nabla^{2} \vec{B}\] \[+ \frac{\alpha(\omega)}{2}\int_{\delta V}dS_{i}\,A_{j}(\partial_{i }B_{j}-\partial_{j}B_{i}).\]
We already mentioned that the first term on the second line is not gauge invariant, but in this case it is easy to show that the surface term in the last line, as expected, restores gauge invariance, so there is no need for an additional surface theory.
Let us now assume that we have a superconductor with randomly implanted optically active impurities, that will add a term \(E_{c}\) to the free energy functional of the superconductor. To leading order in \(\alpha\) we can then just substitute the London relation \(\vec{\nabla}^{2}\vec{B}=-\lambda_{L}^{-2}\vec{B}\) in Eqn. (17), and assuming that \(\alpha(\omega)\) can be approximated by a constant \(\alpha\) at low frequencies we obtain the low energy Lagrangian in Eqn. (8) if we identify \(\beta=\alpha(0)/\lambda_{L}^{2}\).
### Solution Schema
We are interested in solving the equation
\[\vec{\nabla}\times\vec{\nabla}\times\vec{B}+\beta\vec{\nabla}\times\vec{B}+ \gamma\vec{B}\ =\ 0. \tag{19}\]
Eqn. (19) is a generalization of the famous London equation for superconducting magnetostatics, which is the special case \(\beta=0\). In the London equation \(\gamma\) represents the inverse square of the penetration depth. As we now demonstrate, one can generate solutions to Eqn. (19) out of solutions to the London equation with a complex coefficient.
Indeed, inserting the superposition _ansatz_
\[\vec{B}\ =\ \vec{B}_{a}\,+\,\kappa\vec{\nabla}\times\vec{B}_{a} \tag{20}\]
into Eqn. (19) leads to
\[(1+\beta\kappa)\,\vec{\nabla}\times\vec{\nabla}\times\vec{B}_{a} \,+\,\gamma\vec{B}_{a}\ +\] \[\kappa\vec{\nabla}\times\vec{\nabla}\times\vec{B}_{a}\,+\,(\beta+ \gamma\kappa)\,\vec{\nabla}\times\vec{B}_{a}\] \[=\ 0,\]
and therefore when
\[\vec{\nabla}\times\vec{\nabla}\times\vec{B}_{a}+\alpha\vec{B}_{a}\ =\ 0, \tag{22}\]
to
\[\left[-\alpha\,\left(1+\beta\kappa\right)\,+\,\gamma\,\right]\, \vec{B}_{a}\ +\] \[\left(\,-\alpha\kappa+\beta+\gamma\kappa\,\right)\,\vec{\nabla} \times\vec{B}_{a}\ =\ 0. \tag{23}\]
Thus, if we enforce the algebraic relations
\[-\alpha\left(1+\beta\kappa\right)\,+\,\gamma = 0,\] \[-\alpha\kappa+\beta+\gamma\kappa = 0, \tag{24}\]
then \(\vec{B}\) will satisfy the generalized London equation Eqn. (19).
We are given \(\beta,\gamma\) and seek to solve for \(\alpha,\kappa\). From Eqn. (24) we derive a quadratic equation for \(\kappa\), that is solved by
\[\kappa\ =\ \frac{-\beta\,\pm\,i\sqrt{4\gamma-\beta^{2}}}{2\gamma}. \tag{25}\]
Here we see that the realistic situation \(4\gamma-\beta^{2}>0\) brings in complex numbers. Having got \(\kappa\) in terms of \(\beta,\gamma\) it is straight-forward to further arrive at
\[\alpha\ =\ \gamma\,+\,\frac{\beta}{2}\left(-\beta\,\mp\,i\sqrt{4\gamma-\beta^{2}} \,\right). \tag{26}\]
Thus, we have two complex conjugate solutions for our auxiliary "inverse square penetration depth". An immediate physical implication is that we can expect oscillations to accompany the exponential damping of fields (and currents) we usually encounter as we penetrate a superconductor.
Ultimately we want real solutions of our field equations. Since our auxiliary equations are linear, we can simply use the real and imaginary parts of their solutions. Note that since the auxiliary equations are complex conjugates of one another, they both lead us to the same real fields.
This solution scheme embodies in a precise form the concept of field-current mixing that we anticipated heuristically. Indeed, since \(\vec{\nabla}\times\vec{B}_{a}\) is the London diamagnetic current associated to \(\vec{B}_{a}\), the solution \(\vec{B}\) defined in Eqn. (20) is a linear combination of its field and current.
Finally let us note the curious fact that our construction in Eqn. (20) leads to (complex-valued) fields \(\vec{B}\) that, like \(\vec{B}_{a}\), satisfy Eqn. (22) and thus, in view of Eqn. (19),
\[\beta\,\vec{\nabla}\times\vec{B}\ =\ (\alpha-\gamma)\,\vec{B}, \tag{27}\]
or
\[\vec{\nabla}\times\vec{B}\ =\frac{1}{2}\left(-\beta\,\mp\,i\sqrt{4\gamma- \beta^{2}}\right)\,\vec{B}. \tag{28}\]
In the critical case \(4\gamma-\beta^{2}=0\) we get force-free fields.
### Slab geometry
A relatively simple, yet physically significant and mathematically transparent situation to analyze is the half-space or slab geometry. Thus, we imagine our superconductor to fill the half-space \(x>0\) while in the remaining half-space we have a constant magnetic field
\[\vec{B}^{\rm ext.}=\hat{z}B_{0},\ \ \ \ \ (x<0). \tag{29}\]
Here we match onto the solution of the ordinary London equation Eqn. (22) proportional to
\[\vec{B}_{a}\ =\ \hat{z}B_{0}e^{-\sqrt{\alpha}x}, \tag{30}\]
whose curl
\[\vec{\nabla}\times\vec{B}_{a}\ =\ \hat{y}B_{0}\sqrt{\alpha}e^{-\sqrt{\alpha}x} \tag{31}\]
is the diamagnetic screening current. We will build our solution using this auxiliary form with \(B_{0}=1\).
We invoke Eqn. (26) to choose
\[\sqrt{\alpha}\ =\ \sqrt{\gamma-\frac{\beta^{2}}{4}}+i\frac{\beta}{2}\ \equiv\ p+iq, \tag{32}\]
where positive square roots are understood throughout. Note that in order to get solutions that fall off as \(x\rightarrow\infty\) we must take roots with a positive real part; the remaining choice associated with the \(\mp\) in \(\alpha\) has no effect on our final result, and we have chosen the lower sign.
With that preparation, we can use our solution scheme to solve the generalized London equation Eqn. (19). After some algebra, we arrive at
\[B_{z} = e^{-px}\cos qx,\] \[B_{y} = -e^{-px}\sin qx, \tag{33}\] \[B_{z} = -e^{-px}\sin qx,\] \[B_{y} = -e^{-px}\cos qx, \tag{34}\]
for the real and imaginary parts. Finally, to insure continuity of the magnetic field at the boundary we take the linear combination of these two solutions that has \(B_{y}(0)=0\) and \(B_{z}(0)=B_{0}\). This gives us the magnetic field
\[B_{z}(x) = B_{0}e^{-px}\cos qx,\] \[B_{y}(x) = -B_{0}e^{-px}\sin qx, \tag{35}\]
and the current
\[j_{z}(x) = \frac{\partial B_{y}}{\partial x}=-B_{0}e^{-px}(-p\sin qx+q\cos qx),\] \[j_{y}(x) = -\frac{\partial B_{z}}{\partial x}=-B_{0}e^{-px}(p\cos qx+q\cos qx), \tag{36}\]
inside the superconductor.
As anticipated, this solution displays three qualitatively new features relative to the usual London (\(\beta=0\)) case. Most profoundly, there is a current running parallel to the external field direction. Secondly, there is an induced perpendicular magnetic field in the interior. Thirdly, the interior fields and currents have an oscillatory character.
We can also consider a slab, occupying the region \(0\leq x\leq a\). We can use the same solution inside the superconductor, matched to the constant field
\[B_{z}(x\geq a) = -B_{0}e^{-pa}\sin qa,\] \[B_{y}(x\geq a) = -B_{0}e^{-pa}\cos qa. \tag{37}\]
This represents the result of applying \(B=B_{0}\hat{z}\) at \(x\leq 0\) and a rotated (and damped) field at \(x\geq a\). Here we see a close analogy, in magnetostatics, to optical activity (accompanied by absorption).
### Intrinsic Solenoid (Trapped Flux and Model Vortex)
We can notionally insert a solenoid into our superconductor, and ask that be generated self-consistently by screening currents within the superconductor. This is of interest in itself, and also allows us to anticipate and model, within the relatively simple and parameter-sparse context of the (modified) London equations, properties of trapped flux and of quantized magnetic vortices.
In cylindrical coordinates, our solenoid is defined by:
\[\vec{B}(r,z,\phi)\ =\ B_{0}\hat{z},\ \ \ r\leq R, \tag{38}\]
and it joins on to a solution of Eqn. (19) for \(r\geq R\). The self-consistency condition is that there are no singular surface currents, which we enforce by demanding continuity of the tangential magnetic fields at \(r=R\).
The auxiliary solution of the ordinary London equation brings in the Bessel function \(K_{1}\), which dies exponentially at infinity:
\[B_{a}(r,z,\phi)\ =\ B_{0}\hat{z}\,\frac{K_{1}(\sqrt{\alpha}r)}{K_{1}(\sqrt{ \alpha}R)},\ \ \ r\geq R. \tag{39}\]
Figure 1: Azimuthal and longitudinal components of the magnetic field and the currents inside the superconductor with a solenoid for \(\beta=15\), \(\gamma=57\), \(B_{0}=100\) and \(R=1\), displaying a (spatially damped) longitudinal current. Notably, the applied, constant and longitudinal magnetic field in the solenoid gives rise to an azimuthal field component inside the superconductor, that further oscillates in a damped fashion, just as the longitudinal component, as a function of \(r\).
From the curl of this field, we infer the azimuthal diamagnetic screening current. With this starting point, we can invoke the machinery of our solution schema to generate solutions of our generalized London equation in the exterior (superconducting) region. Details are spelled out in Appendix B.
Let us mention how the qualitative novelties we observed above get manifested here: within the superconductor we find longitudinal current flows \(j_{z}\), azimuthal magnetic fields \(B_{\phi}\), and oscillatory behavior (possibly damped) of all the fields and currents as functions of \(r\). These general findings are displayed in Fig. 1 for some exemplary values of \(\beta\) and \(\gamma\).
It is possible to consider cylindrical shells, fields imposed from the outside, and so forth, both analytically and numerically (and, presumably, experimentally), based on the same ideas.
### Sphere Geometry
Another accessible problem, often considered to be the paradigmatic "Meissner effect", is the superconducting sphere exposed to a constant external magnetic field. The auxiliary reference problem here was solved and presented by London himself in his classic book [6]. One finds, for spheres much larger than the penetration depth, the field cancelled or "expelled" by azimuthal diamagnetic screening currents near the surface of the sphere. Magnetic field lines with the superconductor get routed to within that penetration region, as displayed by Fig. 2. In additional to the imposed field, one finds a calculable magnetic dipole arising from the circulating currents.
Since the auxiliary solution is expressed in terms of exponentials, we can use our solution schema to generate completely explicit solutions of the modified equations in terms of exponentials and trigonometric functions. The behavior of the magnetic field is illustrated in Fig. 2 for various penetration depths, and calculational details along with the full solution are spelled out in Appendix B.
In Fig. 3, we see that the currents that run along the surface of the sphere and return in a (squashed) toroidal fashion give no external moment, but represent a form of what are called anapole moments in the literature. An anapole moment, or a magnetic toroidal moment, is a term in the multipole expansion of the electromagnetic field that violates both P and T symmetry. The anapole moment is given by
\[T_{i}=\frac{1}{10}\int\left[r_{i}\left(\vec{r}\cdot\vec{j}\right)-2r^{2}J_{i} \right]d^{3}x, \tag{40}\]
where \(r_{i}\) are the Cartesian coordinates, and \(\vec{j}\) the current. Using the explicit solutions of the magnetic field inside the sphere (see Appendix B for their explicit appearance), the current is given by \(\vec{j}_{\text{sphere}}=\vec{\nabla}\times\vec{B}_{\text{sphere}}^{\text{in}}\), it can be shown that \(T_{x}\) and \(T_{y}\) are identically zero. However, \(T_{z}\) is finite and for the two cases illustrated in Fig. 2 given by,
\[T_{z}\left(\beta=15,\gamma=65\right) =-15.5804, \tag{41}\] \[T_{z}\left(\beta=20,\gamma=200\right) =2182.52. \tag{42}\]
Figure 2: Expulsion of the magnetic field by a superconducting unit sphere for representative parameter values. The magnetic field forms closed loops inside the superconducting sphere. Panels (a) and (d) display how the parity violating azimuthal component oscillates inside the sphere, for polar angle \(\theta=\frac{\pi}{3}\). The field configuration gives rise to an anapole moment, as discussed in the text.
Appendix B contains a general expression for the \(z\)-component of the anapole moment.
## III Actively chiral magnetism
Chiral materials generally, and not only superconductors, support a \(\vec{B}\cdot(\vec{\nabla}\times\vec{B})\) term in the effective Lagrangian density. At optical frequencies, it gives optical activity. One should also consider its effect in magneto-statics. In that context, the most relevant terms are
\[-{\cal L} = \frac{1}{2\mu}B^{2}+\kappa\vec{B}\cdot\left(\vec{\nabla}\times \vec{B}\right)+\frac{\lambda}{2}\left(\vec{\nabla}\times\vec{B}\right)^{2}. \tag{43}\]
This Lagrangian density bears a close family resemblance to the effective Lagrangian we used in our analysis of chirally active superconductivity.. Indeed, the substitution \(\vec{\nabla}\times\vec{B}\rightarrow\vec{A}\) brings it into the form we analyzed above.
A simple heuristic consideration suggests the common occurrence of this term in organic (chiral) diamagnetism. Imagine a helical molecule segment along which diamagnetic currents can flow. If we apply magnetic flux along the axis of the helix, the diamagnetic current that along the helix, regarded vectorially, will have a component along the applied magnetic field direction. But the current sources \(\vec{\nabla}\times\vec{B}\), so this correlation represents a \(\vec{B}\cdot\vec{\nabla}\times\vec{B}\). If the magnetic field is off-axis only its component along the axis will be operative, but the same logic applies. Helices of the same chirality will all contribute with the same sign.
We can write the energy density as
\[\frac{1}{4}\left[\left(\frac{1}{\mu}+\frac{\kappa}{\sqrt{\lambda \mu}}\right)\left(\vec{B}+\sqrt{\lambda\mu}\vec{\nabla}\times\vec{B}\right)^{2}\right.\] \[\left.+\left(\frac{1}{\mu}-\frac{\kappa}{\sqrt{\lambda\mu}} \right)\left(\vec{B}-\sqrt{\lambda\mu}\vec{\nabla}\times\vec{B}\right)^{2} \right], \tag{44}\]
from which we see that we have stability for
\[\frac{\lambda}{\mu}\geq\kappa^{2}, \tag{45}\]
and of course \(\mu,\lambda\geq 0\).
Alternatively we can consider plane waves
\[\vec{A} = \left(\begin{array}{c}1\\ \pm i\\ 0\end{array}\right)e^{ikz}, \tag{46}\]
with energy density proportional to
\[\frac{1}{\mu}\pm 2\kappa k+\lambda k^{2}. \tag{47}\]
Here positivity of the energy density (for real \(k\)) leads again to Eqn. (45). As long as \(\lambda>0\) we can stabilize the model by adding a \((B^{2})^{2}\) term.
Taking \(\lambda,\kappa,k>0\), the minimum energy density taking into account only quadratic terms occurs, according to Eqn. (47), at
\[k_{c}=\frac{\kappa}{\lambda}, \tag{48}\]
with the lower choice of sign, where it has the value
\[\varepsilon \equiv \frac{1}{\mu}-\frac{\kappa^{2}}{\lambda}. \tag{49}\]
When \(\varepsilon<0\) we can lower the energy by bringing in fields of the form.
\[B_{x} \propto \cos k_{c}z,\] \[B_{y} \propto \sin k_{c}z,\] \[B_{z} = 0. \tag{50}\]
Note that this instability does not require \(\mu<0\), _i.e.,_ instability toward ordinary (_i.e._, \(k=0\)) ferromagnetism, though of course it includes that possibility. To describe a stable system, we must bring in a \((B^{2})^{2}\) penalty term that limits the amplitude of the spontaneously developed structure. Eqn. (50) represents fields that are constant within \(z=\mathrm{const}\). planes whose direction rotates periodically within the \(x-y\) plane as \(z\) varies. In other words, we see here magnetic fields characteristic of optical activity frozen in time.
A point of interest is that because the coupling \(\frac{\lambda}{2}(\vec{\nabla}\times\vec{B})^{2}\) required to stabilize the \(\kappa\vec{B}\cdot(\vec{\nabla}\times\vec{B})\) "optical activity" term contains a larger number of derivatives than the minimal Maxwell \(\frac{1}{2\mu}B^{2}\) term it cannot be regarded as a uniformly small perturbation, even when \(\lambda\) is small. Indeed, it changes the nature of the boundary value problem. If we fix the gauge \(\vec{\nabla}\cdot\vec{A}=0\) and (for simplicity) set \(\kappa=0\), varying \({\cal L}\) leads to the equation
\[\left[\frac{1}{\mu}+\lambda\left(\nabla^{2}\right)\right]\,\nabla^{2}\vec{A} = 0. \tag{51}\]
Any harmonic vector field will solve this equation, and very naively one might expect that for small \(\lambda\) such fields provide excellent approximate solutions in general. But in regions where \(\vec{A}\) varies rapidly the second term comes
Figure 3: Currents in spherical coordinates inside a superconducting unit sphere generated by the external and constant magnetic, for polar angle \(\theta=\frac{\pi}{3}\). Note that the radial component of the current vanishes at the boundary. No current escapes from the sphere, but there is an anapole moment.
in strongly, and other solutions may be physically appropriate. In particular, the boundary conditions one must apply at surfaces where the value of \(\lambda\) changes (notably, at boundaries between our chiral magnetic materials and conventional materials, or empty space) one must enforce additional continuity of normal derivatives, beyond what is usually required for harmonic fields, and this may require substantial adjustments of candidate solutions near the boundary. Closely related mathematical issues arise in hydrodynamics, where they have stimulated the development of boundary layer theory.
The solution schema we used in the \(\vec{A}\cdot\vec{B}\) problem continues to work in this new context, so we can leverage known solutions of Eqn. (51) to get solutions of the full (\(\kappa\neq 0\)) equations in various geometries.
## IV Phenomenology of axion wind materials
The axion wind term
\[\mathcal{L}_{w}\ \propto\ \vec{\beta}_{i}\cdot\epsilon^{i\alpha\beta\gamma}A_{ \alpha}F_{\beta\gamma}\ \propto\ \vec{\beta}\cdot(\vec{A}\times\vec{E})-A_{0}\vec{\beta}\cdot\vec{B} \tag{52}\]
breaks rotation symmetry (and time-reversal symmetry), so its phenomenology is more complicated. Here we confine ourselves to a simple but important general observation and a calculation of its effect on wave propagation. For simplicity, we will take \(\vec{\beta}=\beta\hat{z}\).
Whereas the term \(\propto\vec{A}\cdot\vec{B}\) brings in spatial derivatives in all directions, and thereby involves the material as a whole, the term \(\vec{\beta}\cdot(\vec{A}\times\vec{E})\) does not bring in derivatives in the \(\hat{z}\) direction, and in that sense reduces to a stack of planar terms. (Of course, other terms in the Lagrangian will link the planes.) Within each plane, we have in effect a 2+1 dimensional Chern-Simons theory. Indeed, through the alternative formulation \(\epsilon^{3\alpha\beta\gamma}A_{\alpha}\partial_{\beta}A_{\gamma}\) of our term, we see that in isolation it represents literally a stack of independent 2+1 dimensional Chern-Simons theories. This interpretation indicates that in a bounded sample there will be massless surface modes, as in the quantum Hall effect, whose anomalies cancel the surface terms that otherwise obstruct full gauge invariance of the bulk theory.
From the Lagrangian
\[\mathcal{L}=\frac{1}{2}E^{2}-\frac{1}{2}B^{2}+\frac{\beta}{2}\left[\hat{z} \cdot\left(\vec{A}\times\vec{E}\right)-A_{0}B_{z}\right], \tag{53}\]
we derive the equations of motion
\[\vec{\nabla}\cdot\vec{B} = 0,\] \[\vec{\nabla}\times\vec{E} = -\frac{\partial\vec{B}}{\partial t},\] \[\vec{\nabla}\cdot\vec{E} = \beta B_{z},\] \[\vec{\nabla}\times\vec{B} = \frac{\partial\vec{E}}{\partial t}-\beta\hat{z}\times\vec{E}. \tag{54}\]
Thus we have effective charge and current densities
\[\rho_{e} = \beta B_{z}, \tag{55}\] \[j_{e} = -\beta\hat{z}\times\vec{E}, \tag{56}\]
that automatically satisfy the conservation equation.
Having imposed the corresponding variational equation, we can set the non-dynamical field \(A_{0}=0\).
For plane waves propagating in the \(\hat{z}\) direction we use the _ansatz_\(A=\epsilon\,e^{i(kz-\omega t)}\). We find that transverse circular polarizations lead to uncoupled dispersion relations, in the forms
\[\vec{A}=\left(\begin{array}{c}1\\ \pm i\\ 0\end{array}\right)e^{i(kx-\omega t)}, \tag{57}\] \[0=\omega^{2}-k^{2}\mp\beta\omega. \tag{58}\]
Here again we find that the different circular polarizations travel at different velocities, so there is optical activity. Unlike before, however, here we have no zone of instability.
For plane waves propagating in the \(\hat{x}\) direction we use the _ansatz_\(A=\epsilon e^{i(kx-\omega t)}\). One eigenmode does not feel the new term at all:
\[\vec{A}=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)e^{i(kx-\omega t)}, \tag{59}\] \[0=\omega^{2}-k^{2}. \tag{60}\]
The other eigenmode is more unusual. It is
\[\vec{A}\ =\ \left(\begin{array}{c}i\frac{\beta}{\omega}\\ 1\\ 0\end{array}\right)e^{i(kx-\omega t)}, \tag{61}\]
with the dispersion relation
\[0\ =\ \omega^{2}-k^{2}-\beta^{2}. \tag{62}\]
This dispersion relation is characteristic of a massive excitation. The polarization, which is never transverse, becomes increasingly longitudinal at low frequencies.
It is straightforward, though lengthy, to calculate the general case \(\vec{k}=k(\sin\theta\hat{x}+\cos\theta\hat{z})\). Here we only record the dispersion relation,
\[\omega^{2}\ =\ k^{2}\,+\frac{\beta^{2}}{2}\pm\beta\sqrt{k^{2}\cos^{2}\theta+ \frac{\beta^{2}}{4}}. \tag{63}\]
## V Microscopic models based on semimetals
Previously we have discussed in general terms several situations where we can expect emergent Chern-Simons terms to arise. In this section we will describe in detail a specific construction, inspired by the appearance
of anomalies in quantum field theories, that gives rise to them.
The P or T breaking terms we seek are quadratic in gauge potentials and involve cross products. A natural way for those to arise, in Feynman graphs, is through the structure \(\mathrm{Tr}[(\vec{a}\cdot\vec{\sigma})(\vec{b}\cdot\vec{\sigma})(\vec{c}\cdot \vec{\sigma})]\) when integrating over fermions in vacuum polarization loops. This is similar to the structure that gives rise to chiral anomalies in relativistic theories in even dimensions. Inspired by these thoughts, we are led to consider Dirac and Weyl materials. In Ref. [7] such terms were argued to arise in non-centrosymmetric s-wave superconductors when Zeeman couplings are included.
### Axlon Wind Terms in Weyl Semimetals
In Weyl semimetals (WSMs) the different chiralities can be split in momentum space by imposing stress. The vector separating the nodes is axial, and thus provides a candidate for the axial vector \(\vec{k}\). We now show, by an explicit calculation, that such a splitting does lead to a \(\sim\vec{k}\cdot(\vec{A}\times\vec{E})\) term in \(S_{\mathrm{eff}}\)[8].
Consider an inversion symmetric Weyl semimetal with Hamiltonian
\[\mathcal{H} =\sum_{a=\pm}\sum_{\vec{q}}c_{a}^{\dagger}(\vec{q}_{a})h_{a}( \vec{q}_{a})c_{a}\] \[=\sum_{\vec{q}}\tilde{\Psi}^{\dagger}(\vec{q})\begin{pmatrix}h_{- }(\vec{q}_{-})&0\\ 0&h_{+}(\vec{q}_{+})\end{pmatrix}\tilde{\Psi}(\vec{q}), \tag{64}\]
where
\[h_{\pm}(\vec{q}^{\pm}) =q_{x}^{\pm}\sigma^{x}+q_{y}^{\pm}\sigma^{y}+q_{z}^{\pm}\sigma^{z }-\mu_{\pm}\sigma^{0}, \tag{65}\] \[\vec{q}_{\pm} =\left[k_{x}\pm b_{x},k_{y}\pm b_{y},\mp\left(k_{z}\pm b_{z} \right)\right], \tag{66}\]
and the wave functions are defined as
\[\tilde{\Psi}(\vec{q})=\begin{pmatrix}c_{-}(\vec{q})\\ c_{+}(\vec{q})\end{pmatrix}. \tag{67}\]
Here, we have set the product of Planck's constant and the Fermi energy, \(\hbar v_{F}=1\). In the following, to simplify notation the sum over momenta will be left implicit.
The Hamiltonian can be re-written as
\[\mathcal{H} =\tilde{\Psi}^{\dagger}\left(\begin{pmatrix}\vec{k}+\vec{b}\\ 0&\sigma^{z}\left[-\left(\vec{k}-\vec{b}\right)\cdot\vec{\sigma}+\mu_{+}\right] \sigma^{z}\end{pmatrix}\tilde{\Psi}\right)\] \[=\Psi^{\dagger}\left(\begin{pmatrix}\vec{k}+\vec{b}\\ 0&-\left(\vec{k}-\vec{b}\right)\cdot\vec{\sigma}+\mu_{+}\end{pmatrix}\Psi, \tag{68}\]
with \(\Psi(\vec{q})=(c_{-}(\vec{q}),\sigma^{z}c_{+}(\vec{q}))^{\mathrm{T}}\). Using the chiral representation of the Weyl matrices,
\[\gamma^{0}=\begin{pmatrix}0&\sigma^{0}\\ \sigma^{0}&0\end{pmatrix},\quad\vec{\gamma}=\begin{pmatrix}0&\vec{\sigma}\\ -\vec{\sigma}&0\end{pmatrix},\quad\gamma^{5}=\begin{pmatrix}-\sigma^{0}&0\\ 0&\sigma^{0}\end{pmatrix}. \tag{69}\]
the Hamiltonian becomes
\[\mathcal{H}=\tilde{\Psi}\left[-\vec{\gamma}\cdot\vec{k}+\vec{\gamma}\cdot \vec{b}\gamma^{5}+\gamma^{0}\left(P_{-}\mu_{-}+P_{+}\mu_{+}\right)\right]\Psi, \tag{70}\]
with \(P_{\pm}=\frac{1}{2}\left(\mathbf{1}\pm\gamma^{5}\right)\) and \(\tilde{\Psi}=\Psi^{\dagger}\gamma^{0}\). Heisenbergs equations of motion then yields,
\[\left[\not{\partial}-\vec{\gamma}\cdot\vec{b}\gamma^{5}-\gamma^{0}\left(P_{- }\mu_{-}+P_{+}\mu_{+}\right)\right]\Psi=0\,, \tag{71}\]
from which we can extract the Lagrangian,
\[\mathcal{L}=\tilde{\Psi}\left[i\not{\partial}-\vec{\gamma}\cdot\vec{b}\gamma ^{5}-\gamma^{0}\left(P_{-}\mu_{-}+P_{+}\mu_{+}\right)\right]\,. \tag{72}\]
Coupling to the electromagnetic field via minimal coupling gives
\[\mathcal{L}=\bar{\Psi}\left[i\not{\partial}+\not{A}-\vec{\gamma}\cdot\vec{b} \gamma^{5}-\gamma^{0}\left(P_{-}\mu_{-}+P_{+}\mu_{+}\right)\right]\Psi\,, \tag{73}\]
By shifting the zeroth component of the gauge field \(A\) such that \(A_{0}\to A_{0}-\frac{1}{2}\left(\mu_{+}+\mu_{-}\right)\), and defining \(b_{0}=\frac{1}{2}\left(\mu_{-}-\mu_{+}\right)\), it can be written as,
\[\mathcal{L}=\bar{\Psi}\left(i\not{\partial}+\not{A}+\not{b}\gamma^{5}\right)\Psi. \tag{74}\]
Thus, the shifts in momentum and energy of the Weyl nodes can be recast into an effective axial gauge field (of a very special form) in the Lagrangian. A constant \(b_{\mu}\) can be written \(b_{\mu}=\partial_{\mu}\xi\), with \(\xi=b_{\mu}x^{\mu}\). Naively such a \(b_{\mu}\) can be eliminated by the chiral rotation \(\psi\to e^{i\xi\gamma^{3}}\psi\). However. a non-zero contribution to the effective action arises from the triangle anomaly. After a partial integration it becomes (see _e.g._ Ref. [9] ),
\[S_{\mathrm{top}}=\frac{1}{8\pi^{2}}\int d^{4}x\epsilon^{\mu\nu\rho\sigma} \operatorname{Tr}\left(b_{\mu}A_{\nu}\partial_{\rho}A_{\sigma}\right). \tag{75}\]
Here, \(\epsilon^{\mu\nu\rho\sigma}\) is the Levi-Civita symbol with convention \(\epsilon^{0123}=1\). We will now break down the individual components of this variation and write them out explicitly to understand their physical significance.
Figure 4: Given an effective axial vector potential, two-photon response includes the classic VVA triangle anomaly graph.
\[S_{\rm top} = \frac{1}{8\pi^{2}}\int d^{4}x\left(\epsilon^{0ijk}b_{0}A_{i}\partial _{j}A_{k}+\epsilon^{i0jk}b_{i}A_{0}\partial_{j}A_{k}+\epsilon^{ij0k}b_{i}A_{j} \partial_{0}A_{k}+\epsilon^{ijk0}b_{i}A_{j}\partial_{k}A_{0}\right) \tag{76}\] \[= \frac{1}{8\pi^{2}}\int d^{4}x\left(b_{0}\vec{A}\cdot\vec{B}-A_{0} \vec{b}\cdot\vec{B}+\vec{b}\cdot\vec{A}\times\vec{E}\right).\]
Here, we again used that \(b_{\mu}\) are constants, in order to perform integration by parts.
At first glance it seems that an \(\vec{A}\cdot\vec{B}\) term appears already here. But for Weyl semimetals in equilibrium \(\mu_{+}-\mu_{-}=0\), and \(b_{0}\) therefore vanishes in view Eqn. (76) [10]. Note that this term is directly related to the chiral magnetic effect (CME) which is known to be a non-equilibrium effect in WSMs [11; 12]. We are left with
\[S_{\rm top}=\frac{1}{8\pi^{2}}\int d^{4}x\left(\vec{b}\cdot\vec{A}\times\vec {E}-A_{0}\vec{b}\cdot\vec{B}\right), \tag{77}\]
where \(\vec{b}\), as promised, is the (constant) axial vector that gives rise to the \(\vec{b}\cdot\vec{A}\times\vec{E}\)-term in \(S_{\rm eff}\). Note that there is an additional term \(\sim\vec{b}\cdot\vec{B}\) whose strength is determined by \(A_{0}\).
### \(k_{0}\vec{A}\cdot\vec{B}\) Term From a Flux Biased Weyl Superconductor
It is more difficult to generate an \(S_{\rm top}[\vec{E},\vec{B}]\) with an \(\vec{A}\cdot\vec{B}\) term by this mechanism. Formally, such a term corresponds to an imbalance between Weyl nodes of positive and negative chirality, which is disallowed in a system with bounded energy bands according to fermion doubling theorems.
Such imbalances are, however, known to arise in several contexts including Floquet systems [13; 14; 15], and situations where effects that only gap out nodes of one particular chirality are present. Examples of the latter include chirality locking charge density waves [16] and certain Weyl superconductors [17; 18; 19]. To illustrate that terms on the form \(\vec{A}\cdot\vec{B}\) indeed do arise in physically realizable systems, we will show it for a single Weyl node Hamiltonian originating from a flux-biased Weyl superconductor.
For the sake of completeness, we first outline how the considered system is set up, referring the reader to the very insightful original work in Ref. [17] for further details. The parent Hamiltonian is taken as,
\[\mathcal{H} = \sum_{\vec{k}}\Psi_{\vec{k}}^{\dagger}H(\vec{k})\Psi_{\vec{k}}, \quad\Psi_{\vec{k}}=\left(\psi_{\vec{k}},\sigma^{9}\psi_{-\vec{k}}^{\dagger} \right), \tag{78}\] \[H(\vec{k}) = \begin{pmatrix}H_{0}(\vec{k}-e\vec{A})&\Delta_{0}\\ \Delta_{0}^{*}&-\sigma^{y}H_{0}^{*}(-\vec{k}-e\vec{A})\sigma^{y}\end{pmatrix},\] (79) \[H_{0}(\vec{k}) = \sum_{i}\tau^{z}\sigma^{i}\sin k_{i}+\tau^{0}\left(\beta\sigma^{ z}-\mu\sigma^{0}\right)+m_{\vec{k}}\tau^{x}\sigma^{0},\] (80) \[m_{\vec{k}} = m_{0}+\sum_{i}(1-\cos k_{i}). \tag{81}\]
\(\tau_{i}\) and \(\sigma_{i}\), \(i=x,y,z\), are orbital and spin Pauli matrices, respectively, \(\beta\) a magnetization, \(\mu\) a chemical potential, \(\vec{A}\) the electromagnetic vector potential, and \(\Delta_{0}\) the BCS-pairing potential. A system with this Hamiltonian can be obtained by stacking alternate layers of topological insulators and conventional BCS superconductors, which introduces a coupling between the Weyl nodes centered at \((0,0,\pm\sqrt{\beta^{2}-m_{0}^{2}})\) of \(H_{0}\) and their corresponding particle-hole conjugates. The number of ungapped particle-hole conjugate Weyl cones determine the topological phase of the Weyl superconductor, phases that can be accessed in an externally controllable way, as explained in Ref. [17]. This is done by coupling a _flux-bias circuit_ to the material slab. This will alter the Hamiltonian, as the flux bias will be taken into account as a constant shift in the vector potential \(\vec{A}\). In this particular setup of, the flux bias gives a contribution \(\Lambda/e\) to \(A_{z}\). As a result, the Weyl nodes appear at \((0,0,b_{\pm})\) and \((0,0,-b_{\pm})\), with
\[b_{\pm}^{2}=\left(\sqrt{\beta^{2}-m_{0}^{2}}\pm\Lambda\right)^{2}-\Delta_{0}^{ 2}, \tag{82}\]
meaning that when
\[\left|\sqrt{\beta^{2}-m_{0}^{2}}-\Lambda\right|<\Delta_{0}<\sqrt{\beta^{2}-m_{ 0}^{2}}+\Lambda, \tag{83}\]
one of the two pairs of particle-hole conjugate Weyl nodes are gapped out, leaving only the nodes with positive chirality.
In the regime where Weyl nodes of only one chirality are gapped out, we are left with two nodes of the same chirality. The contribution from the current from these respective nodes will then add up, instead of cancelling one another (as for nodes of opposite chirality). The two
nodes are described by essentially the same Hamiltonian,
\[\tilde{\mathcal{H}}_{\alpha}=\sum_{\vec{k}}\tilde{\psi}_{\vec{k}}^{\dagger}\left[ \sum_{i}\nu_{i}(\delta k_{i}-Q_{i}A_{i})\sigma^{i}-Q_{0}\mu\sigma^{0}\right] \tilde{\psi}_{\vec{k}}, \tag{84}\]
where, \(\vec{k}=(0,0,b_{\alpha})+\delta\vec{k}\), \(\vec{\nu}=(1,1,-\kappa)\), \(Q_{0}=\kappa\), \(\vec{Q}=e(\kappa,\kappa,1/\kappa)\), and
\[\kappa\approx\sqrt{1-\frac{\Delta_{0}^{2}}{(\beta+\Lambda)^{2}}}. \tag{85}\]
Using standard procedure, this can be recast as a Lagrangian,
\[\mathcal{L}_{\alpha}=\tilde{\psi}_{\alpha}(i\tilde{\not{\partial}}+\tilde{ \mathcal{A}}_{\alpha})\psi_{\alpha}, \tag{86}\]
where \(\tilde{\not{\partial}}=\gamma^{0}\partial_{0}-\nu_{i}\gamma^{i}\partial_{i}\) and the left-handed chiral gauge field is \(\tilde{A}_{\alpha}=(A_{0}-Q_{0}\mu\), \(\nu_{i}Q_{i}A_{i}-\nu_{i}b_{\alpha;i})\).
We now set \(A_{0}=0\), and take \(\vec{b}_{\alpha}\) to be constant, but allow for an \(\vec{x}\)-dependent chemical potential \(\mu(\vec{x})\). The left-handed chiral anomaly is \(\tilde{\partial}_{\mu}J^{\mu}=-\frac{e^{2}}{32\pi^{2}}\epsilon^{\mu\nu\sigma \lambda}\tilde{F}_{\mu\nu}\tilde{F}_{\sigma\lambda}\), where \(\tilde{F}\) is the field strength related to \(A\). From this we can extract the topological part of the chiral current,
\[J^{i}=-\frac{e^{2}}{16\pi^{2}}\epsilon^{i0jk}\tilde{A}_{0}\tilde{\partial}_{i }\tilde{A}_{k}\,. \tag{87}\]
where \(\tilde{A}_{0}=-Q_{0}\mu\). This expression is well defined and finite also for constant \(\mu\), but a direct calculation of \(J^{i}\) will give a logarithmically divergent result, as is shown in the Appendix A. This is no contradiction with (87), since for constant \(\mu\)\(\partial_{i}J^{i}=0\), so there can be an extra contribution that is not determined by the anomaly. We believe that the limiting procedure \(\partial_{i}\mu\to 0\), which parallels the derivation in Ref. [17], gives the correct result. It is moreover consistent with a physically motivated subtraction procedure which is explained in Appendix A.
We now express the components of the current in terms of the original fields \(\vec{A}\) and \(\vec{B}\). Recalling that we have two nodes, and that \(J\equiv J_{em}=2J_{L}\), we get (for details, see Appendix A),
\[J^{x} =\frac{\kappa e^{2}\mu}{4\pi^{2}}\left[B^{x}+\left(1-\kappa^{2} \right)\partial_{z}A_{y}\right],\] \[J^{y} =\frac{\kappa e^{2}\mu}{4\pi^{2}}\left[B^{y}-\left(1-\kappa^{2} \right)\partial_{z}A_{x}\right], \tag{88}\] \[J^{z} =\frac{\kappa e^{2}\mu}{4\pi^{2}}B^{z}\,.\]
Integrating Eqn. (88) we get the topological action,
\[S_{\rm top}[A]=-\frac{\kappa e^{2}\mu}{8\pi^{2}}\int d^{3}x\,\left[\vec{A} \cdot\vec{B}+2\left(1-\kappa^{2}\right)A_{x}\partial_{z}A_{y}\right], \tag{89}\]
which is the central result of this section.
The electromagnetic response in this regime can be expected to support a chiral magnetic effect (CME), _i.e.,_ a current in the direction on the externally applied magnetic field, which is often read off directly from the anomaly equation. In our approach the dynamics is determined by considering the full action, as in Sect. II, where we do find qualitative effects of that kind. In prospective experiments it would be natural to take the field perpendicular to the stacked planes, and to require that the slabs should be thinner than the penetration depth, so that we our averaging over layers is justified.
The term \(\sim\vec{A}\cdot\vec{B}\) in Eqn. (89) is gauge invariant up to a surface term, which is not true for the term \(\sim A_{x}\partial_{z}A_{y}\). Using Eqn. (85) this term is \(\sim\frac{\Delta_{0}^{2}}{(\beta+\Lambda)}\) and could be made small in certain parameter ranges. Note however that since we have a superconductor, gauge invariance can be restored by the substitution \(2e\vec{A}\to 2e\vec{A}+\vec{\nabla}\phi\) where \(\phi\) is the phase of the superconducting order parameter. Making this substitution in Eqn. (89) we generate both a higher derivative term \(\sim\phi\,\partial_{x}\partial_{y}\partial_{z}\phi\) and a coupling between \(\vec{A}\) and \(\phi\).
We note that the expression for \(J^{z}\) is the same as derived in Ref. [17]. In an Appendix A we shall give an alternative derivation of Eqn. (89) which is more in line with the derivation in this paper and does not rely on the chiral anomaly.
We should note that there is a hidden assumption in the preceding derivation, in that we neglected the possibility of adding a Wess-Zumino counter term, but took the perturbative result at face value. Usually this ambiguity is fixed by requiring gauge invariance, but in our superconducting context it is less clear. Still it is reassuring that the simple argument based on the anomaly gives the same result as the direct calculation in Appendix A if we there make a physically motivated subtraction inspired by the treatment in Ref. [17]. Importantly, it was shown there that the expression for \(J^{z}\) (which was the only one considered in that paper) agrees numerically with a direct calculation in the parent eight band theory, Eqn. (78). Although we believe that the connection to the anomaly cannot be a coincidence, we presently lack a sound theoretical argument excluding any Wess-Zumino term.
## VI Summary and outlook
We have motivated the consideration of emergent Chern-Simons interactions in 3+1 dimensions, displayed some of their striking phenomenological consequences, and indicated how they might be realized in plausible material systems.
These interactions arise in two forms, \(\vec{A}\cdot\vec{B}\) and \(\hat{n}\cdot\vec{A}\times\vec{E}-A_{0}\hat{n}\cdot\vec{B}\). Highlights for the first type include a precise form of current-field mixing, non-dissipative complex penetration depths, and anapole moments. This type generically arises in \(s\)-wave superconductors that break parity symmetry. It is closely related, at a mathematical level, to optical activity. We suggest that it can
be achieved in chirally purified organic superconductors or, more generally, by chiral doping or through parity-violating crystalline structures. We also calculated its appearance in a microscopic model based on superconducting Weyl semimetals, where it arises through a mechanism closely related to the chiral anomaly of quantum field theory.
Highlights for the second type include massless boundary excitations and unusual bulk effects in electromagnetic wave propagation. This type appears to be comparatively easy to achieve in the Weyl semimetal context.
We also extended the optical activity analogy in a slightly different direction - formally, towards higher rather than lower orders of gradient - to define "actively chiral" magnets. These do not bring in Chern-Simons terms, but physical intuition and mathematical techniques carry over. This extension frees us of the constraints of superconductivity (notably, cryogenic temperatures and magnetic screening) and opens up many possibilities for realization in organic magnetism and metamaterials, as well as naturally occurring materials.
The next, crucial development for this work will be to bring its mathematical paradise down to earth in concrete material realizations.
**Acknowledgement**: MS and THH thanks Julia Hannukainen and Jens H Bardarson for insightful discussions. MS acknowledges fruitful discussions with Emil J. Bergholtz at an early stage of this project. FW is supported by the U.S. Department of Energy under grant Contract Number DE-SC0012567, by the European Research Council under grant 742104, and by the Swedish Research Council under Contract No. 335-2014-7424.
In this appendix we recalculate (89) using straight-forward diagrammatic perturbation theory. To get the pertinent static electromagnetic response function to quadratic order, we evaluate the Feynman diagram in Fig. 5. to linear order in \(\mu\) and \(\vec{q}\). The relevant integrand is,
\[\mathrm{Tr}\left[\sigma^{i}\frac{1}{\vec{k}}\frac{1}{\vec{k}}\sigma^{j}\frac{1} {\vec{k}-\not{q}}+\sigma^{i}\frac{1}{\vec{k}}\sigma^{j}\frac{1}{\vec{k}-\not{ q}}\frac{1}{\vec{k}-\not{q}}\right]=\frac{1}{k^{2}}\,\mathrm{Tr}\left[\sigma^{i} \sigma^{j}(\not{k}-\not{q})+\sigma^{i}\not{k}\sigma^{j}\right]\frac{1}{(k-q)^ {2}}=-\frac{2i}{k^{4}}\epsilon^{ijk}q_{k}+O(q), \tag{100}\]
where we used the 4-vector notation \(k_{\mu}=(\omega,\vec{k})\) and the cyclic property of the trace. Restoring \(\mu\), the energy and momentum integrals, and the minus sign due to the fermion loop, we get the polarization tensor,
\[\Pi^{ij}=2i\mu q_{k}\epsilon^{ijk}\int\frac{d^{3}k}{(2\pi)^{3}}\int\frac{d \omega}{2\pi}\frac{1}{(\omega^{2}+k^{2})^{2}}. \tag{101}\]
The appearance of \(\Pi^{ij}\) makes it clear that the contribution from two nodes of positive chirality symmetrically shifted from the origin of momentum space, will indeed add up instead of cancel. This can be seen by shifting \(k\to k\pm b\), and make an expansion for small \(b\).
The integral in (101) is logarithmically divergent both in the infrared and the ultraviolet. The infrared divergence is clearly a result of expanding to order \(\mu\), and is regulated if the full \(\mu\)-dependence is kept. However, in order to follow as close as possible to Ref. [17] we shall instead regulate the infrared by a finite temperature \(T=1/k_{B}\beta\). The ultraviolet divergence is a consequence of that naively there is a contribution to the current from the whole Dirac sea, as will be discuss below.
Using the standard Euclidean formulation of finite temperature QFT, the polarization tensor (101) at temperature \(T\), becomes,
\[\Pi^{ij}=2i\mu q_{k}\epsilon^{ijk}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{\beta }\sum_{n}\frac{1}{(\omega_{n}^{2}+k^{2})^{2}}, \tag{102}\]
where \(T=\frac{1}{k_{\mathrm{B}}\beta}\), \(k_{\mathrm{B}}\) is the Boltzmann constant, and \(\omega_{n}=\frac{2\pi}{\beta}\left(n+\frac{1}{2}\right)\) are the fermionic Matsubara poles. Rewriting the sum in (102) as
\[\frac{1}{\beta}\sum_{n}\frac{1}{(\omega_{n}^{2}+k^{2})^{2}}=\frac{1}{\beta} \left(-\frac{1}{2k}\right)\frac{\partial}{\partial k}\sum_{n}\frac{1}{\omega_ {n}^{2}+k^{2}}=\frac{1}{\beta}\left(-\frac{1}{2k}\right)\frac{\partial}{ \partial k}\frac{\beta^{2}}{(2\pi)^{2}}\sum_{n}\frac{1}{(n+\frac{1}{2})^{2}+ \left(\frac{\beta k}{2\pi}\right)^{2}}. \tag{103}\]
and using \(\sum_{n=-\infty}^{\infty}\frac{1}{(n+\frac{1}{2})^{2}+(Ax)^{2}}=\frac{\pi}{ Ax}\tanh(A\pi x)\), the finite \(T\) polarization tensor becomes,
\[\frac{1}{\beta}\sum_{n}\frac{1}{(\omega_{n}^{2}+k^{2})^{2}}=\left(-\frac{1}{2 k}\right)\frac{\partial}{\partial k}\frac{1}{2k}\tanh\left(\frac{\beta k}{2} \right)=\frac{\mathrm{sech}^{2}\left(\frac{\beta k}{2}\right)[\sinh\left(\beta k \right)-\beta k]}{8k^{3}}. \tag{104}\]
Since the integrand is isotropic, we use spherical coordinates and \(\int\frac{d^{3}k}{(2\pi)^{3}}=\int\frac{dk}{2\pi^{2}}k^{2}\), and after inserting the Jacobian factor relating \(\bar{A}\) to \(A\) in the integral measure, we get
\[\Pi^{ij}=2iQ_{0}\mu q_{k}\epsilon^{ijk}\frac{1}{8\pi^{2}}\int_{0}^{\infty} \frac{dk}{|\nu_{x}\nu_{y}\nu_{z}|}\left[\frac{2\tanh\left(\frac{\beta k}{2} \right)}{k}-\frac{\beta}{2}\mathrm{sech}^{2}\left(\frac{\beta k}{2}\right) \right]. \tag{105}\]
Figure 5: Feynman diagrams for the polarization tensor to leading order in \(\mu\).
The first term Eqn. (101) is divergent, while the second is convergent. Subtracting the divergent piece, we arrive at the final result for the polarization tensor,
\[\Pi^{ij}=\frac{2iQ_{0}\mu\nu_{k}q_{k}\epsilon^{ijk}}{|\nu_{x}\nu_{y}\nu_{z}|}\left( -\frac{1}{8\pi^{2}}+\text{Div}\right). \tag{102}\]
This subtraction will be discussed below. The finite part of the effective action, after substituting \(q_{k}\rightarrow-i\partial_{k}\), becomes
\[S_{eff}[A]=\frac{1}{2}\nu_{i}Q_{i}A_{i}\Pi^{ij}\nu_{j}Q_{j}A_{j}=-\frac{Q_{0} \mu\nu_{i}Q_{i}\nu_{j}Q_{j}\nu_{k}}{8\pi^{2}|\nu_{x}\nu_{y}\nu_{z}|}A_{i} \partial_{k}A_{j}\epsilon^{ijk}=-\frac{1}{8\pi^{2}}\operatorname{sgn}\left( \nu_{x}\nu_{y}\nu_{z}\right)Q_{0}\mu Q_{i}Q_{j}A_{i}\partial_{k}A_{j}\epsilon ^{ijk}\,, \tag{103}\]
and finally the the current
\[J^{l} =\frac{\delta S_{eff}[A]}{\delta A_{l}}\] \[=-\operatorname{sgn}\left(\nu_{x}\nu_{y}\nu_{z}\right)\frac{\mu Q _{i}Q_{j}}{8\pi^{2}}\partial_{k}A_{j}\epsilon^{ijk}\frac{\delta A_{i}}{\delta A _{l}}\] \[=-\operatorname{sgn}\left(\nu_{x}\nu_{y}\nu_{z}\right)\frac{\mu Q _{i}Q_{j}}{8\pi^{2}}\partial_{k}A_{j}\epsilon^{ijk}, \tag{104}\]
where the index \(l\) in the right hand side is not summed over.
Inserting the system parameters from the Hamiltonian in (84), gives,
\[S_{top}[A] =-\frac{\mu\kappa}{8\pi^{2}}Q_{l}Q_{k}A_{i}\partial_{j}A_{k} \epsilon^{ijk}\] \[t =-\frac{\mu\kappa}{8\pi^{2}}\left[Q_{x}A_{x}\left(Q_{z}\partial_{ y}A_{z}-Q_{y}\partial_{z}A_{y}\right)+Q_{y}A_{y}\left(Q_{x}\partial_{z}A_{x}-Q_{z} \partial_{x}A_{z}\right)+Q_{z}A_{z}\left(Q_{y}\partial_{x}A_{y}-Q_{z}\partial _{y}A_{x}\right)\right]\] \[=-\frac{\mu e^{2}\kappa}{8\pi^{2}}\left[A_{x}\left(\partial_{y}A_ {z}-\kappa^{2}\partial_{z}A_{y}\right)+A_{y}\left(\kappa^{2}\partial_{z}A_{x} -\partial_{x}A_{z}\right)+A_{z}\left(\partial_{x}A_{y}-\partial_{y}A_{x} \right)\right]\] \[=-\frac{\mu e^{2}\kappa}{8\pi^{2}}\left\{A_{x}\left[\partial_{y} A_{z}-\partial_{z}A_{y}+\left(1-\kappa^{2}\right)\partial_{z}A_{y}\right]+A_{y} \left[\partial_{z}A_{x}-\partial_{x}A_{z}-\left(1-\kappa^{2}\right)\partial_ {z}A_{x}\right]+A_{z}\left(\partial_{x}A_{y}-\partial_{y}A_{x}\right)\right\}\] \[=-\frac{\mu e^{2}\kappa}{8\pi^{2}}\left[-\vec{A}\cdot\vec{B}+ \left(1-\kappa^{2}\right)\left(A_{y}\partial_{z}A_{x}-A_{x}\partial_{z}A_{y} \right)\right]\] \[=\frac{\mu e^{2}\kappa}{8\pi^{2}}\left[\vec{A}\cdot\vec{B}+ \left(1-\kappa^{2}\right)\left(A_{y}\partial_{z}A_{x}-A_{x}\partial_{z}A_{y} \right)\right]\,, \tag{105}\]
and finally restoring \(\hbar\), using the notation \(e^{*}=\kappa e\), the current becomes,
\[J^{l}=\frac{\delta S_{top}[A]}{\delta A_{l}}=\frac{\mu ee^{*}}{h^{2}}\left[B ^{l}+\left(1-\kappa^{2}\right)\left(\delta_{y}^{l}\partial_{z}A_{x}-\delta_{x }^{l}\partial_{z}A_{y}\right)\right]. \tag{106}\]
Using the gauge where \(\vec{A}=(0,Bx,\Lambda/e)\), the only surviving component of the current reads,
\[J^{z}=\frac{ee^{*}\mu}{h^{2}}B_{z}. \tag{107}\]
which agrees with the result of Ref. [17].
We now return to discuss the subtraction of the logarithmic UV divergence in the integral (101). Note that the second, convergent, term has support only close to the Fermi surface (_i.e._ at \(k\approx 0\)) while the first, UV divergent part gets contributions from the full Fermi sea. Since anomalies are expressed in IR phenomena, it is plausible to subtract the first term and keep the second. This is what was done in Ref. [17], where it is shown that it gives results consistent with a numerical simulation of the full Hamiltonian (78). It is reassuring that these various methods give the same result for the current, and satisfying that our derivations make a close connection to the chiral anomaly.
## Appendix B Details of Sphere and Cylinder Solutions
### Sphere
Following Eqn. (26) and Eqn. (32), we define \(\alpha\) and \(\sqrt{\alpha}\):
\[\alpha_{1} = \gamma\,+\,\frac{\beta}{2}\big{(}-\beta\,+\,i\sqrt{4\gamma-\beta^{2 }}\,\big{)}, \tag{35}\] \[\sqrt{\alpha}_{1} = \sqrt{\gamma-\frac{\beta^{2}}{4}}+i\frac{\beta}{2}=p+iq \tag{36}\]
The reference solution has the form:
\[B_{r} = \frac{2}{\alpha r^{3}}\left[\sinh\left(\sqrt{\alpha}r\right)- \sqrt{\alpha}r\cosh\left(\sqrt{\alpha}r\right)\right]\cos\theta, \tag{37}\] \[B_{\theta} = \frac{1}{\alpha r^{3}}\left[\left(1+\alpha r^{2}\right)\sinh \left(\sqrt{\alpha}r\right)-\sqrt{\alpha}\ r\cosh\left(\sqrt{\alpha}r\right) \right]\sin\theta,\] (38) \[B_{\phi} = \frac{1}{r^{2}}\left[\sinh\left(\sqrt{\alpha}r\right)-\sqrt{ \alpha}r\cosh\left(\sqrt{\alpha}r\right)\right]\sin\theta \tag{39}\]
and since we must take \(\alpha\) and \(\sqrt{\alpha}\) complex, we get complex fields, whose real and imaginary parts satisfy our equation Eqn. (19) separately, that we must combine in order to satisfy the boundary conditions.
The real part \(B_{r}\), after considerable algebra, reads
\[B_{r}^{r} = \frac{\cos\theta}{\gamma^{2}r^{3}}\times\] \[\sqrt{4\gamma-\beta^{2}}\cosh[\frac{1}{2}r\sqrt{4\gamma-\beta^{2} }]\left(\beta\sin\frac{\beta r}{2}-\gamma r\cos\frac{\beta r}{2}\right)-\sinh[ \frac{1}{2}r\sqrt{4\gamma-\beta^{2}}]\left(\left(\beta^{2}-2\gamma\right)\cos \frac{\beta r}{2}+\beta\gamma r\sin\frac{\beta r}{2}\right)\] \[B_{\theta}^{r} = \frac{\sin\theta}{2\gamma^{2}r^{3}}\times\] \[\sqrt{4\gamma-\beta^{2}}\cosh[\frac{1}{2}r\sqrt{4\gamma-\beta^{2 }}]\left(\beta\sin\frac{\beta r}{2}-\gamma r\cos\frac{\beta r}{2}\right)-\sinh [\frac{1}{2}r\sqrt{4\gamma-\beta^{2}}]\left(\left(\beta^{2}-2\gamma\left( \gamma r^{2}+1\right)\right)\cos\frac{\beta r}{2}+\beta\gamma r\sin\frac{ \beta r}{2}\right)\] \[B_{\phi}^{r} = \frac{\sin\theta}{2\gamma r^{2}}\times \tag{40}\] \[\sinh[\frac{1}{2}r\sqrt{4\gamma-\beta^{2}}]\left(2\gamma r\sin \frac{\beta r}{2}+\beta\cos\frac{\beta r}{2}\right)-\sqrt{4\gamma-\beta^{2}} \sin\frac{\beta r}{2}\cosh[\frac{1}{2}r\sqrt{4\gamma-\beta^{2}}]\]
and the imaginary part \(B^{i}\) reads
\[B_{r}^{i} = \frac{\cos\theta}{\gamma^{2}r^{3}}\times\] \[\cosh\left[\frac{1}{2}r\sqrt{4\gamma-\beta^{2}}\right]\left(\beta \gamma r\cos\frac{\beta r}{2}-\left(\beta^{2}-2\gamma\right)\sin\frac{\beta r }{2}\right)-\sqrt{4\gamma-\beta^{2}}\sinh\left[\frac{1}{2}r\sqrt{4\gamma-\beta ^{2}}\right]\left(\gamma r\sin\frac{\beta r}{2}+\beta\cos\frac{\beta r}{2}\right)\] \[B_{\theta}^{i} = \frac{\sin\theta}{2\gamma^{2}r^{3}}\times \tag{41}\] \[\cosh\left[\frac{1}{2}r\sqrt{4\gamma-\beta^{2}}\right]\left(2 \gamma\big{(}\big{(}\gamma r^{2}+1\big{)}-\beta^{2}\big{)}\sin\frac{\beta r}{2 }+\beta\gamma r\cos\frac{\beta r}{2}\right)\] \[-\sqrt{4\gamma-\beta^{2}}\sinh\left[\frac{1}{2}r\sqrt{4\gamma- \beta^{2}}\right]\left(\gamma r\sin\frac{\beta r}{2}+\beta\cos\frac{\beta r}{ 2}\right)\] \[B_{\phi}^{i} = \frac{\sin\theta}{2\gamma r^{2}}\times\] (42) \[\sqrt{4\gamma-\beta^{2}}\cos\frac{\beta r}{2}\sinh\left[\frac{1} {2}r\sqrt{4\gamma-\beta^{2}}\right]+\cosh\left[\frac{1}{2}r\sqrt{4\gamma- \beta^{2}}\right]\left(\beta\sin\frac{\beta r}{2}-2\gamma r\cos\frac{\beta r }{2}\right)\]
We note that this is the solution inside the superconducting sphere, and that it is thus valid only if \(r\leq R\). In order to satisfy the boundary condition \(j_{r}(R)=0\) we must take a linear superposition \(B^{r}+\eta B^{i}\) to get the full solution
inside the superconducting sphere. One finds
\[\eta=\frac{\sinh[\frac{1}{2}R\sqrt{4\gamma-\beta^{2}}]\left(2\gamma R\sin\frac{ \beta R}{2}+\beta\cos\frac{\beta R}{2}\right)-\sqrt{4\gamma-\beta^{2}}\sin \frac{\beta R}{2}\cosh[\frac{1}{2}R\sqrt{4\gamma-\beta^{2}}]}{\cosh[\frac{1}{2 }R\sqrt{4\gamma-\beta^{2}}]\left(2\gamma R\cos\frac{\beta R}{2}-\beta\sin \frac{\beta R}{2}\right)-\sqrt{4\gamma-\beta^{2}}\cos\frac{\beta R}{2}\sinh[ \frac{1}{2}R\sqrt{4\gamma-\beta^{2}}]}. \tag{100}\]
Thus, the final solution for the magnetic field inside the superconducting sphere reads
\[\vec{B}_{\rm sphere}^{\rm in}=\left(B_{r}^{r},B_{\theta}^{r},B_{\phi}^{r} \right)+\eta\left(B_{r}^{i},B_{\theta}^{i},B_{\phi}^{i}\right) \tag{101}\]
Outside the sphere. i.e., for \(r>R\), the magnetic field takes the usual dipole expansion form according to the solution of London, and reads [6],
\[B_{r}^{\rm out}(r,\theta)=\left(H_{0}+\frac{2M}{r^{3}}\right) \cos\theta, \tag{102}\] \[B_{\theta}^{\rm out}(r,\theta)=\left(-H_{0}+\frac{M}{r^{3}} \right)\sin\theta, \tag{103}\]
with,
\[M=\frac{R^{3}}{3}\left[\frac{B_{r}^{\rm in}(R,\theta)}{\cos \theta}+\frac{B_{\theta}^{\rm in}(R,\theta)}{\sin\theta}\right], \tag{104}\] \[H_{0}=\frac{1}{3}\left[\frac{B_{r}^{\rm in}(R,\theta)}{\cos \theta}-2\frac{B_{\theta}^{\rm in}(R,\theta)}{\sin\theta}\right]. \tag{105}\]
From the internal solutions, one can calculate the corresponding currents \(\vec{j}_{\rm sphere}=\vec{\nabla}\times\vec{B}_{\rm sphere}^{\rm in}\), which can be used to explicitly calculate the magnetic anapole moment, which is given by,
\[T_{i}=\frac{1}{10c}\int\left[r_{i}\left(\vec{r}\cdot\vec{j}\right)-2r^{2}J_{i} \right]d^{3}x. \tag{106}\]
In the present situation, \(T_{x}\) and \(T_{y}\) are both zero, but \(T_{z}\) takes a finite value. On the unit sphere, and in units where \(c=1\), it explicitly reads,
\[T_{z}=-2\pi\frac{-6\beta\left(1+\gamma\right)\cosh\left(2\delta^{-1}\right)+2 \left[-3\beta\left(\gamma-1\right)\cos\beta+\left(3\beta^{2}-\gamma^{2}\right) \sin\beta\right]+\beta\delta\left[-3\beta^{2}+\gamma\left(12+\gamma\right) \right]\sinh\left(2\delta^{-1}\right)}{3\gamma^{2}\left[\delta\cosh\left( \delta^{-1}\right)\left(2\gamma\cos\frac{\beta}{2}-\beta\sin\frac{\beta}{2} \right)-2\cos\frac{\beta}{2}\sinh\left(\delta^{-1}\right)\right]}, \tag{107}\]
where \(\delta=\frac{2}{\sqrt{4\gamma-\beta^{2}}}\) is the penetration depth.
### Cylinder
In case of a cylinder, we take real and imaginary parts of an ansatz consisting of modified Bessel functions of the second kind:
\[B_{r}=0,\quad B_{\phi}=\sqrt{\alpha}\kappa K_{1}(r\sqrt{\alpha}),\quad B_{z}= K_{0}(r\sqrt{\alpha})\]
As in the spherical case, we take the real and imaginary parts independently and find coefficients in order to satisfy boundary conditions \(B_{z}=B_{0},\quad B_{\phi}=0\), fixing \(R=1\). Since this brings in Bessel functions, it must be done numerically for specific values \(\beta\) and \(\gamma\).
We looked at a few cases to get numerical solutions and here are the answers that we get for the azimuthal components of the magnetic field.
For \(\beta=15,\gamma=57\) we find
\[\left(-4.77814B_{0}\right)\mathop{\rm Re}\left\{K_{1}\left[\left( \frac{15i}{2}+\frac{\sqrt{3}}{2}\right)r\right]\right\} \tag{108}\] \[+ \left(-2.14645B_{0}\right)\mathop{\rm Im}\left\{K_{1}\left[\left( \frac{15i}{2}+\frac{\sqrt{3}}{2}\right)r\right]\right\}\]
For \(\beta=10,\gamma=4\), we find
\[(-69.1343B_{0})\operatorname{Re}\left\{K_{1}\left[\left(5i+\sqrt{15} \right)r\right]\right\} \tag{16}\] \[+ (69.2327B_{0})\operatorname{Im}\left\{K_{1}\left[\left(5i+\sqrt{1 5}\right)r\right]\right\}\]
For \(\beta=1,\gamma=200\), we find
\[(2.06099\times 10^{6}B_{0})\operatorname{Re}\left\{K_{1}\left[ \left(\frac{i}{2}+\frac{\sqrt{799}}{2}\right)r\right]\right\} \tag{17}\] \[+ (3.61166\times 10^{6}B_{0})\operatorname{Im}\left\{K_{1}\left[ \left(\frac{i}{2}+\frac{\sqrt{799}}{2}\right)r\right]\right\}\]
|
2309.07559 | On the Spectral properties of Andrásfai Graphs | In this paper, we investigate the spectral properties of Andr\'asfai graphs,
focusing on key parameters: the second-largest and smallest eigenvalues, the
number of distinct eigenvalues, and the multiplicities of the eigenvalues 1 and
-1. The results obtained reveal insights into the connectivity, the structural
properties, and the spectral distinctiveness. | Bharani Dharan K, S Radha | 2023-09-14T09:42:01Z | http://arxiv.org/abs/2309.07559v4 | # Spectrum and Local metric dimension of Andrasfai Graph
###### Abstract
The Andrasfai graph \(And(k)\) for \(k\geq 1\) is a circulant and triangle-free graph on 3k-1 vertices. In this paper, we have determined the least eigenvalue, second largest eigenvalue and the number of distinct eigenvalues of the adjacency spectrum of \(And(k)\). Also, we have found out the local metric dimension of \(And(k)\).
keywords: Cayley Graph, Andrasfai Graph, Spectrum of a graph, Resolving Set, Local metric dimension. Pacs: 02.10.Ox, 02.10.Ud, 02.10.Yn Msc: 05C12, 05C25, 05C50 +
Footnote †: journal: Applied Mathematics and Computation
## 1 Introduction
Spectra of some graphs related to networks are intensively useful for identifying drugs for complex diseases [1], investigating global commerce networks [2], determining the stability of a system [3] and many other processes.
The majority of current research on the total number of unique eigenvalues of matrices and graphs has focused on the connections between matrices and graphs. Particularly, the idea of the number of distinct eigenvalues of different types of graphs has been studied in works like [4; 5; 6; 7; 8; 9]. Various matrices are discussed in other recent articles such as [10; 11; 12; 13] concerning the total number of distinct eigenvalues. Rachid Marsli in [14] showed how the diagonalizable matrix's rank can give data about the number of different eigenvalues.
The solution to Konigsberg's seven bridge problem [15] and the family of Cayley graphs has found amazing applications in many real-world systems[16]. The theory of networks[17], which investigates complicated interacting units represented as graphs, makes extensive use of the family of Cayley graphs. It has been shown that information about the network's structural properties as well as the dynamic behavior of the connected complex system may be found in the adjacency matrix of the network's spectra. For instance, the deterioration of the zero eigenvalues provides a hint to the architectural similarities of the underlying networks[18]. The largest eigenvalue encapsulates significant information and correlates with the entrainment of the diffusely coupled dynamical units in the network [19]. Additionally, sparse symmetric matrices [20] have provided a technique for calculating the statistical properties of the second largest eigenvalue and the constituents of the associated characteristic vector. It should be noted that many network structural features are computationally difficult to establish; nevertheless, spectral measures often provide valuable insights into the structure of networks and are computationally more straightforward to compute. For instance, computing several network growth aspects is computationally difficult. Fortunately, the second-largest eigenvalue, which can be calculated fast (in \(O(n3)\), where n is the number of network nodes), is strongly connected to these characteristics[21]. Furthermore, the algebraic connectedness of the network and the Fiedler eigenvalue, commonly known as the second largest eigenvalue of a network's Laplacian matrix, are the same [22].
The majority of research on the second largest eigenvalue \(\lambda_{2}\) is done for some arbitrary regular graphs, with infrequent studies conducted for other networks. All of these investigations opine that the eigenvalue \(\lambda_{2}\) of a graph can provide useful information into the characteristics of the underlying network topology[23]. In particular, \(\lambda_{2}\) of a graph determines whether a network is appropriate for a certain application, and a least value of \(\lambda_{2}\) is frequently desired [24].
The metric dimension of some circulant matrices and some families of Cayley graphs are given in [25, 26]. This leads to an extension for the respective dimensions of \(And(k)\), their complements, and the cartesian product of \(And(k)\), and a path over n vertices is determined in [27]. The concept of local dimension is first introduced in [28]. Then some different types of dimensions of some families of Cayley graphs are determined in [25].
In this paper we have studied the adjacency spectrum and the local metric dimension of the Andrasfai graphs[29, 30, 31].
In the second section of this paper, we provide some basic definitions and findings relevant to our work, in section 3, we stated and proved our main results on the spectra of Andrasfai graphs \(And(k)\), such as the number of distinct eigenvalues, the least eigenvalue, and the second largest eigenvalue and in section 4, the local metric dimension of \(And(k)\) is determined.
## 2 Preliminaries
Let G be graph on n vertices and let \(A_{G}\) be the adjacency matrix of the graph G, then the eigenvalues of \(A_{G}\) is usually denoted by \(\lambda_{0},\lambda_{1},\lambda_{2},\ldots,\lambda_{n-1}\) where \(\lambda_{0}\geq\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n-1}\) (i.e.) \(\lambda_{0},\lambda_{1}\) and \(\lambda_{n-1}\) are the largest, second largest and least eigenvalues of \(A_{G}\) respectively[31].
In our paper, we denote eigenvalues of \(A_{And(k)}\) by \(x_{0},x_{1},x_{2},\ldots,x_{n-1}\) where \(x_{i}\)'s need not be of the form \(x_{0}\geq x_{1}\geq x_{2}\geq\cdots\geq x_{n-1}\) (i.e.) \(x_{i}\)'s need not be equal to \(\lambda_{i}\), where \(0\leq i\leq n-1\).
**Definition 2.1**.: [28; 25; 26] For a graph G, any two vertices u and v of G, d(u,v) is the length of the shortest distance between them. Let \(W=\{w_{1},w_{2},w_{3},\ldots,w_{p}\}\) be an ordered collection of p different vertices of a graph G. Then the representation
\[r(v|W)=(d(v,w_{1}),d(v,w_{2}),d(v,w_{3}),\ldots,d(v,w_{p}))\]
is the metric representation of the vertex \(v\in V(G)\) concerning the set W. W is known as the resolving set or locating set if \(r(u|W)\neq r(v|W)\) for every pair u,v of neighboring vertices of G[27]. The local metric basis of G is thus the set W with the lowest cardinality among all the resolving sets of G, and that cardinality is known as the local metric dimension of G, indicated by \(dim_{L}(G)\).
**Definition 2.2**.: [29; 30; 31] Let \(k\geq 2\) be any natural number and take \(n=3k-1\). Andrasfai Graph is a Cayley graph over the additive group \(\mathbb{Z}\) mod n (i.e.) \(\text{Cay}(\mathbb{Z}_{n},S)\) where the generating set S={ x \(|\) x\(\in\mathbb{Z}_{n}\) and x\(\equiv\)1 mod 3 }. Generally, Andr\(\acute{a}\)sfai graphs are denoted by \(\text{\it And}(k)\).
For example, \(And(5)\) is shown in figure 1 given below.
**Lemma 2.1**: _[_32_]_ _Let u and v be any two vertices of \(And(k)\) with \(0\leq u,v\leq 3k-1\). If u is connected with v by an edge, then \(u-v\equiv\pm 1\ mod\ 3\)._
From [32], we also know that the \(And(k)\) is triangle-free and hence of girth 4, for all \(k\geq 2\)
**Lemma 2.2**: _[_32_]_ _Let \(v_{j}\) be a vertex in \(And(k)\) such that \(j=3l+i\), where \(j\neq 0\), \(0\leq l\leq k-1\) and \(0\leq i\leq k-1\). (If \(l=k-1\), then \(i=0\ or\ 1\)). Then_
\[d(v_{0},v_{j})=\begin{cases}2&\quad if\ i=0,2\\ 1&\quad if\ i=1\end{cases}\]
## 3 Main Results
### Structure of eigenvalues of Adjacency Matrix of And(k)
Let G=\(\mathit{And}(k)\), k\(\geq\)2. Then G must be k-regular and circulant. The adjacency matrix of G is
Figure 1: Andřřřasi Graph (And(5))
\[\begin{array}{cccccccccccccccc}0&1&2&3&4&5&.&.&.&3k-5&3k-4&3k-3&3k-2\\ 0&1&0&0&1&0&.&.&.&1&0&0&1\\ 1&0&1&0&0&1&.&.&.&0&1&0&0\\ 0&1&0&1&0&0&.&.&.&0&0&1&0\\ 0&0&1&0&1&0&.&.&.&1&0&0&1\\ 4&1&0&0&1&0&1&.&.&.&0&1&0&0\\ 5&0&1&0&0&1&0&.&.&.&0&1&0\\ 6&0&0&1&0&0&1&.&.&1&0&0&1\\ 7&1&0&0&1&0&0&.&.&0&1&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots&\vdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots\\ 3k-5&1&0&0&1&0&0&.&.&0&1&0&0\\ 3k-4&0&1&0&0&1&0&.&.&1&0&1&0\\ 3k-3&0&0&1&0&0&1&.&.&0&1&0&1\\ 1&0&0&1&0&0&.&.&0&0&1&0\end{array}\]
Then the eigenvalues \(x_{l}\)'s of the above matrix \(A_{And(k)}\) are given by
\[x_{l}=\sum_{j=0}^{3k-2}a_{j}\omega^{lj},\hskip 28.452756pt0\leq l\leq 3k-2 \tag{1}\]
where \(a_{j}\) is the \(j^{th}\) entry of the first row of \(A_{G}\) and \(\omega\) is the \((3k-1)^{th}\) root of unity.
From, (1) we have
\[\begin{array}{c}x_{l}=\omega^{l}+\omega^{4l}+\ldots..+\omega^{3k-5}+\omega^{ 3k-2}\\ (i.e.),\ x_{l}=\omega^{l}+\omega^{4l}+\ldots..+\omega^{-4l}+\omega^{-l}\end{array} \tag{2}\]
**Case 1** When k is even
\[\begin{array}{c}x_{l}=\omega^{l}+\omega^{-l}+\omega^{4l}+\omega^{-4l}+\cdots +\omega^{\frac{(3k-4)}{2}l}+\omega^{\frac{-(3k-4)}{2}l}\\ x_{l}=e^{\left(\frac{2l\pi i}{n}\right)}+e^{\left(\frac{-2l\pi i}{n}\right)}+e^{ \left(\frac{8l\pi i}{n}\right)}+e^{\left(\frac{-8l\pi i}{n}\right)}+\cdots+e^{ \left(\frac{2\left(\frac{(3k-4)}{2}\right)l\pi i}{n}\right)}+e^{-\left(\frac{2 \left(\frac{(3k-4)}{2}\right)l\pi i}{n}\right)}\\ x_{l}=&2cos\left(\frac{2l\pi}{n}\right)+2cos\left(\frac{8l\pi}{n}\right)+\ldots....+2cos\left(\frac{(3k-10)l\pi}{n}\right)+2cos\left(\frac{(3k-4)l\pi}{n}\right) \end{array}\]
\[x_{l}=2\left(cos\left(\frac{2(3(0)+1)l\pi}{n}\right)+cos\left(\frac{2(3(1)+1)l \pi}{n}\right)+\ldots\right. \tag{3}\] \[\left.\qquad\qquad\qquad\cdots+cos\left(\frac{2\left(3\left(\frac{ k-4}{2}\right)+1\right)l\pi}{n}\right)+cos\left(\frac{2\left(3\left(\frac{k-2}{2} \right)+1\right)l\pi}{n}\right)\right)\] \[x_{l}=2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)l \pi}{n}\right)\right]\]
**Case 2** When k is odd
\[x_{l}= \omega^{l}+\omega^{-l}+\omega^{4l}+\omega^{-4l}+\cdots+\omega^{ \frac{(3k-7)}{2}l}+\omega^{\frac{-(3k-7)}{2}l}+\omega^{\frac{(3k-1)}{2}l} \tag{4}\] \[x_{l}= e^{\left(\frac{2l\pi i}{n}\right)}+e^{\left(\frac{-2l\pi i}{n} \right)}+e^{\left(\frac{8l\pi i}{n}\right)}+e^{\left(\frac{-8l\pi i}{n} \right)}+\cdots+e^{\left(\frac{2\left(\frac{(3k-7)}{2}\right)l\pi i}{n} \right)}\] \[+e^{-\left(\frac{2\left(\frac{(3k-7)}{2}\right)l\pi i}{n} \right)}+e^{\left(\frac{2\left(\frac{(3k-1)}{2}\right)l\pi i}{n}\right)}\] \[x_{l}= 2cos\left(\frac{2l\pi}{n}\right)+2cos\left(\frac{8l\pi}{n} \right)+\ldots\ldots+2cos\left(\frac{(3k-7)l\pi}{n}\right)+(-1)^{l}\] \[x_{l}= 2\left(cos\left(\frac{2(3(0)+1)l\pi}{n}\right)+cos\left(\frac{2 (3(1)+1)l\pi}{n}\right)+\ldots\right.\] (5) \[\left.\qquad\qquad\cdots+cos\left(\frac{2\left(3\left(\frac{k-5 }{2}\right)+1\right)l\pi}{n}\right)+cos\left(\frac{2\left(3\left(\frac{k-3}{2} \right)+1\right)l\pi}{n}\right)\right)+(-1)^{l}\] \[x_{l}=2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+1)l \pi}{n}\right)\right]+(-1)^{l}\]
(3) and (5) are the two structures of the eigenvalues of \(And(k)\).
**Theorem 3.1**: _Let G be an Andrasfai Graph \(And(k)\). Then G has \(k+\lceil\frac{k}{2}\rceil\) different adjacency eigenvalues._
Proof. Before proving the main result, first, we will prove that,
\[x_{i}=x_{3k-1-i}\]
We know that,
\[x_{i} =\omega^{i}+\omega^{4i}+\omega^{7i}+\ldots\cdots+\omega^{(3k-5)i}+ \omega^{(3k-2)i}\] \[=\omega^{i}+\omega^{4i}+\omega^{7i}+\ldots\cdots+\omega^{-7i}+ \omega^{-4i}+\omega^{-i}\] \[=1.\omega^{i}+1.\omega^{4i}+1.\omega^{7i}+\ldots\cdots+1.\omega^ {-7i}+1.\omega^{-4i}+1.\omega^{-i}\] \[=\omega^{3k-1}\omega^{i}+\omega^{3k-1}\omega^{4i}+\omega^{3k-1} \omega^{7i}+\ldots\cdots+\omega^{3k-1}\omega^{-7i}+\omega^{3k-1}\omega^{-4i}+ \omega^{3k-1}\omega^{-i}\] \[=\omega^{3k-1+i}+\omega^{3k-1+4i}+\omega^{3k-1+7i}+\ldots\cdots+ \omega^{3k-1-7i}+\omega^{3k-1-4i}+\omega^{3k-1-i}\] \[=x_{3k-1-i}\]
Hence,
\[x_{i}=x_{3k-1-i} \tag{5}\]
**Case 1** k is even,
When k is even, \(n=3k-1\) is odd. Then \(A_{G}\) has an eigenvalue \(x_{0}\) with multiplicity one by (5).
To find the multiplicities of other eigenvalues:
Suppose that
\[\text{if }x_{l}=x_{m},\text{ for }l\neq m\text{ and }1\leq l,m\leq 3k-2\]
Then,
\[x_{l}-x_{m}=0\] \[2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)l\pi}{n} \right)\right]-2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)m\pi}{n }\right)\right]=0\]
\[2\left[cos\left(\frac{2l\pi}{n}\right)+cos\left(\frac{8l\pi}{n} \right)+\cdots+cos\left(\frac{(3k-4)l\pi}{n}\right)\right]\\ -2\left[cos\left(\frac{2m\pi}{n}\right)+cos\left(\frac{8m\pi}{n} \right)+\cdots+cos\left(\frac{(3k-4)m\pi}{n}\right)\right]=0\]
\[2\left[\left(cos\left(\frac{2l\pi}{n}\right)-cos\left(\frac{2m \pi}{n}\right)\right)+\left(cos\left(\frac{8l\pi}{n}\right)-cos\left(\frac{8 m\pi}{n}\right)\right)+\ldots\right.\\ \left.+\left(cos\left(\frac{(3k-4)l\pi}{n}\right)-cos\left(\frac {(3k-4)m\pi}{n}\right)\right)\right]=0\]
\[4\left[sin\left(\frac{\pi(l+m)}{n}\right)sin\left(\frac{\pi(m-l)}{n} \right)+sin\left(\frac{4\pi(l+m)}{n}\right)sin\left(\frac{4\pi(m-l)}{n}\right)+ \right.\\ \left.\cdots+sin\left(\frac{(3k-4)\pi(l+m)}{2n}\right)sin\left( \frac{(3k-4)\pi(m-l)}{2n}\right)\right]=0 \tag{6}\]
For \(\mathrm{l}\neq\mathrm{m}\) the above equation (6) becomes true only when \(m=-l\)
where \(\mathrm{m}\) and \(\mathrm{l}\) are the elements of the group \(\mathbb{Z}_{n}\). Then it can be written as
\(l=-m\) mod n,
(i.e) \(m=3k-1-l\)
Hence, \(x_{l}=x_{m}\) only when \(m=3k-1-l\)
Then all other eigenvalues have multiplicity two except \(x_{0}\).
**Case 2**\(\mathrm{k}\) is odd...
For an odd k, n is even. Then \(A_{G}\) has two eigenvalues \(x_{0}\) and \(x_{\frac{3k-1}{2}}\) with multiplicity one by 5. To find the multiplicities of other eigenvalues:
Suppose that
\[\mathrm{if}\ x_{l}=x_{m},\ \mathrm{for}\ l\neq m\]
Then,
\[x_{l}-x_{m}=0\]
\[2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+1)l\pi}{n}\right)\right] +(-1)^{l}-2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+1)m\pi}{n} \right)\right]-(-1)^{m}=0\]
\[2\left[cos\left(\frac{2l\pi}{n}\right)+cos\left(\frac{8l\pi}{n} \right)+\cdots+cos\left(\frac{(3k-7)l\pi}{n}\right)\right]-2\left[cos\left( \frac{2m\pi}{n}\right)\right.\\ \left.+cos\left(\frac{8m\pi}{n}\right)+\cdots+cos\left(\frac{(3k- 7)m\pi}{n}\right)\right]+(-1)^{l}-(-1)^{m}=0\]
\[2\left[\left(cos\left(\frac{2l\pi}{n}\right)-cos\left(\frac{2m\pi}{n}\right) \right)+\left(cos\left(\frac{8l\pi}{n}\right)-cos\left(\frac{8m\pi}{n}\right) \right)+\ldots\right.\\ \left.+\left(cos\left(\frac{(3k-7)l\pi}{n}\right)-cos\left(\frac {(3k-7)m\pi}{n}\right)\right)\right]+(-1)^{l}-(-1)^{m}=0\]
\[4\left[sin\left(\frac{\pi(l+m)}{n}\right)sin\left(\frac{\pi(m-l)}{n} \right)+sin\left(\frac{4\pi(l+m)}{n}\right)sin\left(\frac{4\pi(m-l)}{n}\right)+\ldots\right.\] \[\left.+sin\left(\frac{(3k-7)\pi(l+m)}{2n}\right)sin\left(\frac{(3 k-7)\pi(m-l)}{2n}\right)\right]+(-1)^{l}-(-1)^{m}=0\]
For l \(\neq\) m the above equation (7) becomes true only when \(m=-l\)
where l and m are the elements of the group \(\mathbb{Z}_{n}\). Then it can be written as \(l=-m\) mod n,
(i.e) \(m=3k-1-l\)
Hence, \(x_{l}=x_{m}\) only when \(m=3k-1-l\)
Then all other eigenvalues have multiplicity two except \(x_{0}\) and \(x_{\frac{3k-1}{2}}\).
From the above discussion,
when k is even, the number of different eigenvalues of \(A_{G}\) is equal to \(\frac{3k+1}{2}\),
when k is odd, the number of distinct eigenvalues of \(A_{G}\) is equal to \(\frac{3k}{2}\).
Without the loss of generality we can write the number of different eigenvalues of \(A_{G}\) is equal to \(k+\lceil\frac{k}{2}\rceil\).
**Theorem 3.2**: _For the adjacency matrix of an Andrasfai graph \(And(k)\), \(x_{k}=x_{2k-1}\) is the smallest eigenvalue._
Proof. The theorem can be proved by the method of contradiction.
Suppose that there exists \(l\neq 0,k,2k-1\), such that \(x_{l}<x_{k}\), then
\[x_{l}-x_{k}<0 \tag{8}\]
**Case 1** When k is even
\[x_{l}-x_{k}<0\]
\[2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)l\pi}{n}\right)\right] -2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)k\pi}{n}\right)\right] <0\]
\[2\left[cos\left(\frac{2l\pi}{n}\right)+cos\left(\frac{8l\pi}{n} \right)+\cdots+cos\left(\frac{(3k-4)l\pi}{n}\right)\right.\] \[\left.-cos\left(\frac{2k\pi}{n}\right)-cos\left(\frac{8k\pi}{n} \right)\cdots-cos\left(\frac{(3k-4)k\pi}{n}\right)\right]<0\]
\[2\left[\left(cos\left(\frac{2l\pi}{n}\right)-cos\left(\frac{2k \pi}{n}\right)\right)+\left(cos\left(\frac{8l\pi}{n}\right)-cos\left(\frac{8k \pi}{n}\right)\right)+\ldots\right.\] \[\left.+\left(cos\left(\frac{(3k-4)l\pi}{n}\right)-cos\left(\frac {(3k-4)k\pi}{n}\right)\right)\right]<0\]
\[4\left[sin\left(\frac{\pi(l+k)}{n}\right)sin\left(\frac{\pi(k-l)}{n} \right)+sin\left(\frac{4\pi(l+k)}{n}\right)sin\left(\frac{4\pi(k-l)}{n}\right) +\ldots\right.\] \[\left.+sin\left(\frac{(3k-4)\pi(l+k)}{2n}\right)sin\left(\frac{ (3k-4)\pi(k-l)}{2n}\right)\right]<0 \tag{9}\]
The L.H.S on the equation (9) is always non-negative for every \(l\in\mathbb{Z}_{3k-1}\) and contradicts (9).
Hence, \(x_{k}\) is the smallest eigenvalue of the adjacency matrix of a Andrasfai graph \(And(k)\) when k is even.
**Case 2** When k is odd
\[x_{l}-x_{k}<0\]
\[2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+1)l\pi}{n}\right)\right]+ (-1)^{l}-2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+1)k\pi}{n} \right)\right]-(-1)^{m}<0\]
\[2\left[cos\left(\frac{2l\pi}{n}\right)+cos\left(\frac{8l\pi}{n} \right)+\cdots+cos\left(\frac{(3k-7)l\pi}{n}\right)-cos\left(\frac{2k\pi}{n} \right)\right.\] \[\left.-cos\left(\frac{8k\pi}{n}\right)\cdots-cos\left(\frac{(3k- 7)k\pi}{n}\right)\right]+(-1)^{l}-(-1)^{k}<0\]
\[2\left[\left(cos\left(\frac{2l\pi}{n}\right)-cos\left(\frac{2k\pi}{n} \right)\right)+\left(cos\left(\frac{8l\pi}{n}\right)-cos\left(\frac{8k\pi}{n} \right)\right)+\ldots\right.\] \[\left.\qquad+\left(cos\left(\frac{(3k-7)l\pi}{n}\right)-cos \left(\frac{(3k-7)k\pi}{n}\right)\right)\right]+(-1)^{l}-(-1)^{k}<0\]
\[4\left[sin\left(\frac{\pi(l+k)}{n}\right)sin\left(\frac{\pi(k-l)}{n} \right)+sin\left(\frac{4\pi(l+k)}{n}\right)sin\left(\frac{4\pi(k-l)}{n}\right) +\ldots\right.\] \[\left.+sin\left(\frac{(3k-7)\pi(l+k)}{2n}\right)sin\left(\frac{(3 k-7)\pi(k-l)}{2n}\right)\right]+(-1)^{l}-(-1)^{k}<0 \tag{10}\]
The L.H.S of the equation (10) is always non-negative for every \(l\in\mathbb{Z}_{3k-1}\) and contradicts (10).
Hence, (by (5)) \(x_{k}=x_{2k-1}\) is the smallest eigenvalue of the adjacency matrix of a Andrasfai graph \(And(k)\).
Hence proved.
**Theorem 3.3**: _For the adjacency matrix of a Andrasfai graph \(And(k)\), \(x_{k-1}=x_{2k}\) is the second largest eigenvalue._
Proof. _This theorem can be proved by the method of contradiction. We know that \(x_{0}\) is the greatest eigenvalue as the graph is k-regular. So, we consider the eigenvalues other than \(x_{0}\)._
_Suppose that there exists \(l\neq 0,k-1,2k\), such that, \(x_{l}>x_{k-1}\),_
\[x_{l}-x_{k-1}>0 \tag{11}\]
_Case 1_ _When k is even_
\[x_{l}-x_{k-1}>0\]
\[2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)l\pi}{n}\right)\right] -2\left[\sum_{j=0}^{\frac{k-2}{2}}cos\left(\frac{2(3j+1)(k-1)\pi}{n}\right) \right]>0\]
\[2\left[cos\left(\frac{2l\pi}{n}\right)+cos\left(\frac{8l\pi}{n} \right)+\cdots+cos\left(\frac{(3k-4)l\pi}{n}\right)\right.\\ \left.-cos\left(\frac{2(k-1)\pi}{n}\right)-cos\left(\frac{8(k-1)\pi}{n} \right)\cdots-cos\left(\frac{(3k-4)(k-1)\pi}{n}\right)\right]>0\]
\[2\left[\left(cos\left(\frac{2l\pi}{n}\right)-cos\left(\frac{2(k-1)\pi}{n} \right)\right)+\left(cos\left(\frac{8l\pi}{n}\right)-cos\left(\frac{8(k-1)\pi} {n}\right)\right)+\\ \left.\cdots+\left(cos\left(\frac{(3k-4)l\pi}{n}\right)-cos\left( \frac{(3k-4)(k-1)\pi}{n}\right)\right)\right]>0\]
\[4\left[sin\left(\frac{\pi(l+(k-1))}{n}\right)sin\left(\frac{\pi( (k-1)-l)}{n}\right)+sin\left(\frac{4\pi(l+(k-1))}{n}\right)\right.\\ \left.sin\left(\frac{4\pi((k-1)-l)}{n}\right)+\cdots+sin\left( \frac{(3k-4)\pi(l+(k-1))}{2n}\right)\right.\\ \left.sin\left(\frac{(3k-4)\pi((k-1)-l)}{2n}\right)\right]>0 \tag{12}\]
_The L.H.S of the equation (12) is always non-positive for every \(l\in\mathbb{Z}_{3k-1}\) and contradicts (12). Hence, \(x_{k-1}=x_{2k}\) is the second largest eigenvalue of the adjacency matrix of the Andrasfai graph \(And(k)\) when k is even._
_Case 2_ _When k is odd_
\[x_{l}-x_{k-1}>0\]
\[2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+1)l\pi}{n} \right)\right]+(-1)^{l}-2\left[\sum_{j=0}^{\frac{k-3}{2}}cos\left(\frac{2(3j+ 1)(k-1)\pi}{n}\right)\right]-(-1)^{(k-1)}>0\]
\[2\left[cos\left(\frac{2l\pi}{n}\right)+cos\left(\frac{8l\pi}{n} \right)+\cdots+cos\left(\frac{(3k-7)l\pi}{n}\right)-cos\left(\frac{2(k-1)\pi }{n}\right)\right.\\ \left.-cos\left(\frac{8(k-1)\pi}{n}\right)\cdots-cos\left(\frac{ (3k-7)(k-1)\pi}{n}\right)\right]+(-1)^{l}-(-1)^{(k-1)}>0\]
\[2\left[\left(cos\left(\frac{2l\pi}{n}\right)-cos\left(\frac{2(k-1) \pi}{n}\right)\right)+\left(cos\left(\frac{8l\pi}{n}\right)-cos\left(\frac{8(k-1 )\pi}{n}\right)\right)+\ldots\right.\] \[+\left(cos\left(\frac{(3k-7)l\pi}{n}\right)-cos\left(\frac{(3k-7) (k-1)\pi}{n}\right)\right)\right]+(-1)^{l}-(-1)^{(k-1)}>0\]
\[4\left[sin\left(\frac{\pi}{n}(l+(k-1))\right)sin\left(\frac{\pi} {n}((k-1)-l)\right)+sin\left(\frac{4\pi}{n}(l+(k-1))\right)\right.\] \[\left.sin\left(\frac{4\pi}{n}((k-1)-l)\right)+\cdots+sin\left( \frac{(3k-7)\pi}{2n}(l+(k-1))\right)\right.\] \[\left.sin\left(\frac{(3k-7)\pi}{2n}((k-1)-l)\right)\right]+(-1)^ {l}-(-1)^{(k-1)}>0 \tag{13}\]
_The L.H.S of the above equation is always non-positive for every \(l\in\mathbb{Z}_{3k-1}\) and contradicts (13). Hence, by (5) \(x_{k-1}=x_{2k}\) is the second largest eigenvalue of the adjacency matrix of the Andrasfai graph \(And(k)\). Hence proved._
For example, consider k=5,
the graph And(5) is given in figure 1 and its adjacency matrix is
\[A_{And(5)}=\begin{array}{c}0\\ 0\\ 1\\ 2\\ 3\\ 4\\ 5\\ 6\\ 7\\ 8\\ 9\\ 10\\ 1\end{array}\begin{pmatrix}0&1&2&3&4&5&6&7&8&9&10\\ 0&1&0&0&1&0&0&1&0&0&1\\ 0&1&0&1&0&0&1&0&0&1&0\\ 0&1&0&1&0&0&1&0&0&1&0\\ 0&0&1&0&1&0&0&1&0&0&1\\ 0&1&0&0&1&0&1&0&0&1&0\\ 0&0&1&0&0&1&0&1&0&0&1\\ 0&1&0&0&1&0&0&1&0&1&0\\ 0&1&0&0&1&0&0&1&0&1&0\\ 0&0&1&0&0&1&0&0&1&0&1\\ 1&0&0&1&0&0&1&0&0&1&0\end{pmatrix}\]
The eigenvalues of the above matrix are
\[x_{0}=5\] \[x_{1}=x_{13}=0.356896\] \[x_{2}=x_{12}=0.445042\] \[x_{3}=x_{11}=0.692022\] \[x_{4}=x_{10}=1.801938\] \[x_{5}=x_{9}=-4.048917\] \[x_{6}=x_{8}=-1.24698\] \[x_{7}=-1\]
From the above, the number of distinct eigenvalues of \(And(5)\) is \(k+\lceil\frac{k}{2}\rceil\)=8. The largest, second largest and the least eigenvalues are given by \(\lambda_{0}=x_{0}=5\), \(\lambda_{1}=x_{k-1}=x_{2k}=1.801938\) and \(\lambda_{7}=x_{k}=x_{2k-1}=-4.048917\) respectively.
## 4 Local Metric Dimension of Andrasfai Graph
**Theorem 4.1**: _The local metric dimension of \(And(k)\) is 2 for all k\(\geq\)2 and the local metric basis for \(And(k)\) is \(\{v_{i},v_{i+1}\}\), where \(0\leq i\leq 3k-2\)._
Proof. Consider, G=\(And(k)\) where \(n=3k-1\). Let \(V(G)=\{v_{0},v_{1},v_{2},v_{3},\ldots,v_{3k-2}\}\) be the vertex set. It is easy to see that the value of the local metric dimension of \(And(1)\) which is equal to 1.
Now assume that \(k\geq 2\), we have to show that W=\(\{v_{i},v_{i+1}\}\) is a local metric basis for G.
From [32] and by lemma 2.2, for any two vertices \(v_{i},v_{j}\in V(And(k))\), we have
\[d(v_{i},v_{j})=\begin{cases}0&if\ i=j\\ 1&if\ |i-j|=1\ mod\ 3\\ 2&if\ |i-j|=2\ mod\ 3\ and\ |i-j|=0\ mod\ 3,\ where\ i\neq j\end{cases} \tag{14}\]
From [32] and by lemma 2.1, \(v_{i}\) and \(v_{j}\) are connected by an edge if and only if either \(|i-j|\equiv 1\ mod\ 3\) or \(|i-j|\equiv 2\ mod\ 3\).
Now, we have to find the metric representations of the vertices of G as follows.
For W=\(\{v_{i},v_{i+1}\}\), we have
\[r(v_{i}|W)=(0,1)\]
\[r(v_{i+1}|W)=(1,0)\]
Then for \(0\leq j\leq 3k-2\), and \(j\neq i,i+1\), we have
\[r(v_{j}|W)=\begin{cases}(1,2)&if\ |i-j|\equiv 1\ mod\ 3\\ (2,2)&if\ |i-j|\equiv 2\ mod\ 3\\ (2,1)&if\ |i-j|\equiv 0\ mod\ 3\end{cases}\]
Suppose that there exist two vertices \(v_{l}\) and \(v_{m}\) in And(k) which are adjacent and have the same W-metric representation. Then the following cases should exist.
**Case 1** Assume that \(r(v_{l}|W)=r(v_{m}|W)=(1,2)\)
We have
\[|i-l|\equiv 1\ mod\ 3\quad and\quad|i-m|\equiv 1\ mod\ 3\]
**Case 2** Assume that \(r(v_{l}|W)=r(v_{m}|W)=(2,2)\)
We have
\[|i-l|\equiv 2\ mod\ 3\quad and\quad|i-m|\equiv 2\ mod\ 3\]
**Case 3** Assume that \(r(v_{l}|W)=r(v_{m}|W)=(2,1)\)
We have
\[|i-l|\equiv 0\ mod\ 3\quad and\quad|i-m|\equiv 0\ mod\ 3\]
From the above three cases, if \(v_{l}\) is adjacent to \(v_{m}\) and if both \(v_{l}\) and \(v_{m}\) are adjacent to \(v_{i}\), then it forms a triangle which is a contradiction that the \(And(k)\) is a triangle-free graph.
Now we can conclude that all the neighbouring vertices in \(And(k)\) have unique metric representations for W=\(\{v_{i},v_{i+1}\}\) and \(dim_{L}(And(k))\leq 2\).
Now suppose that \(dim_{L}(And(k))=1\) such that W=\(\{v_{i}\}\) is a local metric basis for \(And(k)\), then the two adjacent vertices \(v_{i-2}\ and\ v_{i+2}\) will have the same metric representation as
\[r(v_{i-2}|W)=(1)\]
\[r(v_{i+2}|W)=(1)\]
This implies that the local metric dimension of \(And(k)\) is equal to 2.
## 5 Conclusion
In this paper, we have found out the number of different eigenvalues, the least eigenvalue and the second largest eigenvalue of the Andrasfai Graph \(And(k)\). We have also determined the exact value of the local metric dimension of \(And(k)\).
## Statements and Declarations
The authors declare that no funds, grants or other support were received during the preparation of this manuscript.
## Availability of Data and Materials
Data sharing does not apply to this article as no datasets were generated or analyzed during the current study.
## ORCID
Radha S [https://orcid.org/0000-0003-4634-3239](https://orcid.org/0000-0003-4634-3239)
Bharani Dharan K [https://orcid.org/0009-0001-8620-8050](https://orcid.org/0009-0001-8620-8050)
|
2309.06647 | Composing Control Barrier Functions for Complex Safety Specifications | The increasing complexity of control systems necessitates control laws that
guarantee safety w.r.t. complex combinations of constraints. In this letter, we
propose a framework to describe compositional safety specifications with
control barrier functions (CBFs). The specifications are formulated as Boolean
compositions of state constraints, and we propose an algorithmic way to create
a single continuously differentiable CBF that captures these constraints and
enables safety-critical control. We describe the properties of the proposed
CBF, and we demonstrate its efficacy by numerical simulations. | Tamas G. Molnar, Aaron D. Ames | 2023-09-13T00:02:41Z | http://arxiv.org/abs/2309.06647v2 | # Composing Control Barrier Functions for Complex Safety Specifications
###### Abstract
The increasing complexity of control systems necessitates control laws that guarantee safety w.r.t. complex combinations of constraints. In this letter, we propose a framework to describe compositional safety specifications with control barrier functions (CBFs). The specifications are formulated as Boolean compositions of state constraints, and we propose an algorithmic way to create a single continuously differentiable CBF that captures these constraints and enables safety-critical control. We describe the properties of the proposed CBF, and we demonstrate its efficacy by numerical simulations.
## I Introduction
Control designs with formal safety guarantees have long been of interest in engineering. Safety is often captured as constraints on the system's states that must be enforced for all time by the controller. To enable the satisfaction of state constraints with formal guarantees of safety, control barrier functions (CBFs) [1] have become a popular tool in nonlinear control design. As the complexity of safety-critical control systems increases, complex combinations of multiple safety constraints tend to arise, which creates a need for controllers incorporating multiple CBFs.
The literature contains an abundance of studies on multiple safety constraints. Some approaches directly used multiple CBFs in control design. For example, [2, 3] directly imposed multiple CBF constraints on the control input in optimization-based controllers; [4] synthesized controllers by switching between multiple CBFs whose superlevel set boundaries do not intersect; [5] investigated the compatibility of CBFs; [6] ensured feasible controllers with multiple CBFs; and [7, 8] addressed multi-objective constraints via barrier Lyapunov functions. These works usually linked safety constraints with AND logic: they maintained safety w.r.t. constraint 1 AND constraint 2, etc. Other approaches combined multiple constraints into a single CBF. These include versatile combinations, such as Boolean logic with both AND, OR and negation operations, which was established in [9, 10] by nonsmooth barrier functions. Similarly, [11] used Boolean logic to create a smooth CBF restricted to a safe set in the state space; [12] combined CBFs with AND logic via parameter adaptation; while [13, 14] used signal temporal logic to combine CBFs in a smooth manner.
In this letter, we propose a framework to capture complex safety specifications by CBFs. We combine multiple safety constraints via Boolean logic, and propose an algorithmic way to establish a single CBF for nontrivial safety specifications. Our method leverages both the Boolean logic from [9] and the smooth combination idea from [13], while merging the benefits of these approaches. We address multiple levels of logical compositions of safety constraints, i.e., arbitrary combinations of AND and OR logic, which was not established in [13], while we create a continuously differentiable CBF to avoid discontinuous systems like in [9]. Meanwhile, as opposed to [11], the stability of the safe set is guaranteed.
In Section II, we introduce CBFs and motivate multiple safety constraints. In Section III, we propose a single CBF candidate to address the compositions of multiple constraints. We also characterize its properties, and we use simulations to demonstrate its ability to address safety-critical control with nontrivial constraints. Section IV closes with conclusions.
## II Control Barrier Functions
We consider affine control systems with state \(x\in\mathbb{R}^{n}\), control input \(u\in\mathbb{R}^{m}\), and dynamics:
\[\dot{x}=f(x)+g(x)u, \tag{1}\]
where \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\) are locally Lipschitz. Our goal is to design a controller \(k:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), \(u=k(x)\) such that the closed-loop system:
\[\dot{x}=f(x)+g(x)k(x), \tag{2}\]
satisfies certain safety specifications.
If \(k\) is locally Lipschitz, then for any initial condition \(x(0)=x_{0}\in\mathbb{R}^{n}\) system (2) has a unique solution \(x(t)\), which we assume to exist for all \(t\geq 0\). We say that the system is safe if the solution \(x(t)\) evolves inside a _safe set_\(\mathcal{C}\). Specifically, we call (2) _safe w.r.t._\(\mathcal{C}\) if \(x_{0}\in\mathcal{C}\implies x(t)\in\mathcal{C}\)\(\forall t\geq 0\). We define the safe set as the 0-superlevel set of a continuously differentiable function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\):
\[\mathcal{C}=\{x\in\mathbb{R}^{n}:h(x)\geq 0\}, \tag{3}\]
assuming it is non-empty and has no isolated points. Later we extend this definition to more complex safety specifications.
The input \(u\) affects safety through the derivative of \(h\):
\[\dot{h}(x,u)=\underbrace{\nabla h(x)f(x)}_{L_{fh}(x)}+\underbrace{\nabla h(x) g(x)}_{L_{g}h(x)}u, \tag{4}\]
where \(L_{fh}\) and \(L_{g}h\) are the Lie derivatives of \(h\) along \(f\) and \(g\). By leveraging this relationship, _control barrier functions (CBFs)_[1] provide controllers with formal safety guarantees.
**Definition 1** ([1]).: Function \(h\) is a _control barrier function_ for (1) on \(\mathbb{R}^{n}\) if there exists \(\alpha\in\mathcal{K}_{\infty}^{\mathrm{c}}\)1 such that for all \(x\in\mathbb{R}^{n}\):
Footnote 1: Function \(\alpha:(-b,a)\rightarrow\mathbb{R}\), \(a,b>0\) is of extended class-\(\mathcal{C}\) (\(\alpha\in\mathcal{K}^{\mathrm{c}}\)) if it is continuous, strictly increasing and \(\alpha(0)=0\). Function \(\alpha:\mathbb{R}\rightarrow\mathbb{R}\) is of extended class-\(\mathcal{K}_{\infty}\) (\(\alpha\in\mathcal{K}_{\infty}^{\mathrm{c}}\)) if \(\alpha\in\mathcal{K}^{\mathrm{c}}\) and \(\lim_{r\rightarrow+\infty}\alpha(r)=\pm\infty\).
\[\sup_{u\in\mathbb{R}^{m}}\dot{h}(x,u)\geq-\alpha\big{(}h(x)\big{)}. \tag{5}\]
Note that the left-hand side of (5) is \(L_{f}h(x)\) if \(L_{g}h(x)=0\) and it is \(\infty\) otherwise. Thus, (5) is equivalent to2:
Footnote 2: In (5)-(6), strict inequality (\(>\)) can also be required rather than non-strict inequality (\(\geq\)) to ensure the continuity of the underlying controllers [15].
\[L_{g}h(x)=0\implies L_{f}h(x)+\alpha\big{(}h(x)\big{)}\geq 0. \tag{6}\]
Given a CBF, [1] established safety-critical control.
**Theorem 1** ([1], [16]).: _If \(h\) is a CBF for (1) on \(\mathbb{R}^{n}\), then any locally Lipschitz controller \(k\) that satisfies:_
\[\dot{h}\big{(}x,k(x)\big{)}\geq-\alpha\big{(}h(x)\big{)} \tag{7}\]
_for all \(x\in\mathcal{C}\) renders (2) safe w.r.t. \(\mathcal{C}\). Furthermore, if (7) holds for all \(x\in\mathbb{R}^{n}\), then \(\mathcal{C}\) is asymptotically stable._
Accordingly, if the controller \(k\) is synthesized such that (7) holds for all \(x\in\mathcal{C}\), then the closed-loop system evolves in the safe set: \(x_{0}\in\mathcal{C}\implies x(t)\in\mathcal{C}\ \forall t\geq 0\). Moreover, even if the initial condition is outside \(\mathcal{C}\), i.e., \(x_{0}\notin\mathcal{C}\), the system converges towards \(\mathcal{C}\) if (7) is enforced for all \(x\in\mathbb{R}^{n}\)[16].
Condition (7) is often used as constraint in optimization to synthesize safe controllers. For example, a desired but not necessarily safe controller \(k_{\mathrm{d}}:\mathbb{R}^{n}\to\mathbb{R}^{m}\) can be modified to a safe controller via the _quadratic program (QP)_:
\[k(x)=\operatorname*{argmin}_{u\in\mathbb{R}^{m}} \|u-k_{\mathrm{d}}(x)\|^{2}\] (8) s.t. \[\dot{h}(x,u)\geq-\alpha\big{(}h(x)\big{)},\]
also known as _safety filter_, which has explicit solution [17]:
\[k(x)=\begin{cases}k_{\mathrm{d}}(x)+\max\{0,\eta(x)\}\frac{L_{g} h(x)^{\top}}{\|L_{g}h(x)\|^{2}},&\text{if }L_{g}h(x)\neq 0,\\ k_{\mathrm{d}}(x),&\text{if }L_{g}h(x)=0,\\ \eta(x)=-L_{f}h(x)-L_{g}h(x)k_{\mathrm{d}}(x)-\alpha\big{(}h(x)\big{)}.\end{cases} \tag{9}\]
### _Motivation: Multiple CBFs_
Controller (9) guarantees safety w.r.t. a single safe set \(\mathcal{C}\). However, there exist more complex safety specifications in practice that involve compositions of multiple sets. Such general specifications are discussed in the next section. As motivation, we first consider the case of enforcing multiple safety constraints simultaneously, given by the sets:
\[\mathcal{C}_{i}=\{x\in\mathbb{R}^{n}:h_{i}(x)\geq 0\}, \tag{10}\]
and CBF candidates \(h_{i}\), with \(i\in I=\{1,2,\ldots,N\}\). Our goal is to maintain \(x(t)\in\mathcal{C}_{i}\ \forall t\geq 0\) and \(\forall i\in I\), that corresponds to rendering the _intersection_ of sets \(\mathcal{C}_{i}\) safe.
One may achieve this goal by enforcing multiple constraints on the input simultaneously, for example, by the QP:
\[k(x)=\operatorname*{argmin}_{u\in\mathbb{R}^{m}} \|u-k_{\mathrm{d}}(x)\|^{2}\] (11) s.t. \[\dot{h}_{i}(x,u)\geq-\alpha_{i}\big{(}h_{i}(x)\big{)}\quad\forall i \in I.\]
However, (11) may not be feasible (its solution may not exist) for arbitrary number of constraints. Even if each \(h_{i}\) is CBF and consequently each individual constraint in (11) could be satisfied by a control input, the same input may not satisfy all constraints. For the feasibility of (11) we rather require:
\[\sup_{u\in\mathbb{R}^{n}}\,\min_{i\in I}\Big{(}\dot{h}_{i}(x,u)+ \alpha_{i}\big{(}h(x)\big{)}\Big{)}\geq 0, \tag{12}\]
cf. (5), that can also be stated in a form like (6) as follows.
**Theorem 2**.: _The QP (11) is feasible if and only if:_
\[\sum_{i\in I}\!\lambda_{i}L_{g}h_{i}(x)\!=\!0\implies\!\!\sum_{i \in I}\!\lambda_{i}\Big{(}L_{f}h_{i}(x)\!\!+\!\!\alpha_{i}\big{(}h_{i}(x)\big{)} \Big{)}\!\geq\!0 \tag{13}\]
_holds for all \(x\in\mathbb{R}^{n}\) and \(\lambda_{i}\geq 0\)._
The proof is given in the Appendix.
This highlights that multiple CBFs are more challenging to use than a single one. With this as motivation, next we propose to encode all safety specifications into a single CBF.
## III Complex Safety Specifications
We propose a framework to construct a single CBF candidate that captures complex safety specifications, wherein safety is given by Boolean logical operations between multiple constraints. For example, the motivation above involves logical AND operation: \(x(t)\in\mathcal{C}_{1}\) AND... AND \(x(t)\in\mathcal{C}_{N}\) must hold. Next, we discuss arbitrary logical compositions (with AND, OR and negation) of safety constraints.
### _Operations Between Sets_
Consider multiple safety constraints, each given by a set \(\mathcal{C}_{i}\) in (10). These may be combined via the following Boolean logical operations to capture complex safety specifications.
#### Iii-A1 Identity / class-\(\mathcal{K}^{e}\) function
The 0-superlevel set \(\mathcal{C}_{i}\) of \(h_{i}\) is the same as that of \(\gamma_{i}\circ h_{i}\) for any \(\gamma_{i}\in\mathcal{K}^{e}\):
\[\mathcal{C}_{i}=\{x\in\mathbb{R}^{n}:\gamma_{i}\big{(}h_{i}(x)\big{)} \geq 0\}. \tag{14}\]
#### Iii-A2 Complement set / negation
The complement3\(\overline{\mathcal{C}_{i}}\) of the 0-superlevel set of \(h_{i}\) is the 0-superlevel set of \(-h_{i}\):
Footnote 3: More precisely, \(\overline{\mathcal{C}_{i}}\) is the closure of the complement of \(\mathcal{C}_{i}\), i.e., it includes the boundary \(\partial\mathcal{C}_{i}\) (where \(h_{i}(x)=0\)).
\[\overline{\mathcal{C}_{i}}=\{x\in\mathbb{R}^{n}:-h_{i}(x)\geq 0\}. \tag{15}\]
#### Iii-A3 Union of sets / maximum / OR operation
The union of multiple 0-superlevel sets:
\[\bigcup_{i\in I}\!\mathcal{C}_{i}=\{x\in\mathbb{R}^{n}:\exists i \in I\text{ s.t. }h_{i}(x)\geq 0\} \tag{16}\]
can be given by a single inequality with the \(\max\) function [9]:
\[\bigcup_{i\in I}\!\mathcal{C}_{i}=\Big{\{}x\in\mathbb{R}^{n}:\max_{i \in I}h_{i}(x)\geq 0\Big{\}}. \tag{17}\]
The union describes logical OR relation between constraints:
\[x\!\in\!\bigcup_{i\in I}\!\mathcal{C}_{i}\iff x\!\in\!\mathcal{C}_{1}\text{ OR }x\!\in\!\mathcal{C}_{2}\ \ldots\ \text{OR }x\!\in\!\mathcal{C}_{N}. \tag{18}\]
#### Iii-A4 Intersection of sets / minimum / AND operation
The intersection of multiple 0-superlevel sets:
\[\bigcap_{i\in I}\!\mathcal{C}_{i}=\{x\in\mathbb{R}^{n}:h_{i}(x)\geq 0\ \ \forall i\in I\} \tag{19}\]
can be compactly expressed using the \(\min\) function [9]:
\[\bigcap_{i\in I}\!\mathcal{C}_{i}=\Big{\{}x\in\mathbb{R}^{n}:\min_{i \in I}h_{i}(x)\geq 0\Big{\}}. \tag{20}\]
As in the motivation above, the intersection of sets captures logical AND relation between multiple safety constraints:
\[x\!\in\!\bigcap_{i\in I}\!\mathcal{C}_{i}\iff x\!\in\!\mathcal{C}_{1}\text{ AND }x\!\in\!\mathcal{C}_{2}\ \ldots\ \text{AND }x\!\in\!\mathcal{C}_{N}. \tag{21}\]
Further operations between sets can be decomposed into applications of identity, complement, union and intersection, which are represented equivalently by class-\(\mathcal{K}^{\mathrm{c}}\) functions, negation, \(\max\) and \(\min\) operations, respectively.
**Remark 1**.: Note that \(h_{i}\) may have various physical meanings and orders of magnitude for different \(i\). Thus, for numerical conditioning (especially when we use exponentials later on), one may scale \(h_{i}\) to \(\gamma_{i}\circ h_{i}\) with continuously differentiable \(\gamma_{i}\in\mathcal{K}^{\mathrm{c}}\). For example, \(\gamma_{i}(r)=\tanh(r)\) scales to the interval \(\gamma_{i}(h_{i}(x))\in[-1,1]\) that may help numerics. Next, we assume that the definitions of \(h_{i}\) already include any necessary scaling and we omit \(\gamma_{i}\). Likewise, we do not discuss negation further by assuming that \(h_{i}\) are defined with proper sign.
### _Smooth Approximations to Construct a Single CBF_
While the union and intersection of sets are described by a single function in (17) and (20), the resulting expressions, \(\max_{i\in I}h_{i}(x)\) and \(\min_{i\in I}h_{i}(x)\), may not be continuously differentiable in \(x\)[9], and they are not CBFs. As main result, we propose a CBF candidate by smooth approximations of \(\max\) and \(\min\), and describe its properties. This enables us to enforce complex safety specifications as a single constraint.
#### Iii-B1 Union of Sets
To capture the union of sets in (17), we propose a CBF candidate via a smooth over-approximation of the \(\max\) function using a log-sum-exp expression [13]:
\[h(x)=\frac{1}{\kappa}\ln\bigg{(}\sum_{i\in I}\mathrm{e}^{\kappa h_{i}(x)} \bigg{)} \tag{22}\]
with smoothing parameter \(\kappa>0\). The Lie derivatives are:
\[L_{f}h(x)\!\!=\!\!\!\sum_{i\in I}\!\!\lambda_{i}(x)\!\!L_{f}h_{i}(x),\;L_{g}h( x)\!\!=\!\!\!\sum_{i\in I}\!\!\lambda_{i}(x)\!\!L_{g}h_{i}(x), \tag{23}\]
with the coefficients:
\[\lambda_{i}(x)=\mathrm{e}^{\kappa(h_{i}(x)-h(x))}, \tag{24}\]
that satisfy \(\sum_{i\in I}\lambda_{i}(x)=1\). The proposed CBF candidate in (22) has the properties below; see proof in the Appendix.
**Theorem 3**.: _Consider sets \(\mathcal{C}_{i}\) in (10) given by functions \(h_{i}\), and the union \(\bigcup_{i\in I}\mathcal{C}_{i}\) in (17). Function \(h\) in (22) over-approximates the \(\max\) expression in (17) with bounds:_
\[\max_{i\in I}h_{i}(x)\leq h(x)\leq\max_{i\in I}h_{i}(x)+\frac{\ln N}{\kappa} \quad\forall x\in\mathbb{R}^{n}, \tag{25}\]
_such that \(\lim_{\kappa\to\infty}h(x)=\max_{i\in I}h_{i}(x)\). The corresponding set \(\mathcal{C}\) in (3) encapsulates the union, \(\mathcal{C}\supseteq\bigcup_{i\in I}\mathcal{C}_{i}\), such that \(\lim_{\kappa\to\infty}\mathcal{C}=\bigcup_{i\in I}\mathcal{C}_{i}\). Moreover, if (13) holds for all \(x\in\mathbb{R}^{n}\) with \(\lambda_{i}\) in (24), then \(h\) is a CBF for (1) on \(\mathbb{R}^{n}\) with any \(\alpha\in\mathcal{K}^{\mathrm{c}}_{\infty}\) that satisfies \(\alpha(r)\geq\alpha_{i}(r)\)\(\forall r\in\mathbb{R}\) and \(\forall i\in I\)._
**Remark 2**.: A set \(\mathcal{C}\) that _lies inside_ the union of the individual sets can also be built by using a buffer \(b\) when defining \(h\):
\[h(x)=\frac{1}{\kappa}\ln\bigg{(}\sum_{i\in I}\mathrm{e}^{\kappa h_{i}(x)} \bigg{)}-\frac{b}{\kappa}. \tag{26}\]
For example, based on the upper bound in (25), \(b=\ln N\) leads to \(h(x)\leq\max_{i\in I}h_{i}(x)\) and \(\mathcal{C}\subseteq\bigcup_{i\in I}\mathcal{C}_{i}\). Alternatively, buffers from problem-specific bounds that are tighter than (25) can give better inner-approximation \(\mathcal{C}\) of \(\bigcup_{i\in I}\mathcal{C}_{i}\).
**Example 1**.: Consider Fig. 1, where a rectangular agent with planar position \(x\in\mathbb{R}^{2}\), velocity \(u\in\mathbb{R}^{2}\), and dynamics:
\[\dot{x}=u \tag{27}\]
is controlled to reach a desired position \(x_{\mathrm{d}}\in\mathbb{R}^{2}\) while avoiding a rectangular obstacle4. To reach the goal, we use a proportional controller with gain \(K_{\mathrm{p}}>0\) and saturation:
Footnote 4: Matlab codes for each example are available at: [https://github.com/molnartamasg/CBFs-for-complex-safety-specs](https://github.com/molnartamasg/CBFs-for-complex-safety-specs).
\[k_{\mathrm{d}}(x)=\mathrm{sat}\big{(}K_{\mathrm{p}}(x_{\mathrm{d}}-x)\big{)}, \tag{28}\]
where \(\mathrm{sat}(u)=\min\{1,u_{\max}/\|u\|_{2}\}u\) with some \(u_{\max}>0\). We modify this desired controller to a safe controller using the safety filter (9) and the proposed CBF construction.
To avoid the obstacle, the agent's center must be outside a rectangle that has the combined size of the obstacle and the agent; see Fig. 1(a). This means \(N=4\) constraints linked with OR logic: keep the center left to OR above OR right to OR below the rectangle. Accordingly, the safe set is given by the union \(\bigcup_{i\in I}\mathcal{C}_{i}\) of four individual sets \(\mathcal{C}_{i}\) described by four barriers at location \(x_{i}\in\mathbb{R}^{2}\) with normal vector \(n_{i}\in\mathbb{R}^{2}\):
\[h_{i}(x)=n_{i}^{\top}(x-x_{i}), \tag{29}\]
\(i\in I=\{1,2,3,4\}\). We combine the four barriers with (26). The resulting safe set \(\mathcal{C}\) is plotted in Fig. 1(b) for \(\kappa=2\) and various buffers \(b\). Set \(\mathcal{C}\) encapsulates \(\bigcup_{i\in I}\mathcal{C}_{i}\) for \(b=0\), whereas set \(\mathcal{C}\) lies inside \(\bigcup_{i\in I}\mathcal{C}_{i}\) for \(b=\ln N\); cf. Remark 2. For the problem-specific buffer \(b=\ln 2\) (where \(N\) is replaced by \(2\) since two barriers meet at each corner), the approximation \(\mathcal{C}\) gets very close to the corners of \(\bigcup_{i\in I}\mathcal{C}_{i}\).
We executed controller (9) with \(K_{\mathrm{p}}=0.5\), \(u_{\max}=1\), \(\kappa=2\), \(b=\ln 2\) and \(\alpha(h)=h\); see solid lines in Fig. 1(c).
Fig. 1: Numerical results for Example 1, where a reach-avoid task is safely executed. (a) Safe set, (b) 0-superlevel set of the proposed CBF (26), (c)-(e) simulation of safety-critical control by (9).
The reach-avoid task is successfully accomplished by keeping the agent within set \(\mathcal{C}\). Fig. 1(d) highlights that safety is maintained w.r.t. a smooth under-approximation \(h\) (red) of the maximum \(\max_{i\in I}h_{i}\) (black) of the individual barriers \(h_{i}\) (dashed). Fig. 1(e) indicates the underlying control input. We also demonstrate by dashed lines in Fig. 1(c)-(e) the case of increasing the smoothing parameter to \(\kappa\to\infty\). The sharp corner is recovered and the input becomes discontinuous (\(u_{2}\) jumps). While discontinuous inputs can be addressed by nontrivial nonsmooth CBF theory [9], they may be difficult to realize accurately by actuators in engineering systems.
#### Iv-B2 Intersection of Sets
To capture the intersection of sets in (20), we propose to use a smooth under-approximation of the \(\min\) function as CBF candidate [13], analogously to (22):
\[h(x)=-\frac{1}{\kappa}\ln\bigg{(}\sum_{i\in I}\mathrm{e}^{-\kappa h_{i}(x)} \bigg{)}. \tag{30}\]
The Lie derivatives of \(h\) are expressed by (23) with:
\[\lambda_{i}(x)=\mathrm{e}^{-\kappa(h_{i}(x)-h(x))}, \tag{31}\]
that satisfy \(\sum_{i\in I}\lambda_{i}(x)=1\). The proposed CBF candidate in (30) has the properties below, as proven in the Appendix.
**Theorem 4**.: _Consider sets \(\mathcal{C}_{i}\) in (10) given by functions \(h_{i}\), and the intersection \(\bigcap_{i\in I}\mathcal{C}_{i}\) in (20). Function \(h\) in (30) under-approximates the \(\min\) expression in (20) with bounds:_
\[\min_{i\in I}h_{i}(x)-\frac{\ln N}{\kappa}\leq h(x)\leq\min_{i\in I}h_{i}(x) \quad\forall x\in\mathbb{R}^{n}, \tag{32}\]
_such that \(\lim_{\kappa\to\infty}h(x)=\min_{i\in I}h_{i}(x)\). The corresponding set \(\mathcal{C}\) in (3) lies inside the intersection, \(\mathcal{C}\subseteq\bigcap_{i\in I}\mathcal{C}_{i}\), such that \(\lim_{\kappa\to\infty}\mathcal{C}=\bigcap_{i\in I}\mathcal{C}_{i}\)._
### _Single CBF for Arbitrary Safe Set Compositions_
Having discussed the union and intersection of sets, we extend our framework to arbitrary combinations of unions and intersections. These include e.g. two-level or three-level compositions, like \(\bigcup\bigcap_{i}\mathcal{C}_{i}\) or \(\bigcap\bigcup_{i}\mathcal{C}_{i}\), etc. We propose an algorithmic way to capture these by a single CBF candidate.
Specifically, consider \(M\) levels of safety specifications that establish a single safe set by composing \(N\) individual sets. The individual sets are \(\mathcal{C}_{i}\) in (10), \(i\in I=\{1,\ldots,N\}\). The specification levels are indexed by \(\ell\in L=\{1,\ldots,M\}\). At each level, the union or intersection of sets is taken, resulting in \(N_{\ell}\) new sets, denoted by \(\mathcal{C}_{i}^{\ell}\), \(i\in I_{\ell}=\{1,\ldots,N_{\ell}\}\). This is repeated until a single safe set, called \(\mathcal{C}_{\mathrm{c}}\), is obtained:
\[\mathcal{C}_{i}^{0} =\mathcal{C}_{i},\quad i\in I,\] \[\mathcal{C}_{i}^{\ell} =\begin{cases}\bigcup_{j\in J_{i}^{\ell}}\mathcal{C}_{j}^{\ell-1} &\mathrm{if}\ \ell\in L_{\cap},\\ \bigcap_{j\in J_{i}^{\ell}}\mathcal{C}_{j}^{\ell-1}&\mathrm{if}\ \ell\in L_{\cap},\end{cases} i\in I_{\ell}, \tag{33}\] \[\mathcal{C}_{\mathrm{c}} =\mathcal{C}_{1}^{M},\]
where \(J_{i}^{\ell}\subseteq I_{\ell-1}\) is the indices of sets that combine into \(C_{i}^{\ell}\), while \(L_{\cup}\) and \(L_{\cap}\) are the indices of levels with union and intersection (\(L=L_{\cup}\cup L_{\cap}\)). Unions and intersections imply the maximum and minimum of the individual barriers \(h_{i}\), respectively, resulting in the combined CBF candidate \(h_{\mathrm{c}}\)[9]:
\[h_{i}^{0}(x) =h_{i}(x),\quad i\in I,\] \[h_{i}^{\ell}(x) =\begin{cases}\max_{j\in J_{i}^{\ell}}h_{j}^{\ell-1}(x)&\mathrm{ if}\ \ell\in L_{\cup},\\ \min_{j\in J_{i}^{\ell}}h_{j}^{\ell-1}(x)&\mathrm{if}\ \ell\in L_{\cap},\end{cases} i\in I_{\ell}, \tag{34}\] \[h_{\mathrm{c}}(x) =h_{1}^{M}(x).\]
This describes the safe set (that is assumed to be non-empty):
\[\mathcal{C}_{\mathrm{c}}=\{x\in\mathbb{R}^{n}:h_{\mathrm{c}}(x)\geq 0\}. \tag{35}\]
While the combined function \(h_{\mathrm{c}}\) is nonsmooth [9], we propose a continuously differentiable function \(h\), by extending the smooth approximations (22) and (30) of \(\min\) and \(\max\):
\[H_{i}^{0}(x) =\mathrm{e}^{\kappa h_{i}(x)},\quad i\in I,\] \[H_{i}^{\ell}(x) =\begin{cases}\frac{\sum_{j\in J_{i}^{\ell}}H_{j}^{\ell-1}(x)}{ \sum_{j\in J_{i}^{\ell}}\overline{\overline{\overline{\overline{\overline{ \overline{\overline{\overline{\overline{\overline{\overline{\overline{\overline{\cdotcdotcdotcdotcdotcdotcdotcdotcdotcdotcdot }}}}}}}}}}}}&\ell \in L_{\cap},\\ h(x)=\frac{1}{\kappa}\ln H_{1}^{M}(x)-\frac{b}{\kappa}.\end{cases} \tag{36}\]
Note that we included a buffer \(b\), according to Remark 2, to be able to adjust whether the resulting set \(\mathcal{C}\) encapsulates \(\mathcal{C}_{\mathrm{c}}\) or lies inside it. The derivative of the CBF candidate \(h\) is:
\[\dot{H}_{i}^{0}(x,u) =\kappa H_{i}^{0}(x)\dot{h}_{i}(x,u),\quad i\in I,\] \[\dot{H}_{i}^{\ell}(x,u) =\begin{cases}\sum_{j\in J_{i}^{\ell}}\dot{H}_{j}^{\ell-1}(x,u)& \mathrm{if}\ \ell\in L_{\cup},\\ H_{i}^{\ell}(x)^{2}\sum_{j\in J_{i}^{\ell}}\overline{\dot{H}_{j}^{\ell-1}(x,u)} \overline{\dot{H}_{j}^{\ell-1}(x)^{2}}&\mathrm{if}\ \ell\in L_{\cap},\end{cases} i\in I_{\ell},\] \[\dot{h}(x,u) =\frac{\dot{H}_{1}^{M}(x,u)}{\kappa H_{1}^{M}(x)}. \tag{37}\]
The proposed function \(h\) approximates \(h_{\mathrm{c}}\) with the following properties that are proven in the Appendix.
**Theorem 5**.: _Consider sets \(\mathcal{C}_{i}\) in (10) given by functions \(h_{i}\), and the composition \(\mathcal{C}_{\mathrm{c}}\) in (33) given by \(h_{\mathrm{c}}\) in (34)-(35). Function \(h\) in (36) approximates \(h_{\mathrm{c}}\) with the error bound:_
\[-\frac{b_{\cap}+b}{\kappa}\leq h(x)-h_{\mathrm{c}}(x)\leq\frac{b_{\cup}-b}{ \kappa}\quad\forall x\in\mathbb{R}^{n}, \tag{38}\]
_where \(b_{\cap}\!=\!\sum_{\ell\in L_{\cap}}\!\ln b_{\ell}\), \(b_{\cup}\!=\!\sum_{\ell\in L_{\cup}}\!\ln b_{\ell}\), \(b_{\!}\!=\!\max_{i\in I_{\ell}}\!|J_{i}^{\ell}|\), and \(|J_{i}^{\ell}|\) is the number of elements in \(J_{i}^{\ell}\). If \(b\!\geq\!b_{\cup}\), the corresponding set \(\mathcal{C}\) in (3) lies inside \(\mathcal{C}_{\mathrm{c}}\), i.e., \(\mathcal{C}\subseteq\mathcal{C}_{\mathrm{c}}\), whereas if \(b\leq-b_{\cap}\), set \(\mathcal{C}\) encapsulates \(\mathcal{C}_{\mathrm{c}}\), i.e., \(\mathcal{C}\supseteq\mathcal{C}_{\mathrm{c}}\). Furthermore, we have \(\lim_{\kappa\to\infty}h(x)=h_{\mathrm{c}}(x)\) and \(\lim_{\kappa\to\infty}\mathcal{C}=\mathcal{C}_{\mathrm{c}}\)._
The proposed approach in (36) captures complex safety specifications algorithmically by a single CBF candidate \(h\), via the recursive use of (22) and (30) such that exponentials and logarithms are computed only once. Safety is then interpreted w.r.t. set \(\mathcal{C}\), which can be tuned to approximate the specified set \(\mathcal{C}_{\mathrm{c}}\) as desired. Based on the error bound (38), increasing \(\kappa\) makes the approximation tighter, while \(b\) affects whether \(\mathcal{C}\subseteq\mathcal{C}_{\mathrm{c}}\) or \(\mathcal{C}\supseteq\mathcal{C}_{\mathrm{c}}\). Note that \(h\) is a valid CBF only if it satisfies (5). This is not guaranteed by Theorem 5, and it would require additional conditions like (13) in Theorem 3. If \(h\) is a CBF, formal safety guarantees can be maintained,
for example, by QP (8) that has a single constraint and the explicit solution (9). If the constraint is enforced outside set \(\mathcal{C}\), then \(\mathcal{C}\) is asymptotically stable; cf. Theorem 1. We remark that, potentially, the log-sum-exp formulas could be replaced by other smooth approximations of \(\max\) and \(\min\). Furthermore, note that computing exponentials may cause numerical issues if \(\kappa\) is too large. These may be alleviated by scaling CBF candidates by class-\(\mathcal{K}^{\mathrm{c}}\) functions; see Remark 1.
**Example 2**.: Consider the reach-avoid task of Example 1, with dynamics (27), desired controller (28), safety filter (9), and multiple obstacles shown in Fig. 2. Like in Example 1, each of the three obstacles yields four safety constraints, leading to \(N=12\) sets \(\mathcal{C}_{i}\) and functions \(h_{i}\), given by (29). The four constraints of each obstacle are linked with OR logic, like in Example 1, while the constraints of different obstacles are linked with AND: safety is maintained w.r.t. obstacle 1 AND obstacle 2 AND obstacle 3. Thus, the safe set:
\[\mathcal{C}_{\mathrm{c}}\!=\!(\mathcal{C}_{1}\!\cup\!\mathcal{C}_{2}\!\cup \!\mathcal{C}_{3}\!\cup\!\mathcal{C}_{4})\!\cap\!(\mathcal{C}_{5}\!\cup\! \mathcal{C}_{6}\!\cup\!\mathcal{C}_{7}\!\cup\!\mathcal{C}_{8})\!\cap\!( \mathcal{C}_{9}\!\cup\!\mathcal{C}_{10}\!\cup\!\mathcal{C}_{11}\!\cup\! \mathcal{C}_{12}) \tag{39}\]
is given by a \(M=2\) level specification, combining \(N=12\) sets to \(N_{1}=3\) sets (\(\mathcal{C}_{1}^{1}\) from sets given by \(J_{1}^{1}=\{1,2,3,4\}\), \(\mathcal{C}_{2}^{1}\) from \(J_{2}^{1}=\{5,6,7,8\}\) and \(\mathcal{C}_{3}^{1}\) from \(J_{3}^{1}=\{9,10,11,12\}\)), and then to a single set \(\mathcal{C}_{\mathrm{c}}\) (via sets given by \(J_{1}^{2}=\{1,2,3\}\)).
The behavior of controller (9) with the proposed CBF candidate (36) is shown in Fig. 2 for \(K_{\mathrm{p}}=0.5\), \(u_{\max}=1\), \(\kappa=10\), \(b=\ln 2\) and \(\alpha(h)=h\). The reach-avoid task is successfully accomplished with formal guarantees of safety. Remarkably, the controller is continuous and explicit, since the control law (9) and CBF formulas (36)-(37) are in closed form. Such explicit controllers are easy to implement and fast to execute. Note that controller (11) could also handle multiple obstacles if each obstacle was given by a single CBF candidate. Yet, (11) cannot address multi-level safety specifications like (39), while the proposed method can.
**Example 3**.: Consider the setup of Fig. 3 where a point agent is driven to a desired location while staying on a road network, with dynamics (27), desired controller (28) and safety-critical controller (9). Safety is determined by the road geometry. Each road boundary is related to a set, which is given for straight roads by (29) and for ring roads by:
\[h_{i}(x)=\pm\big{(}\|x-x_{i}\|-R_{i}\big{)}. \tag{40}\]
Here plus and minus signs stand for the inner and outer circles, respectively, \(R_{i}\) is their radius, and \(x_{i}\) is their center. Safety must be ensured w.r.t. boundary 1 AND boundary 2 of each road, while the agent must stay on road 1 OR road 2 OR road 3 OR road 4. Thus, the combined safe set becomes:
\[\mathcal{C}_{\mathrm{c}}=(\mathcal{C}_{1}\cap\mathcal{C}_{2})\cup(\mathcal{C} _{3}\cap\mathcal{C}_{4})\cup(\mathcal{C}_{6}\cap\mathcal{C}_{5})\cup(\mathcal{ C}_{7}\cap\mathcal{C}_{8}). \tag{41}\]
That is, we have a \(M=2\) level specification with \(N=8\) sets combined first to \(N_{1}=4\) sets (as intersections of sets given by \(J_{1}^{1}=\{1,2\}\), \(J_{2}^{1}=\{3,4\}\), \(J_{3}^{1}=\{5,6\}\), \(J_{4}^{1}=\{7,8\}\)), and then to a single set (as union via \(J_{1}^{2}=\{1,2,3,4\}\)).
The execution of the reach-avoid task with the proposed CBF candidate (36) and controller (9) is shown in Fig. 3 for \(K_{\mathrm{p}}=0.5\), \(u_{\max}=1\), \(\kappa=10\), \(b=0\) and \(\alpha(h)=h\). The end result is guaranteed safety (see solid lines). Moreover, the safe set is attractive: in case of an unsafe, off-road initial condition the agent returns to to the safe set on the road and continues to be safe (see thick dashed lines). Remarkably, this property was not provided by earlier works like [11].
## IV Conclusion
We established a framework to capture complex safety specifications by control barrier functions (CBFs). The specifications are combinations of state constraints by Boolean logic. We proposed an algorithmic way to create a single CBF candidate that encodes these constraints and enables efficient safety-critical controllers. We described the properties of this CBF candidate, and we used simulations to show its ability to tackle nontrivial safety-critical control problems.
## Appendix
Proof of Theorem 2.: Consider the Lagrangian of the feasibility problem [18] corresponding to the QP (11):
\[L(x,u,\lambda)=-\sum_{i\in I}\lambda_{i}\Big{(}\dot{h}_{i}(x,u)+\alpha_{i} \big{(}h(x)\big{)}\Big{)}, \tag{42}\]
with the Lagrange multipliers \(\lambda\!=\!\big{[}\lambda_{1}\ \lambda_{2}\ \ldots\ \lambda_{N}\big{]}^{\top}\), \(\lambda_{i}\geq 0\)\(\forall i\in I\). The QP (11) is feasible if and only if \(\exists u\in\mathbb{R}^{m}\) such that \(L(x,u,\lambda)\leq 0\)\(\forall\lambda_{i}\geq 0\). With the Lagrange dual function, \(g_{L}(x,\lambda)\!=\!\inf_{u\in\mathbb{R}^{m}}L(x,u,\lambda)\), this means \(g_{L}(x,\lambda)\leq 0\)\(\forall\lambda_{i}\geq 0\). Since \(g_{L}(x,\lambda)\!=\!-\sum_{i\in I}\lambda_{i}\Big{(}L_{f}h_{i}(x)\!+\!\alpha_{i} \big{(}h_{i}(x)\big{)}\Big{)}\) if \(\sum_{i\in I}\lambda_{i}L_{g}h_{i}(x)\!=\!0\) and \(g_{L}(x,\lambda)\!=\!-\infty\) otherwise, (13) is equivalent to \(g_{L}(x,\lambda)\leq 0\) and provides feasibility.
Fig. 3: Numerical results for Example 3, where an agent is driven safely along a road network via controller (9) with the proposed CBF (36).
Fig. 2: Numerical results for Example 2, where a reach-avoid task with multiple obstacles is executed by controller (9) with the proposed CBF (36).
Proof of Theorem 3.: Since the exponential function is monotonous and gives positive value, we have:
\[\mathrm{e}^{\kappa\max_{i\in I}h_{i}(x)}\leq\sum_{i\in I}\mathrm{e}^{\kappa h_{i }(x)}\leq N\mathrm{e}^{\kappa\max_{i\in I}h_{i}(x)}, \tag{43}\]
that yields (25) via (22) and the monotonicity of \(\ln\). The limit on both sides of (25) yields \(\lim_{\kappa\to\infty}h(x)=\max_{i\in I}h_{i}(x)\), and consequently \(\lim_{\kappa\to\infty}\mathcal{C}=\bigcup_{i\in I}\mathcal{C}_{i}\) holds. Due to (25), \(\max_{i\in I}h_{i}(x)\geq 0\implies h(x)\geq 0\), therefore \(x\in\bigcup_{i\in I}\mathcal{C}_{i}\implies x\in\mathcal{C}\), and \(\mathcal{C}\supseteq\bigcup_{i\in I}\mathcal{C}_{i}\) follows.
We prove that \(h\) is a CBF by showing that (6) holds. We achieve this by relating \(L_{g}h(x)\) and \(L_{f}h(x)+\alpha\big{(}h(x)\big{)}\) to \(L_{g}h_{i}(x)\) and \(L_{f}h_{i}(x)+\alpha_{i}\big{(}h_{i}(x)\big{)}\). The Lie derivatives are related by (23), while the following bound holds for all \(i\in I\):
\[\alpha\big{(}h(x)\big{)}\geq\alpha\big{(}h_{i}(x)\big{)}\geq\alpha_{i}\big{(} h_{i}(x)\big{)}, \tag{44}\]
where we used (25) and \(\alpha(r)\geq\alpha_{i}(r)\). Consequently, since \(\sum_{i\in I}\lambda_{i}(x)=1\) and \(\lambda_{i}(x)>0\) hold via (24), we have:
\[L_{f}h(x)\!+\!\alpha\big{(}h(x)\big{)}\geq\sum_{i\in I}\lambda_{i}(x)\Big{(}L_ {f}h_{i}(x)\!+\!\alpha_{i}\big{(}h_{i}(x)\big{)}\Big{)}. \tag{45}\]
If \(L_{g}h(x)\!=\!0\), we get \(\sum_{i\in I}\lambda_{i}(x)L_{g}h_{i}(x)\!=\!0\) based on (23), and since (13) is assumed to hold, (45) finally yields \(L_{f}h(x)+\alpha\big{(}h(x)\big{)}\geq 0\). Thus, (6) holds and \(h\) is a CBF.
Proof of Theorem 4.: The proof follows that of Theorem 3, with the following modifications. We replace (43) by:
\[\mathrm{e}^{-\kappa\min_{i\in I}h_{i}(x)}\leq\sum_{i\in I}\mathrm{e}^{-\kappa h _{i}(x)}\leq N\mathrm{e}^{-\kappa\min_{i\in I}h_{i}(x)}, \tag{46}\]
that gives the bound (32) via (30). The remaining properties follow from the limit on both sides of (32) and from \(h(x)\geq 0\implies\min_{i\in I}h_{i}(x)\geq 0\) according to (32).
Proof of Theorem 5.: By leveraging that the exponential function is monotonous, we write (34) equivalently as:
\[H^{0}_{\mathrm{c},i}(x) =\mathrm{e}^{\kappa h_{i}(x)},\quad i\in I, \tag{47}\] \[H^{\ell}_{\mathrm{c},i}(x)\] \[h_{\mathrm{c}}(x) =\frac{1}{\kappa}\ln H^{M}_{\mathrm{c},1}(x).\]
We compare this with the definition (36) of \(h\). First, by using the middle row of (36), we establish that for all \(x\in\mathbb{R}^{n}\):
\[H^{\ell-1}_{j}(x)\leq H^{\ell}_{i}(x)\leq|J^{\ell}_{i}|\max_{j \in J^{\ell}_{i}}H^{\ell-1}_{j}(x)\quad\text{if }\ell\in L_{\cup}, \tag{48}\] \[\frac{1}{|J^{\ell}_{i}|}\min_{j\in J^{\ell}_{i}}H^{\ell-1}_{j}(x) \leq H^{\ell}_{i}(x)\leq H^{\ell-1}_{j}(x)\quad\text{if }\ell\in L_{\cap}.\]
\(\forall j\!\in\!J^{\ell}_{i}\) and \(\forall i\!\in\!I_{\ell}\). Then, we relate \(H^{\ell}_{\mathrm{c},i}\) to \(H^{\ell}_{i}\) by induction. For \(\ell\!\geq\!1\) we assume that there exist \(\underline{c}_{\ell-1},\overline{c}_{\ell-1}>0\) such that:
\[\underline{c}_{\ell-1}H^{\ell-1}_{\mathrm{c},i}(x)\leq H^{\ell-1}_{i}(x)\leq \overline{c}_{\ell-1}H^{\ell-1}_{\mathrm{c},i}(x) \tag{49}\]
\(\forall x\in\mathbb{R}^{n}\) and \(\forall i\in I_{\ell-1}\). This is true for \(\ell\!=\!1\) with \(\underline{c}_{0},\overline{c}_{0}=1\) since \(H^{0}_{i}(x)=H^{0}_{\mathrm{c},i}(x)\). By substituting (49) into (48), using the middle row of (47) and \(|J^{\ell}_{i}|\leq\max_{i\in I_{\ell}}|J^{\ell}_{i}|\), we get:
\[\underline{c}_{\ell}H^{\ell}_{\mathrm{c},i}(x)\leq H^{\ell}_{i}(x)\leq \overline{c}_{\ell}H^{\ell}_{\mathrm{c},i}(x) \tag{50}\]
with \(b_{\ell}=\max_{i\in I_{\ell}}|J^{\ell}_{i}|\) and:
\[\underline{c}_{\ell}=\begin{cases}\underline{c}_{\ell-1}&\text{if }\ell\in L_{\cup},\\ \frac{\underline{c}_{\ell-1}}{b_{\ell}}&\text{if }\ell\in L_{\cap},\end{cases} \quad\overline{c}_{\ell}=\begin{cases}b_{\ell}\overline{c}_{\ell-1}& \text{if }\ell\in L_{\cup},\\ \overline{c}_{\ell-1}&\text{if }\ell\in L_{\cap}.\end{cases} \tag{51}\]
By induction, (50) holds for \(\ell=M\) with \(\underline{c}_{M}\!=\!\prod_{\ell\in L_{\cap}}\!\frac{1}{b_{\ell}}\) and \(\overline{c}_{M}\!=\!\prod_{\ell\in L_{\cup}}\!b_{\ell}\). Taking the logarithm of (50) with \(\ell\!=\!M\) and using the last rows of (36) and (47) result in (38).
|
2310.12989 | Enhancing Health Data Interoperability with Large Language Models: A
FHIR Study | In this study, we investigated the ability of the large language model (LLM)
to enhance healthcare data interoperability. We leveraged the LLM to convert
clinical texts into their corresponding FHIR resources. Our experiments,
conducted on 3,671 snippets of clinical text, demonstrated that the LLM not
only streamlines the multi-step natural language processing and human
calibration processes but also achieves an exceptional accuracy rate of over
90% in exact matches when compared to human annotations. | Yikuan Li, Hanyin Wang, Halid Yerebakan, Yoshihisa Shinagawa, Yuan Luo | 2023-09-19T20:09:35Z | http://arxiv.org/abs/2310.12989v1 | # Enhancing Health Data Interoperability with Large Language Models: A FHIR Study
###### Abstract
The integration and exchange of health data across diverse platforms and systems remain challenging due to the absence of standardized formats and a shared semantic understanding. This challenge becomes more significant when critical health information is embedded in unstructured data rather than well-organized structured formats. Standardizing unstructured health data, such as clinical notes, into FHIR resources can alleviate ambiguity across different health providers and, therefore, improve interoperability. However, it is by no means an easy task. Previous studies [1, 2] have attempted to transform clinical notes into FHIR resources using a combination of natural language processing and machine learning tools through multi-step processes involving clinical named entity recognition, terminology coding, mathematical calculations, structural formatting, and human calibrations. However, these approaches require additional human effort to consolidate the results from multiple tools and have achieved only moderate performances, with F1 scores ranging from 0.7 to 0.9 in different elements. To this end, we intend to harness Large Language Models (LLMs) to directly generate FHIR-formatted resources from free-text input. The utilization of LLMs is expected to simplify the previously multi-step processes, enhance the efficiency and accuracy of automatic FHIR resource generation, and ultimately improve health data interoperability.
**Methods**
**Data Annotation** To the best of our knowledge, there is no largely publicly available dataset in the FHIR standard that is generated from contextual data. Therefore, we have chosen to annotate a dataset containing both free-text input and structured output in FHIR formats. The free-text input was derived from the discharge summaries of the MIMIC-III dataset. [3] Thanks to the 2018 n2c2 medication extraction challenge [4], which essentially involves named entity recognition tasks, elements in medication statements have been identified. Our annotations built upon these n2c2 annotations and standardized the free text into multiple clinical terminology coding systems, such as NDC, RxNorm, and SNOMED. We organized the contexts and codes into FHIR medicationStatement resources. The converted FHIR resources underwent validation by the official FHIR validator ([https://validator.fhir.org/](https://validator.fhir.org/)) to ensure compliance with FHIR standards, including structure, datatype, code sets, display names, and more. These validated results were considered the gold standard transformation results and could be used to test against the LLMs. No ethical concerns exist regarding data usage, as both the MIMIC and n2c2 datasets are publicly available to authorized users.
**Large Language Model** We used OpenAI's GPT-4 model as the LLM for FHIR format transformation. We used five separate prompts to instruct the LLM to transform input free text into medication (including medicationCode, strength, and form), route, schedule, dosage, and, reason, respectively. All prompts adhered to a template with the following strucuture: task instructions, expected output FHIR templates in.JSON format, 4-5 conversion examples, a comprehensive list of codes from which the model can make selections, and then the input text. As there was no fine-tuning or domain-specific adaptation in our experiments, we initially had the LLM generate a small subset (N=100). Then, we manually reviewed the discrepancies between the LLM-generated FHIR output and our human annotations. Common mistakes were identified and used to refine the prompts. It's important to note that we did not have the access to the whole lists of NDC, RxNorm, and SNOMED Medication codes for drug names, as well as SNOMED Finding codes for reasons. Additionally, even if we had such comprehensive lists, they would have exceeded the token limits for LLMs. Thus, we did not task LLMs with coding these entities; instead, we instructed them to identify the contexts mentioned in the input text. For other elements, e.g. drug routes and forms, numbering in the hundreds, we allowed LLMs to directly code them. When evaluating the LLM-generated output, our primary criterion was the exact match rate, which necessitates precise alignment with human annotations in all aspects, including codes, structures, and more. Additionally, we reported precision, recall, and F1 scores for specific element occurrences. We accessed the GPT-4 APIs through the Azure OpenAI service, aligning with responsible use guidelines for MIMIC data. The specific model we used was 'gpt-4-32k' in its '2023-05-15' version. Each text input was individually transformed into a MedicationStatement resource. To optimize efficiency, we made multiple asynchronous API calls. |
2308.16796 | Spectroscopic r-Process Abundance Retrieval for Kilonovae II:
Lanthanides in the Inferred Abundance Patterns of Multi-Component Ejecta from
the GW170817 Kilonova | In kilonovae, freshly-synthesized $r$-process elements imprint features on
optical spectra, as observed in AT2017gfo, the counterpart to the GW170817
binary neutron star merger. However, measuring the $r$-process compositions of
the merger ejecta is computationally challenging. Vieira et al. (2023)
introduced Spectroscopic $r$-Process Abundance Retrieval for Kilonovae (SPARK),
a software tool to infer elemental abundance patterns of the ejecta, and
associate spectral features with particular species. Previously, we applied
SPARK to the 1.4 day spectrum of AT2017gfo and inferred its abundance pattern
for the first time, characterized by electron fraction $Y_e=0.31$, a
substantial abundance of strontium, and a dearth of lanthanides and heavier
elements. This ejecta is consistent with wind from a remnant hypermassive
neutron star and/or accretion disk. We now extend our inference to spectra at
2.4 and 3.4 days, and test the need for multicomponent ejecta, where we
stratify the ejecta in composition. The ejecta at 1.4 and 2.4 days is described
by the same single blue component. At 3.4 days, a new redder component with
lower $Y_e=0.16$ and a significant abundance of lanthanides emerges. This new
redder component is consistent with dynamical ejecta and/or neutron-rich ejecta
from a magnetized accretion disk. As expected from photometric modelling, this
component emerges as the ejecta expands, the photosphere recedes, and the
earlier bluer component dims. At 3.4 days, we find an ensemble of lanthanides,
with the presence of cerium most concrete. This presence of lanthanides has
important implications for the contribution of kilonovae to the $r$-process
abundances observed in the Universe. | Nicholas Vieira, John J. Ruan, Daryl Haggard, Nicole M. Ford, Maria R. Drout, Rodrigo Fernández | 2023-08-31T15:16:52Z | http://arxiv.org/abs/2308.16796v2 | Spectroscopic \(r\)-Process Abundance Retrieval for Kilonovae II: Lanthanides in the Inferred Abundance Patterns of Multi-Component Ejecta from the GW170817 Kilonova
###### Abstract
In kilonovae, freshly-synthesized \(r\)-process elements imprint features on optical spectra, as observed in AT2017gfo, the counterpart to the GW170817 binary neutron star merger. However, measuring the \(r\)-process compositions of the merger ejecta is computationally challenging. Vieira et al. (2023) introduced Spectroscopic \(r\)-Process Abundance Retrieval for Kilonovae (SPARK), a software tool to infer elemental abundance patterns of the ejecta, and associate spectral features with particular species. Previously, we applied SPARK to the 1.4 day spectrum of AT2017gfo and inferred its abundance pattern for the first time, characterized by electron fraction \(Y_{e}=0.31\), a substantial abundance of strontium, and a dearth of lanthanides and heavier elements. This ejecta is consistent with wind from a remnant hypermassive neutron star and/or accretion disk. We now extend our inference to spectra at 2.4 and 3.4 days, and test the need for multi-component ejecta, where we stratify the ejecta in composition. The ejecta at 1.4 and 2.4 days is described by the same single blue component. At 3.4 days, a new redder component with lower \(Y_{e}=0.16\) and a significant abundance of lanthanides emerges. This new redder component is consistent with dynamical ejecta and/or neutron-rich ejecta from a magnetized accretion disk. As expected from photometric modelling, this component emerges as the ejecta expands, the photosphere recedes, and the earlier bluer component dims. At 3.4 days, we find an ensemble of lanthanides, with the presence of cerium most concrete. This presence of lanthanides has important implications for the contribution of kilonovae to the \(r\)-process abundances observed in the Universe.
Nuclear abundances (1128) -- R-process (1324) -- Radiative transfer simulations (1967) -- Spectral line identification (2073) +
Footnote †: journal: ApJ
0000-0002-8880-7885]Nicholas Vieira
0000-0002-4880-0888]John J. Ruan
0000-0002-4880-7888]Daryl Haggard
0000-0002-4880-0888]Nicole M. Ford
0000-0002-0788-0888]Maria R. Drout
0000-0002-0788-0888]Rodrigo Fernandez
## 1 Introduction
Approximately half of the elements in the Universe heavier than iron are synthesized by rapid neutron capture nucleosynthesis: the \(r\)-process (see Cowan et al. 2021 for a review). Extreme astrophysical environments--namely, mergers of neutron stars (NS-NS) or a neutron star and black hole (NS-BH) and other proposed sources like collapsars or magnetotational supernovae--offer leading candidate sites for this \(r\)-process nucleosynthesis due to their exceptionally high densities of free neutrons (Lattimer and Schramm 1974; Symbalisty and Schramm 1982; Eichler et al. 1989; Freiburghaus et al. 1999; Goriely et al. 2011; Korobkin et al. 2012; Bauswein et al. 2013). However, it is still unclear which of these channels dominates. The NS-NS merger GW170817, first detected in gravitational waves and then across the electromagnetic spectrum (Abbott
et al., 2017, 2017; Li et al., 2017, 2017), has provided some insight. Both photometry (Andreoni et al., 2017; Arcavi et al., 2017; Coulter et al., 2017; Diaz et al., 2017; Drout et al., 2017; Evans et al., 2017; Hu et al., 2017; Kasliwal et al., 2017; Lipunov et al., 2017; Tanvir et al., 2017; Troja et al., 2017; Utsumi et al., 2017; Valenti et al., 2017)1 and spectroscopy (Chornock et al., 2017; Kasen et al., 2017; Pian et al., 2017; Shappee et al., 2017; Smartt et al., 2017) of the optical/near-infrared counterpart, AT2017gfo, matched theoretical expectations for a kilonova: an explosive transient event powered by radioactive decay of freshly-synthesized \(r\)-process elements. However, we do not yet know the precise abundance pattern of the \(r\)-process elements in the ejecta, nor whether GW170817-like kilonovae could yield the \(r\)-process abundances seen across the Universe (Ji et al., 2019; Cowan et al., 2021).
Footnote 1: See Villar et al. (2017) for a compilation of this photometry considering inter-instrument variation.
The spectra of kilonovae in particular are marked by absorption and emission features from a suite of \(r\)-process elements, and are key for determining the detailed composition of the merger ejecta. Insights gained from spectra are independent of and complementary to light curve modelling, which has served mostly to infer macroscopic properties of the kilonova such as the heating rates, total ejecta masses, average ejecta velocities, and temperatures of the ejecta (_e.g._, Villar et al., 2017; Almualla et al., 2021; Breschi et al., 2021; Ristic et al., 2022). By modelling the spectra, we can directly infer the abundances and the conditions of \(r\)-process nucleosynthesis. These insights further allow us to assess the importance of mergers versus other proposed sites as the source of these elements, and, the physical ejection mechanisms at play during these mergers.
Spectral modelling has already provided evidence that the GW170817 kilonova ejecta contained \(r\)-process elements, and, has enabled associations of certain absorption and/or emission features with individual species. Watson et al. (2019) analyzed the early time, optically thick spectra of AT2017gfo, and found the imprint of a P Cygni feature arising from Sr ii (strontium, \({}_{38}\)Sr) at \(\sim\)8000 A, at 1.4, 2.4, and 3.4 days post-merger. Domoto et al. (2021, 2022) similarly ascribe this feature to Sr ii, and find tentative evidence for doubly-ionized lanthanides La iii and Ce iii (lanthanum, \({}_{57}\)La and cerium, \({}_{58}\)Ce) in the near-infrared (\(\sim\)12,000-14,000 A). Similarly, Gillanders et al. (2022) argues for the presence of Sr ii, but also ions of adjacent first \(r\)-process peak elements Y ii and Zr ii (yttrium, \({}_{39}\)Y and zirconium, \({}_{39}\)Zr) at wavelengths \(\lesssim\) 6000 A. Sneppen and Watson (2023) find that Y ii is present at 4.4 and 5.4 days, producing a P Cygni feature at \(\sim\)7600 A. Gillanders et al. (2022) also suggests the presence of a modest amount of lanthanide material at these times. At later times, when the ejecta enters an optically thin regime, the spectrum is dominated by emission features. Gillanders et al. (2023) find that Ce iii may produce emission at \(\sim\)15,800 A and \(\sim\)20,700 A beyond 3.4 days, up to 10.4 days. At \(\gtrsim\) 7 days, these may instead arise from intrinsically weak lines, with ions Te iii and Te i (tellurium, \({}_{52}\)Te) and I ii (iodine, \({}_{53}\)I) given as the most likely candidates (Gillanders et al., 2023; Hotokezaka et al., 2023).
While these studies have shed valuable light on some of the species present in the ejecta of AT2017gfo, the abundances of _all_ elements in the ejecta are not known. In Vieira et al. (2023) (hereafter V23), we fit the spectrum of AT2017gfo at 1.4 days post-merger, using our inference approach and software tool Spectroscopic \(r\)-Process Abundance Retrieval for Kilonovae (SPARK). With SPARK, we inferred the complete elemental abundance pattern of the ejecta. We found that the ejecta was dominated by lighter \(r\)-process elements, generating a bluer (relatively lower opacity in the UV and optical) kilonova. This ejecta had electron fraction \(Y_{e}=0.311^{+0.013}_{-0.011}\) and specific entropy per nucleon \(s/k_{\rm B}=13.6^{+4.1}_{-3.0}\), which yielded an extremely low lanthanide fraction \(\log_{10}X_{\rm lan}=-7.03^{+0.46}_{-0.47}\). This lanthanide fraction is inconsistent with the \(r\)-process abundance pattern seen in the Solar system and beyond (Ji et al., 2019).
We have not yet inferred the abundances at later epochs: 2.4 days, 3.4 days, and beyond. At later epochs, as the kilonova ejecta expands and becomes more optically thin, we expect that the photosphere recedes into the ejecta. This may uncover additional components which power the kilonova at later times, after being physically hidden underneath the photosphere at early times and/or out-shined by the early blue emission. The emergence of new components at later times would be consistent with results from light curve modeling (_e.g._, Villar et al., 2017), which has indicated the presence of multiple ejecta components; in particular, a redder component emerging at \(\sim\)3 days post-merger. These different components may originate from different ejection mechanisms during the merger, and are characterized by different masses, velocities, and heating rates as a function of time. In a NS-NS merger, we expect a redder component mostly confined to the plane of the initial binary from the tidal ejecta, with little neutrino reprocessing, a bluer squeezed polar component from the collisional interface between the two NSs, and a more isotropic red/blue disk wind from an accretion disk around a merger remnant. Our inferred \(Y_{e}\) and
\(s\) at 1.4 days are consistent with an outflow produced over timescales longer than the dynamical time which has been substantially reprocessed by neutrinos, _e.g._, winds from a hypermassive neutron star remnant or an accretion disk. At later times, the spectrum might be dominated or better described by a different ejecta or a multi-component ejecta configuration.
Here, we explore the time evolution of the inferred abundance pattern and the need for multi-component ejecta models to fit the spectra of AT2017gfo, at 1.4, 2.4, and 3.4 days post-merger. We first extend our single-component fitting to 2.4 and 3.4 days to obtain the best single-component models for the ejecta. We then develop a model in the radiative transfer code where the ejecta is stratified and multi-component. We compare our best multi-component fits at 1.4, 2.4, and 3.4 days to their single-component equivalents. The abundances of these single- and multi-component models are then examined to paint a picture of the inferred abundance pattern as a function of time.
This paper is organized as follows. In Section 2, we briefly review SPARK and present the upgrades which enable fitting the later epochs of AT2017gfo with both single- and multi-component models. In Section 3, we present our fits. In Section 4, we explore the time evolution of the inferred abundance pattern, the species present in the ejecta, and the physical origin of different components. We briefly conclude in Section 5.
## 2 Methods
### Spectroscopic \(r\)-Process Abundance Retrieval for Kilonovae (Spark)
We briefly describe our tool, SPARK, and refer the reader to V23 for more detail. SPARK (Spectroscopic \(r\)-Process Abundance Retrieval for Kilonovae) is designed as a modular inference engine for extracting key kilonova parameters from optical spectra, determining the element-by-element abundance pattern of the ejecta, and associating absorption features in the spectra with particular species.
In SPARK, we use the 1D TARDIS (Kerzendorf and Sim, 2014; Kerzendorf et al., 2023) radiative transfer code to generate a set of synthetic spectra. In TARDIS, photon packets are propagated through shell(s) of plasma, where they may undergo either bound-bound processes or electron scattering. To handle bound-bound (matter-radiation) interactions in the ejecta, we require a list of lines for the species in the ejecta. We use a line list of observed lines, obtained through the Vienna Atomic Line Database (VALD; Ryabchikova et al., 2015; Pakhomov et al., 2019). Each spectrum is parameterized by a set \(\theta_{i}\) of parameters, including a luminosity, density, inner/outer computational boundary velocities, and three key parameters which set the abundances in the ejecta: the electron fraction \(Y_{e}\), expansion velocity \(v_{\rm exp}\), and specific entropy per nucleon \(s/k_{\rm B}\). These latter three parameters describe different abundance patterns output by the nuclear reaction network calculations of Wanajo (2018), which use parametric outflow trajectories, allowing us to infer the abundance pattern of the ejecta.
To perform this inference, we express our likelihood function using the full formalism of Czekala et al. (2015) for likelihoods involving spectroscopic data. However, because of the considerable computational cost of spectral synthesis with TARDIS, we do not use more common methods such as Markov chain Monte Carlo (MCMC) or nested sampling for inference--rather, we couple TARDIS to the approximate posterior estimation scheme of approxposterior(Fleming and VanderPlas, 2018; Fleming et al., 2020). In this scheme, we introduce a Gaussian Process (GP) surrogate for the posterior \(L_{p}(\theta)\) and employ Bayesian Active Posterior Estimation (BAPE; Kandasamy et al., 2017). BAPE is a form of active learning in which we maximize an acquisition function with terms including both the mean \(\mu(\theta)\) and variance \(\sigma^{2}(\theta)\) of the GP. This acquisition function thus balances exploration (of the parameter space) and exploitation (sampling around the peak(s) of the posterior). The GP is iteratively retrained as new points (\(\theta,L_{p}(\theta)\)) are added to a training set, and this GP converges to an approximation of the posterior.
In all, inference is dramatically accelerated, and we obtain (among the other parameters) the \(Y_{e}\), \(v_{\rm exp}\), and \(s/k_{\rm B}\) which best describe the ejecta, with relatively few forward model evaluations. In V23, we fit the VLT/X-shooter spectrum of AT2017gfo (Pian et al., 2017; Smartt et al., 2017) at 1.4 days with a base set of 1500 Latin Hypercube samples + 1140 BAPE active learning samples. This is a factor of \(\sim 10^{3}\) fewer samples than might be required with a standard MCMC for a similar 6-dimensional fit.
### Multi-component, stratified ejecta with TARDIS
In V23, we modeled the kilonova ejecta as a single shell with a uniform abundance pattern. The plasma in this shell is also described by a single temperature and mass/electron density. This configuration is fully described by the luminosity at the outer boundary \(L_{\rm outer}\), (which in fact sets the initial guess for the temperature at the inner boundary), the normalization in the density power law \(\rho_{0}\), the inner and outer boundary ve
locities \(v_{\rm inner}\) and \(v_{\rm outer}\)2, and three parameters which set the abundance pattern: electron fraction \(Y_{e}\), expansion velocity \(v_{\rm exp}\), and specific entropy \(s/k_{\rm B}\). A single-component fit is thus 7-dimensional, unless one or more of the parameters are fixed. For example, we fixed \(v_{\rm outer}=0.35c\) in our fit to the 1.4 day spectrum. This setup can describe a single ejecta component such as dynamical ejecta or some outflow. It may also describe a kilonova in which one component significantly dominates (by mass or by the strength of the absorption/emission features) over the other(s).
Footnote 2: TARDIS assumes homologous expansion, in which \(v\propto r\), such that velocity can be understood as a spatial coordinate. Given a density profile, the velocity range sets the mass of the ejecta.
Here, we implement multi-component ejecta. TARDIS allows for ejecta composed of stratified radial shells, each with a specific temperature, density, plasma conditions, and composition. In this configuration, each shell can have a specific abundance pattern. In single-shell runs, we compute the plasma state only once given \(L_{\rm outer}\). For these multi-shell runs, we must employ multiple TARDIS iterations when generating synthetic spectra. At each iteration, the plasma conditions are updated, and converge towards an ejecta where the luminosity emitted at the outer boundary matches the user-requested \(L_{\rm outer}\). We use 10 such shells and 30 such iterations in all runs.
We begin with a simple two-component ejecta. As with our single-component model, this two-component model has an outer boundary luminosity \(L_{\rm outer}\) and density \(\rho_{0}\). Each of the components is then described by inner and outer boundary velocity (\(v_{\rm inner}\) and \(v_{\rm outer}\)) and a \(Y_{e}\), \(v_{\rm exp}\), and \(s/k_{\rm B}\). Two-component fits are thus \(2+5+5=12\)-dimensional. The two components necessarily overlap in physical space because TARDIS cannot simulate a gap between them. The abundance in each shell is then determined by the component(s) which are in a given shell. For shells where there is overlap between two components, the abundance is taken as a sum of the two abundance patterns and renormalized to unity.
Multiple components allow for additional complexity in the spectral synthesis.3 In particular, we can produce the effect of reprocessing, where emission from one component is absorbed and re-emitted/scattered by another. We can also produce the effect of lanthanide curtaining, in which some outer lower-\(Y_{e}\) ejecta containing the lanthanides masks an inner, bluer, lighter-element ejecta, generating a redder kilonova due to the considerable opacity of the lanthanides in the near-UV and optical. Some kilonova spectra may be better-described by these multi-component ejecta models. In V23, we find that the 1.4 day of AT2017gfo is well-described by a single-component ejecta with \(Y_{e}=0.31\). However, at later epochs, as the ejecta expands and becomes more optically thin, the photosphere recedes into the ejecta and we may unmask additional components which were hidden or out-shined at early times. Light curve modelling (_e.g._, Villar et al., 2017) has shown that the kilonova may indeed be better-described by multiple components of different opacities, and some spectral modelling (_e.g._, Kasen et al., 2017) also invokes multiple components. This motivates our introduction of multi-component ejecta into SPARK.
Footnote 3: See Kawaguchi et al. (2020) for an exploration of the diversity of kilonovae which may be produced when multiple components are present.
### Inference setup
All approxposterior/BAPE hyperparameters and optimizers used in this work are the same as those used in V23. We again produce a base set of \(m_{0}=1500\) Latin Hypercube sampled points at the beginning of each SPARK run. However, the parameter space allowed by our priors differs for the 1.4, 2.4, and 3.4 day fits. Table 1 includes our (all uniform) priors for each of our single-component fits. The bounds on the density \(\rho_{0}\) and abundance-setting parameters \(Y_{e}\), \(v_{\rm exp}\), and \(s\) are the same at all epochs. The priors differ in the bounds on the luminosity, as expected given the cooling of the ejecta over time as it expands. We further allow for wider priors on the inner and outer boundary velocities for the fits at later epochs. In V23, we fixed \(v_{\rm outer}=0.35c\) during our 1.4 day fit but noted that we observed similar results for \(v_{\rm outer}\) in the range \(0.35-0.38c\). Here, we allow for greater flexibility in \(v_{\rm outer}\) later epochs.
Our priors for our multi-component fits are given in Table 2. These are broad, and identical, at 1.4 and 2.4 days. Aside from requiring \(v_{\rm inner,i}<v_{\rm outer,i}\), our priors also require some overlap between the two components, _i.e._, \(v_{\rm outer,2}>v_{\rm inner,1}\) or \(v_{\rm outer,1}>v_{\rm inner,2}\). To allow for even sampling of parameter space for the \(m_{0}=1500\) points in the base training set with these conditional constraints, we use constrained Latin Hypercube Sampling (Petelet et al., 2009). At 3.4 days, we encounter challenges with the convergence of the posterior for these broad, conditional priors in velocities. We thus use tighter priors on the velocities at this epoch, but all other priors are the same as those at 1.4 and 2.4 days.
When attempting to fit the full 3.4 day spectrum, we find that the fit converges to a blackbody with little to no absorption to fit the continuum of the spectrum,
especially at the shortest wavelengths \(\lesssim\)6400 A. Since we are interested in determining the species involved in the absorption, and especially the prominent feature at \(\sim\)8000 A, we prioritize fitting the \(\geq\)6400 A region of the spectrum by excluding shorter wavelengths in the computation of the likelihood. We perform this exclusion for both single- and multi-component fits.
Finally, when expressing the likelihood using the Czekala et al. (2015) formalism, we use a global covariance term with amplitude \(a_{\rm G}=10^{-34}\) (erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\))\({}^{2}\) and a correlation length scale \(\ell=0.025c\) for all 1.4 and 2.4 day fits. For 3.4 day fits, after some trial and error, we find that a smaller \(a_{\rm G}=10^{-35}\) (erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\))\({}^{2}\) better matches the uncertainties on the observed spectrum at this epoch.
the best fit "purple + warm" model here, and refer the reader to [23] for details. The purple + warm model is characterized by a photospheric velocity of \(v_{\rm inner}/c=0.313^{+0.013}_{-0.014}\); while the outer boundary velocity was fixed to \(v_{\rm outer}=0.35c\). For abundance-setting parameters, we found an electron fraction \(Y_{e}=0.311^{+0.013}_{-0.011}\), expansion velocity of \(v_{\rm exp}/c=0.240^{+0.055}_{-0.082}\), and entropy of \(s/k_{\rm B}=13.6^{+4.1}_{-3.0}\). In [23], we referred to this model as "purple + warm", but emphasize that this is nonetheless substantially blue ejecta within the broader context of kilonova components. These parameters correspond to an abundance pattern that is poor in lanthanides, but rich in strontium (\({}_{38}\)Sr), allowing for a good fit to the prominent \(\sim\)8000 A feature. The fit struggled at shorter wavelengths: over \(3500-4500\) A, we overestimated the absorption from yttrium (\({}_{39}\)Y) and zirconium (\({}_{40}\)Zr), while at \(\lesssim\)3500 A, we underestimated. Nonetheless, the single-component model remains our preferred model at 1.4 days, as we elaborate on below.
In our 1.4 day multi-component SPARK run, we require \(m_{0}+m_{\rm active}=1500+1440\) points to obtain a good fit and converged posterior (see Figure 7). More active learning samples are required than for single-component due to the increased dimensionality of the multi-component fits. The resultant fit generally captures the shape of the spectrum, and the broad absorption at \(\sim\)8000 A, like the single-component equivalent. However, the fit does not capture the continuum at wavelengths \(\gtrsim\)10,500 A nor \(\sim\) 3500 - 5000 A. The fit is marginally closer to the observed spectrum at the shortest wavelengths (\(\lesssim\) 3500 A) but this region lies at the edge of the X-shooter spectrograph sensitivity, and overall the single-component model is favored by visual inspection.
We infer an outer boundary luminosity of \(\log_{10}(L_{\rm outer}/L_{\odot})=7.854^{+0.012}_{-0.017}\). This is brighter than is inferred in the single-component equivalent. This is as expected: due to the use of 30 TARDIS iterations
Figure 1: **Compilation of all best fit models to the spectrum of the GW170817 kilonova from this work: 1.4, 2.4, and 3.4 days post-merger, for both single- and multi-component models.** The preferred models are highlighted; disfavored are more translucent. The single-component models are favored at 1.4 and 2.4 days (solid blue and purple lines). At 3.4 days, we require a multi-component ejecta to adequately fit the spectrum (dashed red line). The abundance patterns corresponding to these preferred models are included in Figure 2. Zoomed in versions of these best fits are included in Figures 6 and 7 (Appendix A). Best fits are obtained as the median of the posterior, or the median of some mode of the posterior; these full posteriors are also included in Appendix A.
(rather than 1), \(L_{\rm outer}\) is updated at each iteration. Hence, in our multi-component models, the meaning of \(L_{\rm outer}\) is different and cannot be mapped directly to \(T_{\rm inner}\). (\(L_{\rm outer}\) is also higher in the 2.4 and 3.4 day multi-component models, compared to the single-component equivalents).
This multi-component run yields two components with substantially different abundance patterns. The first component has a lower electron fraction \(Y_{e,1}=0.139^{+0.028}_{-0.114}\) (and specific entropy \(s_{1}/k_{\rm B}=24.6^{+3.4}_{-5.1}\)), yielding. The second has higher \(Y_{e,2}=0.340^{+0.022}_{-0.021}\) (and \(s_{2}/k_{\rm B}=20.1^{+3.8}_{-3.6}\)), distinctly bluer. Indeed, the \(Y_{e,2}\) of this second, bluer component is consistent with the single-component fit at 1.4 days. As in the single-component fit, expansion velocities (\(v_{\rm exp,1}\) and \(v_{\rm exp,2}\)) are poorly constrained. Interestingly, the better-constrained entropies (\(s_{1}\) and \(s_{2}\)) are both higher than and inconsistent with the single-component equivalent.
However, despite the substantial differences between the two components, one component dominates over the other in producing the emergent spectrum. A physical picture for the components is included in Figure 3. We see that the red component is confined to only 2 shells, while the blue component extends over a greater radius. The mass contained in these two components is \(M_{\rm phot,1}/M_{\odot}=6.9^{+22.1}_{-6.0}\times 10^{-7}\) and \(M_{\rm phot,2}/M_{\odot}=7.0^{+0.8}_{-1.0}\times 10^{-5}\), respectively. We emphasize that this is the mass above the photosphere, and not the total ejecta mass. Nonetheless, the bluer component contains \(\sim\)100\(\times\) as much mass as the redder, and dominates the absorption/emission in the spectrum. In this sense, despite the different compositions of the two components (Figure 13, Appendix B), this multi-component SPARK run has effectively converged to a single-component model.
### Single-and multi-component modelling at 2.4 days
In our 2.4 day single-component SPARK run, We require \(m_{0}+m_{\rm active}=1500+600\) points to obtain a good fit and converged posterior (see Figure 6). The resultant posterior shows some bimodality, most evident in the \(s/k_{\rm B}\) dimension. Discarding samples from the posterior with \(s/k_{\rm B}\geqslant 25.0\), we obtain a superior fit, and use this as our preferred model. This fit captures most of the continuum of the observed spectrum, and most of the \(\sim\)8000 A absorption. If this absorption belongs to a P Cygni feature (as has been argued in Watson et al., 2019; Sneppen et al., 2023), we partially miss the red wing of this feature. Furthermore, we overestimate the absorption, or underestimate the continuum, at wavelengths \(\lesssim\)4500 A.
\begin{table}
\begin{tabular}{c c c} \hline \hline parameter & 1.4 days & 2.4 days & 3.4 days \\ \hline \(\log_{10}(\frac{L_{\rm outer}}{L_{\odot}})\) & \(7.854^{+0.012}_{-0.017}\) & \(7.700^{+0.065}_{-0.073}\) & \(7.605^{+0.049}_{-0.040}\) \\ \(\log_{10}(\frac{c}{c_{\rm cm}^{-3}})\) & \(-15.095^{+0.127}_{-0.401}\) & \(-15.440^{+0.813}_{-0.467}\) & \(-14.505^{+0.233}_{-0.372}\) \\ \hline \(v_{\rm inner,1}/c\) & \(0.323^{+0.007}_{-0.020}\) & \(0.260^{+0.061}_{-0.039}\) & \(0.213^{+0.056}_{-0.035}\) \\ \(v_{\rm outer,1}/c\) & \(0.326^{+0.009}_{-0.012}\) & \(0.335^{+0.054}_{-0.065}\) & \(0.344^{+0.035}_{-0.038}\) \\ \(v_{\rm exp,1}/c\) & \(0.118^{+0.029}_{-0.025}\) & \(0.170^{+0.108}_{-0.100}\) & \(0.198^{+0.092}_{-0.102}\) \\ \(Y_{e,1}\) & \(0.139^{+0.028}_{-0.114}\) & \(0.288^{+0.129}_{-0.187}\) & \(0.228^{+0.073}_{-0.088}\) \\ \(s_{1}\) [\(k_{\rm B}\)/nuc] & \(24.6^{+3.4}_{-5.1}\) & \(21.1^{+1.05}_{-9.5}\) & \(14.9^{+8.0}_{-4.5}\) \\ \hline \(v_{\rm inner,2}/c\) & \(0.281^{+0.011}_{-0.013}\) & \(0.268^{+0.049}_{-0.045}\) & \(0.232^{+0.038}_{-0.027}\) \\ \(v_{\rm outer,2}/c\) & \(0.355^{+0.015}_{-0.015}\) & \(0.335^{+0.045}_{-0.070}\) & \(0.334^{+0.037}_{-0.022}\) \\ \(v_{\rm exp,2}/c\) & \(0.195^{+0.034}_{-0.036}\) & \(0.183^{+0.089}_{-0.110}\) & \(0.134^{+0.091}_{-0.069}\) \\ \(Y_{e,2}\) & \(0.340^{+0.022}_{-0.021}\) & \(0.261^{+0.163}_{-0.124}\) & \(0.161^{+0.149}_{-0.104}\) \\ \(s_{2}\) [\(k_{\rm B}\)/nuc] & \(20.1^{+3.8}_{-3.6}\) & \(22.0^{+10.0}_{-10.1}\) & \(21.5^{+8.3}_{-5.5}\) \\ \hline \hline \(M_{\rm phot,1}\) [\(M_{\odot}\)] & \(6.9^{+22.1}_{-6.0}\times 10^{-7}\) & \(2.3^{+12.5}_{-0.7}\times 10^{-5}\) & \(5.3^{+1.8}_{-1.3}\times 10^{-4}\) \\ \(\log_{10}X_{\rm lan,1}\) & \(-1.08^{+0.34}_{-0.34}\) & \(-0.63^{+0.26}_{-1.11}\) & \(-0.97^{+0.23}_{-0.32}\) \\ \hline \(M_{\rm phot,2}\) [\(M_{\odot}\)] & \(7.0^{+0.8}_{-1.0}\times 10^{-5}\) & \(4.0^{+4.8}_{-0.9}\times 10^{-5}\) & \(3.8^{+1.1}_{-0.9}\times 10^{-4}\) \\ \(\log_{10}X_{\rm lan,2}\) & \(-12.99^{+3.15}_{-3.45}\) & \(-4.53^{+2.61}_{-3.53}\) & \(-0.88^{+0.32}_{-0.55}\) \\ \hline \(\log_{10}X_{\rm lan,total}\) & \(-12.99^{+8.82}_{-3.45}\) & \(-3.39^{+1.34}_{-1.75}\) & \(-0.97^{+0.23}_{-0.32}\) \\ \hline \end{tabular}
\end{table}
Table 4: Best fit parameters for multi-component fits to the GW170817 kilonova at 1.4, 2.4, and 3.4 days. The ejecta mass above the photosphere \(M_{\rm phot,1;2}\) (not the total ejecta mass), lanthanide mass fractions \(X_{\rm lan,1;2}\), and the total \(X_{\rm lan,total}\) are derived parameters.
We infer an \(\log_{10}(L_{\rm outer}/L_{\odot})=7.594^{+0.040}_{-0.061}\). In the case of single-shell, single-iteration TARDIS runs, we can use these values to determine an inner boundary temperature of \(T_{\rm inner}=3050^{+126}_{-223}\) K. As expected, the kilonova has dimmed and cooled since the previous epoch. The inferred velocities also match expectations: we obtain inner and outer boundary velocities \(v_{\rm inner}/c=0.249^{+0.017}_{-0.032}\) and \(v_{\rm outer}=0.342^{+0.047}_{-0.050}\). This \(v_{\rm outer}\) is consistent with the fixed \(v_{\rm outer}=0.35c\) from 1.4 days, while \(v_{\rm inner}\) (effectively the photospheric velocity) has receded into the ejecta, compared to \(0.31c\) at 1.4 days. This is evidence for the ejecta expanding, cooling, and become optically thinner, as expected.
Finally, we obtain \(Y_{e}=0.306^{+0.055}_{-0.204}\) and \(s/k_{\rm B}=17.6^{+7.1}_{-6.3}\). Similar to our results at 1.4 days, \(v_{\rm exp}\) is poorly constrained, while \(s\) is better-constrained. This \(s\) is also consistent with the previous epoch. The posterior distribution in all dimensions is wider at 2.4 days; in particular, \(Y_{e}\) exhibits a tail extending to smaller electron fractions. The posterior is nonetheless peaked at \(Y_{e}=0.306^{+0.055}_{-0.204}\), which is slightly lower than at 1.4 days, but consistent to within the uncertainties.
In our 2.4 day, multi-component SPARK run, we require \(m_{0}+m_{\rm active}=1500+1020\) points to obtain a good fit and converged posterior (see Figure 7). The resultant fit once again captures the broad shape and \(\sim\)8000 A absorption, but the depth of the \(\sim\)8000 A absorption feature is overestimated. The multi-component fit does, however, achieve a better fit to the continuum at wavelengths \(\lesssim\)7000 A.
In contrast with the 1.4 day, multi-component run, we see that the two inferred components are remarkably similar. The first component is described by \(Y_{e,1}=0.288^{+0.129}_{-0.187}\) and \(s_{1}/k_{\rm B}=21.1^{+10.5}_{-9.5}\), while the latter has \(Y_{e,2}=0.261^{+0.163}_{-0.124}\) and \(s_{2}/k_{\rm B}=22.0^{+10.0}_{-10.1}\). Again, expansion velocities are poorly constrained, while the better-constrained entropies are higher than and inconsistent with the single-component equivalent. Both entropies are consistent with the entropies inferred in the 1.4 day, multi-component model. Finally, \(Y_{e,1}\) and \(Y_{e,2}\) are both lower than, but still consistent with, the dominant blue component in the 1.4 day, multi-component model and the 1.4 day, single-component (purple+warm) model.
Our inferred geometry for the two components is shown in Figure 3. As expected, the photosphere recedes into the ejecta. The two components almost completely overlap in physical space; indeed, the mass contained in these two components is roughly equally split into \(M_{\rm phot,1}/M_{\odot}=2.3^{+12.5}_{-0.7}\times 10^{-5}\) and \(M_{\rm phot,2}/M_{\odot}=4.0^{+4.8}_{-0.9}\times 10^{-5}\), respectively. Furthermore, the similarity between (\(Y_{e,1},~{}v_{\rm exp,1},~{}s_{1}\)) and (\(Y_{e,2},~{}v_{\rm exp,2},~{}s_{2}\)) yields
Figure 2: **Best fit abundance patterns at 1.4, 2.4, and 3.4 days.** The abundances at 1.4 and 2.4 days are taken from the preferred single-component fits, while the abundance at 3.4 days is taken from the preferred multi-component fit (see Figure 1). The overall abundance of the multi-component ejecta at 3.4 days is the mass-weighted sum over all 10 shells in the stratified ejecta. A new, redder kilonova component emerges at 3.4 days post-merger. Uncertainty bands are obtained by taking additional samples from the posterior, effectively propagating the uncertainty on \(Y_{e}\), \(v_{\rm exp}\), and \(s\) into the abundances. We also show the Solar \(r\)-process pattern, computed using data from Lodders et al. (2009) subtracted by the \(s\)-process residuals of Bisterzo et al. (2014). The best fit abundances are evidently non-Solar at the first two epochs, but are closer to Solar at the third.
similar compositions for both components (Figure 13, Appendix B). Given the two components' similar compositions and equal contributions to the emergent spectrum, this multi-SPARK run has also converged to an effectively single-component model.
### Single-and multi-component modelling at 3.4 days
Our best single-component model for 3.4 days is in general a poor fit to AT2017gfo. Nonetheless, we review it here for completeness. As at 2.4 days, we require \(m_{0}+m_{\rm active}=1500+600\) points (see Figure 6) to obtain a converged posterior. The resultant posterior shows a high degree of multi-modality, most evident in the \(\rho_{0}\), \(Y_{e}\), and \(s/k_{\rm B}\) dimensions. Of the multiple modes, we obtain our best (but still poor) fit with a higher-\(\rho_{0}\), mid-\(Y_{e}\), lower-\(s\) mode. This fit produces some, but not all, of the dominant absorption feature at \(\sim\)8000 A. This fit also overestimates the absorption, or underestimates the continuum, for wavelengths \(\lesssim\)5500 A.
We find \(\log_{10}(\rho_{0}/{\rm g~{}cm}^{-3})=-14.586^{+0.313}_{-0.384}\). This density is larger than that inferred at earlier epochs. We also infer \(v_{\rm outer}=0.309^{+0.032}_{-0.023}\). This \(v_{\rm outer}\) is much smaller than the fixed value at 1.4 days or the inferred at 2.4 days. This may explain the insufficient absorption at \(\sim\)8000 A, if the photon packets are not interacting with enough matter to be absorbed in large numbers before escaping the ejecta. Finally, we find \(Y_{e}=0.226^{+0.062}_{-0.067}\) and \(s/k_{\rm B}=15.4^{+7.2}_{-3.1}\) at this epoch. \(Y_{e}\) is lower and inconsistent with both previous epochs. Interestingly, despite the lower \(Y_{e}\), the inferred abundance of Sr is within 1% of that of our single-component model at 1.4 days, and in fact \(\sim\)25% larger than that of our single-component model at 2.4 days. Nonetheless, Sr does not yield the expected prominent absorption feature at \(\sim\)8000 A. This suggests that the shortcomings of this model may not be due to the inferred abundance pattern, but rather the density, velocity structure, and/or some inherent shortcoming of the single-component model.
For our preferred, multi-component model, we require \(m_{0}+m_{\rm active}=1500+1140\) points to obtain a good fit and converged posterior at 3.4 days (see Figure 7). The resultant fit is better than the single-component equivalent. This fit better captures the \(\sim\)8000 A absorption, and the red wing of the nominal P Cygni feature. However, this fit also overestimates the absorption, or underestimates the continuum, for wavelengths \(\lesssim\)5500 A. This over/underestimation is more severe than in the single-component model.
Interestingly, we find \(\log_{10}(\rho_{0}/{\rm g~{}cm}^{-3})=-14.505^{+0.323}_{-0.372}\), which is higher than, and inconsistent with, the density inferred at 1.4 and 2.4 days. This
Figure 3: **Inferred physical ejecta configuration, and the composition-setting parameters, for the multi-component fits.**_Top to bottom:_ 1.4, 2.4, and 3.4 days. At 1.4 days, a thin red component containing \(\sim 1\%\) of the total mass is inferred. This component has a negligible impact on the emergent spectrum, indicating that this model is effectively single-component. At 2.4 days, two components with highly similar compositions completely overlap in space, also effectively indicating a single-component ejecta. At 3.4 days, two components with different compositions are present; the ejecta is multi-component. The 1.4 and 2.4 day fits are disfavored compared to the single-component equivalents, while the multi-component fit shown here is favored for 3.4 days.
inconsistency (also present in the disfavored single-component model at 3.4 days) may arise from the emergence of a new component, with a different density profile (either in functional form or in normalization), at this epoch.
Most significantly, we infer two components with substantially different properties. The first component is described by \(Y_{e,1}=0.228^{+0.073}_{-0.088}\) and \(s_{1}/k_{\rm B}=14.9^{+8.0}_{-4.5}\), while the latter has \(Y_{e,2}=0.161^{+0.149}_{-0.104}\) and \(s_{2}/k_{\rm B}=21.5^{+8.3}_{-9.5}\). Again, expansion velocities are poorly constrained. Both entropies are better-constrained and consistent with the entropies inferred in the single-component (and multi-component) models at 1.4 and 2.4 days. The \(Y_{e,1}\) and \(Y_{e,2}\) are both substantially lower than, and inconsistent with, those of the single- and multi-component fits at 1.4 and 2.4 days. This leads to an ejecta that is substantially more lanthanide-rich: The two components have \(\log_{10}X_{\rm lan,1}=-0.97^{+0.23}_{-0.32}\) and \(\log_{10}X_{\rm lan,2}=-0.88^{+0.32}_{-0.55}\), respectively. This yields a total lanthanide fraction of \(\log_{10}X_{\rm lan,total}=-0.97^{+0.23}_{-0.32}\) (the same as \(\log_{10}X_{\rm lan,1}\) after rounding). Figure 13 (Appendix B) shows the abundances of the two components, with uncertainties. Both components contain some substantial abundance of lanthanides, but the higher-\(Y_{e}\) of the two has \(\sim\)10\(\times\) more of the crucial element Sr.
In Figure 3, we show a physical picture of the inferred ejecta. The photosphere further recedes compared to 1.4 and 2.4 days. Both components mostly overlap in space and the mass is roughly equally split: they have masses \(M_{\rm phot,1}/M_{\odot}=5.3^{+1.8}_{-1.3}\times 10^{-4}\) and \(M_{\rm phot,2}/M_{\odot}=3.8^{+1.1}_{-0.9}\times 10^{-4}\), respectively. Given the roughly equal contributions from a component rich in lanthanides and another with a similar abundance of lanthanides but \(\sim\)10\(\times\) as much Sr, and the fact that the multi-component model is a substantially better fit than the single-component at 3.4 days, we interpret the ejecta as being genuinely two-component at this epoch. As we will see, the equal contribution from two components with distinct properties shapes the emergent spectrum.
### Favored models
Thus far, we have selected our favored models by visual inspection, seeing which produces a better fit to the observed spectrum at a given epoch. Here, we explore the physical consistency of our favored models.
We have found that the 1.4 and 2.4 day spectra are well-fit by a single bluer component. Even when allowed to yield multiple components, the multi-component SPARK runs yield effectively single-component best fits at 1.4 and 2.4 days. At 1.4 days, the bluer component contains \(\sim\)100\(\times\) the mass of the redder, and the presence of the red component is thus inconsequential. At 2.4 days, the two components are roughly equal in mass and characterized by similar abundance patterns; this fit is thus also effectively single-component.
In contrast, at 3.4 days, the two components in the multi-component fit have different abundance patterns. In particular, an additional lower-\(Y_{e}\) component rich in lanthanides is required to fit the observed spectrum. The lower-\(Y_{e}\) component contains as much as \(\sim\)10\(\times\) more lanthanides than the higher-\(Y_{e}\) component, given the broadness of the posterior. The remaining higher-\(Y_{e}\) component, however, contains \(\sim\)10\(\times\) as much Sr. Given our atomic data, line list, and (most importantly) abundance patterns from the reaction network calculations of Wanajo (2018), no single component is able to simultaneously yield the abundance of Sr and lanthanides needed to reproduce the observed 3.4 day spectrum of AT2017gfo. As we will see in the following sections, the role of the higher-\(Y_{e}\) component is to provide enough Sr to yield the \(\sim\)8000 A absorption, while the lower-\(Y_{e}\) yields most of the lanthanides needed to produce the short-wavelength \(\lesssim\)7500 A absorption.
Aside from asking whether a single- or multi-component ejecta is individually preferred at each
Figure 4: **Hypothetical models at 1.4, 2.4, and 3.4 days, where the higher-\(Y_{e}\) component of each multi-component fit has been replaced by that of the single-component, 1.4 days (purple + warm) model of V23**. Here, we take the best fit parameters from our multi-component runs (Table 4), and _after_ fitting, swap out the higher-\(Y_{e}\) component with the purple + warm (\(Y_{e}=0.311,\ \mathrm{t_{exp}}/c=0.240,\ s/k_{\rm B}=13.6\)). \(L_{\rm outer}\), \(\rho_{0}\), and the velocities are left unchanged, and thus the mass in each component is also unchanged. Comparing to the original multi-component models (dashed lines, Figure 1), the differences are negligible at all epochs.**
epoch, we also ask: does the _addition_ of a new component to the original, purple + warm ejecta at 1.4 days improve the fit? To test this, we replace the higher-\(Y_{e}\) components of the multi-component models at 1.4, 2.4, and 3.4 days with this purple + warm component. (These higher-\(Y_{e}\) components have \(Y_{e}=0.340^{+0.022}_{-0.021}\), \(0.288^{+0.129}_{-0.187}\), and \(0.228^{+0.073}_{-0.088}\) at 1.4, 2.4, and 3.4 days, respectively). Figure 4 shows the spectra which result from replacing these components with the parameters (\(Y_{e}=0.311,\ v_{\rm exp}/c=0.240,\ s/k_{\rm B}=13.6\)) at each epoch. At all epochs, there is minimal impact on the resultant spectrum and we retain a good fit. This is not surprising at 1.4 and 2.4 days, where the inferred \(Y_{e}\) is consistent with the purple + warm model. It is, however, surprising at 3.4 days. Crucially, replacing the higher-\(Y_{e}\) component at 3.4 days changes the overall abundance pattern, but the overall abundance of Sr remains unchanged due to the presence of substantial Sr in the purple + warm ejecta. This further suggests that the role of the higher-\(Y_{e}\) component at 3.4 days is primarily to provide enough Sr to produce the \(\sim\)8000 A absorption, with the lower-\(Y_{e}\) providing sufficient lanthanides.
Overall, we infer that a new, redder, lanthanide-rich component emerges at 3.4 days. This new red ejecta component has photospheric velocity \(0.21c\) and extends out to \(0.35c\), suggesting that this component was in fact present at earlier epochs, but was partially buried underneath the photosphere and/or out-shined by the brighter blue component.
## 4 Discussion
### Abundances of the favored models
Given our favored inferred models, and their best fit (\(Y_{e}\), \(v_{\rm exp}\), \(s\)), we study the inferred abundance pattern of the ejecta as a function of time. We favor the single-component models at 1.4 and 2.4 days, and multi-component model at 3.4 days. We show the abundance patterns of these favored models in Figure 2. The overall multi-component abundance pattern at 3.4 days is computed as the mass-weighted sum over all shells, including both components.
The 1.4 and 2.4 day abundance patterns are remarkably similar, but the 2.4 day model has greater uncertainties due to the broader overall posterior. These 2.4 day abundances show evidence for \(\sim 10^{-9}-10^{-6}\) mass fractions for some of the lighter lanthanides. However, as we see in the following section, lanthanides do not leave any clear imprint on the spectrum at 2.4 days. Overall, the 1.4 and 2.4 day spectra are well-described by a single component dominated by light \(r\)-process elements around the first \(r\)-process peak.
In contrast, 3.4 day abundance pattern indicates the clear presence of heavier elements and especially lanthanides in the ejecta. The total lanthanide fraction is \(\log_{10}X_{\rm lan,total}=-0.97^{+0.23}_{-0.32}\), several orders of magnitude larger than inferred at 1.4 and 2.4 days. The higher-\(Y_{e}\) component of this multi-component ejecta then provides Sr, the role of which is crucial, as we will see in the the following section. Indeed, we infer abundances of \(Y_{\rm Sr,1.4}=0.069^{+0.021}_{-0.010}\) and \(Y_{\rm Sr,2.4}=0.087^{+0.060}_{-0.020}\) at 1.4 and 2.4 days, and \(Y_{\rm Sr,3.4}=0.067^{+0.035}_{-0.002}\) (mass-weighted sum over all shells, including both components) at 3.4 days. These are remarkably consistent with each other.
The inferred abundance patterns at 1.4 and 2.4 days are markedly non-Solar in Figure 2, where the Solar \(r\)-process abundance pattern is taken using the data of Lodders et al. (2009) with the \(s\)-process residuals subtracted according to Bisterzo et al. (2014). If all NS-NS/BH mergers produced such blue ejecta, this would point to the inability of these mergers to yield the \(r\)-process abundances seen in several astrophysical settings. The 3.4 day abundance pattern, in contrast, is much closer to Solar up to \(Z\sim 80\). In particular, the presence of lanthanides improves this agreement.
Our abundance results are in broad agreement with inference on the light curves and spectral fitting of AT2017gfo (see Ji et al., 2019, and references therein). While light curve modelling typically uses grey (wavelength-independent) opacities, this modelling generally recovers the emergence of multiple components, including a higher-opacity redder component at \(\gtrsim\)3 days, as we find here. Our results for lanthanide fractions deviate somewhat from previous works. In particular, previous works find lanthanide fractions \(\log_{10}X_{\rm lan}\sim-6\) to \(\sim-4\) for a blue component, and \(\sim-2\) for a red component. We find a more lanthanide-poor bluer component, and a marginally more lanthanide-rich (\(\log_{10}X_{\rm lan}\sim-1\)) redder component. This is notable, as Ji et al. (2019) find that the \(\log_{10}X_{\rm lan}\sim-2\) inferred in previous works falls on the low end of the lanthanide fractions measured in metal-poor stars. If our higher inferred lanthanide fraction is correct, we relieve some of the tension between the lanthanide fractions of the AT2017gfo ejecta and metal-poor stars. However, we caution that we do not probe all of the ejecta, and in particular are not sensitive to the ejecta below the photosphere. This ejecta could be more or less lanthanide-rich, complicating these comparisons.
### Elements & ions present in the favored models
To assess the impact of different elements on the best fits, we generate leave-one-out spectra. In each, we it
eratively leave out a single element by setting its abundance to 0 and transferring that original abundance to a filler element. We use helium (He) as our filler element, since it should not have a marked impact on the emergent spectrum when the ejecta remains optically thick and local thermodynamic equilibrium (LTE) is a valid approximation (Perego et al., 2022; Tarumi et al., 2023).
In Figure 5, we show leave-one-out spectra for the new favored models at 2.4 and 3.4 days, and the 1.4 day model from V23. We see the clear imprint of Sr at \(\sim\)8000 A at all epochs. This is consistent with Watson et al. (2019); Gillanders et al. (2022); Domoto et al. (2021, 2022); Sneppen et al. (2023); Sneppen and Watson (2023), which all argue for the importance of Sr in the spectra. In our new model for 2.4 days, we also see some light evidence for absorption from an adjacent first \(r\)-process peak element, yttrium (\({}_{39}\)Y), at short wavelengths \(\lesssim\)4500 A. Considering our previous identification of Y at 1.4 days in V23, this identification at 2.4 days strengthens our previous claim of the importance of Y, and is in agreement with Gillanders et al. (2022). Interestingly, Y does not have the same pronounced impact at 3.4 days. This is in agreement with Sneppen and Watson (2023), which finds that absorption from Y is less prominent at 3.4 days, before a P Cygni feature emerges at \(\sim\)7600 A at 4.4 and 5.4 days. In our favored multi-component model, the absorption from the lanthanides is much stronger at these (and shorter) wavelengths.
At 3.4 days, we identify two new, heavier elements: barium (\({}_{56}\)Ba), an open \(s\)-shell element in the same periodic table group as Sr, and cerium (\({}_{58}\)Ce), a lanthanide. Ba is actually responsible for some of the _overestimated_ absorption at \(\lesssim\)5000 A. However, recall that we neglect wavelengths \(\leq\)6400 A in our computation of the likelihood at 3.4 days to obtain a fit which accurately captures the Sr absorption. Nonetheless, the omission of Ba improves the fit, suggesting that we overestimate the abundance of Ba in our best fit abundance pattern. The other new element, Ce, produces absorption at \(\sim\)7000 A which is reprocessed and emitted at \(\sim\)8000 A, and its omission worsens the fit. Ce also introduces some absorption at \(\sim\)12,000 A. Interestingly, this absorption may originate from astrophysically-measured Ce II lines with rest wavelengths in the range of \(\sim\)15,200 \(-\) 16,700 A (Cunha et al., 2017; Majewski et al., 2017), blueshifted due to the expansion of the ejecta. Domoto et al. (2021) similarly noted that these Ce II lines may be prominent in kilonova spectra. The observation of Ce is also consistent with the kilonova parameter space clustering analysis of Ford et al. (2023), which finds that Ce II may broadly be an important ion in kilonova spec
Figure 5: **Leave-one-out spectra for the favored models: single-component for 1.4 (_top_) and 2.4 (_center_) days, and multi-component for 3.4 days (_bottom_). All models show clear absorption from strontium (\({}_{38}\)Sr) at \(\sim\)8000 Å. At 3.4 days, when the ejecta is richer in heavier elements, we also see absorption (at \(\sim\)7000 Å and \(\sim\)12,000 Å) and emission (at \(\sim\)8000 Å) from the lanthanide cerium (\({}_{58}\)Ce). However, we also see over-absorption from barium (\({}_{56}\)Ba) at \(\sim\)4500Å. Spectral DEComposition (SDEC) plots in Figure 14 (Appendix C), provide a complementary view of the dominant species.**
tra from 1.4 to 3.4 days. Ce iii may also be important at later epochs (Gillanders et al., 2023).
We complement these leave-one-out analyses using the Spectral DEComposition (SDEC) tool of TARDIS. SDEC allows us to measure which elements or ions absorb and/or emit the greatest luminosity during a given TARDIS run. All SDEC plots, for favored and disfavored models, are compiled in Figure 14 (Appendix C). In both models at 1.4 days, we see that the absorption of photon packets is dominated by just three singly-ionized species: Sr ii, Y ii, and Zr ii. Indeed, 98.9% of all absorbed luminosity is absorbed by just these three species in the favored single-component model. These ions remain important at 2.4 days, though the impact of Zr ii is less clear. We also see some minor absorption from Ba ii at 2.4 days, at the shortest wavelengths.
At 3.4 days, while Sr ii remains important, the SDEC plots reveal the presence of several new heavier ions. In the favored multi-component model, we see clearer evidence for Ba ii, with the caveat that the abundance of Ba is likely overestimated. More interesting, we see absorption from four singly-ionized lanthanides: Ce ii, Nd ii, Sm ii, and Eu ii (cerium, \({}_{58}\)Ce; neodymium, \({}_{60}\)Nd; samarium, \({}_{62}\)Sm; europium, \({}_{63}\)Eu). The absorption from Ce is strongest. Altogether, Ce ii is responsible for 27.5% of the absorbed luminosity at this epoch, compared to 20.4% from Nd ii + Sm ii + Eu ii, and 30.3% from Sr ii. This is a clear indication of the presence of lanthanides in the ejecta. Interestingly, Eu is a "pure" \(r\)-process element, in that it is produced in negligible quantities by the \(s\)-process (_e.g._, Bisterzo et al., 2014). The presence of Eu is thus further proof for the operation of the \(r\)-process in NS-NS merger ejecta. Finally, at 3.4 days, we also see some light (5.2% of the total) absorption from an ion of bismuth (\({}_{83}\)Bi), Bi ii. This is the heaviest element detected in any of our models, but we caution that its detection is marginal (we measure \(Y_{\rm Bi,3.4}=7.84^{+24.6}_{-7.43}\times 10^{-6}\)), and it does not leave a clear imprint on the leave-one-out spectra.
Considering both the leave-one-out and SDEC analyses, we confidently identify Sr at all epochs. We solidify our identification of Y at 1.4 and 2.4 days. Ba may also be present in the ejecta, but its abundance is likely overestimated at 3.4 days, complicating this claim. Finally, at 3.4 days, we infer the presence of singly-ionized species of the lanthanides Ce, Nd, Sm, and Eu, with the detection of Ce being most concrete. This ensemble of lanthanides, which was not present at 1.4 and 2.4 days, emerges at 3.4 days to significantly shape the emergent spectrum.
### Physical origin of the ejecta components
Inferring the parameters of the ejecta component(s) also allows us to map these components to different ejection mechanisms in the NS-NS merger which generated the kilonova. Given the inferred \(Y_{e}\) and \(s\) (we neglect \(v_{\rm exp}\), since it is poorly constrained), the ejecta at 1.4 and 2.4 days can be attributed to matter that has undergone strong neutrino reprocessing. This ejecta can arise from the neutrino-reprocessed wind of a hypermassive neutron star remnant plus disk, either magnetized (_e.g._, Combi & Siegel, 2023; Curtis et al., 2023) or unmagnetized (_e.g._, Fujibayashi et al., 2023; Just et al., 2023). Alternatively, accretion disks around black holes--evolved in MHD--also yield ejecta with a broad distribution of electron fractions (_e.g._, Siegel & Metzger, 2018; Fernandez et al., 2019; Christie et al., 2019; Miller et al., 2019; Just et al., 2022; Fahlman & Fernandez, 2022; Hayashi et al., 2023; Curtis et al., 2023). An outflow from such an accretion disk could in principle provide both the ejecta at 1.4 and 2.4 days, and, the neutron-rich component which emerges at 3.4 days.
The lanthanide-bearing component at 3.4 days (\(Y_{e,2}=0.161^{+0.149}_{-0.104}\), \(s_{2}/k_{\rm B}=21.5^{+8.3}_{-9.5}\)) can also be accounted for by dynamical ejecta. Numerical relativity simulations that include neutrino absorption typically find \(Y_{e}\sim 0.15-0.25\) and \(s/k_{\rm B}\sim 20\)(_e.g._, Zappa et al., 2023), consistent with our inferred parameters. The slightly higher-\(Y_{e}\) component at 3.4 days (\(Y_{e,1}=0.228^{+0.073}_{-0.088}\), \(s_{1}/k_{\rm B}=14.9^{+8.0}_{-4.5}\)) is also consistent with such dynamical ejecta. However, the dynamical ejecta alone is unlikely to account for most of the mass generating the kilonova, as numerical relativity simulations that employ parameters consistent with GW170817 produce \(\lesssim 10^{-2}M_{\odot}\) in this dynamical component (_e.g._, Shibata et al., 2017; Most et al., 2019; Nedora et al., 2021). Our inferred masses \(M_{\rm phot,1}\sim M_{\rm phot,2}\sim 10^{-4}M_{\odot}\) are consistent with this bound, but recall that we are only sensitive to the mass above the photosphere. Moreover, light curve modelling has found that more mass is contained in the red than the blue component, which is challenging to accomplish if our red component is indeed dynamical ejecta. Modelling the later spectra of AT2017gfo, when we are sensitive to more of the ejecta mass, will be important for understanding whether this new, redder, lanthanide-rich component is more consistent with dynamical ejecta or an outflow from an accretion disk.
## 5 Conclusions
We fit single- and multi-component ejecta models to the spectra of the GW170817 kilonova, AT2017gfo, during the early, optically thick phase at 1.4, 2.4, and 3.4 days post-merger. With these fits, we infer the element-by-element abundance patterns at each of these epochs.
We find that a single-component model is favored at 1.4 and 2.4 days, while a multi-component model is favored at 3.4 days.
This single component at 1.4 and 2.4 days is characterized by a high electron fraction \(Y_{e}\sim 0.3\) and moderate specific entropy \(s/k_{\rm B}\sim 13-18\), yielding an ejecta dominated by lighter \(r\)-process elements and a blue kilonova. This ejecta is consistent with material which has undergone substantial neutrino reprocessing, _e.g._, winds from a remnant hypermassive neutron star and/or an accretion disk. The multi-component ejecta at 3.4 days contains a higher \(Y_{e}\sim 0.23\) component and lower \(Y_{e}\sim 0.16\) component, with entropies in the range \(s/k_{\rm B}\sim 15-22\). These new components contain heavier elements, and especially the lanthanides, yielding a redder kilonova. The most substantial contribution of lanthanides comes from the lower-\(Y_{e}\) component, which is consistent with either dynamical ejecta or a neutron-rich outflow from a remnant accretion disk.
The emergence of a new red component at 3.4 days is broadly in agreement with modelling of the light curves of AT2017gfo. Physically, we infer that the photosphere recedes into the ejecta over time: from \(0.31c\) at 1.4 days, to \(0.25c\) at 2.4 days, to \(0.21c\) at 3.4 days. This recession of the photosphere, and the dimming of the earlier, blue component, reveals this new red component that was not inferred at earlier epochs.
Using both leave-one-out and Spectral DEComposition analyses, we assess the contributions of individual elements to the emergent spectra. We find that strontium Sr produces the \(\sim\)8000 A absorption at each epoch. We also strengthen our identification of yttrium Y at short wavelengths \(\lesssim\)4500 A at 1.4 and 2.4 days. At 3.4 days, this absorption from Y is less clear, as absorption from an ensemble of lanthanides (cerium Ce, neodymium Nd, samarium Sm, and europium Eu) dominates the absorption at these short wavelengths. The identification of Ce is most concrete, producing absorption at \(\sim\)7000 A and \(\sim\)12,000 A.
The abundance patterns at 1.4 and 2.4 days show a dearth of lanthanides and heavier elements which makes these inconsistent with the Solar \(r\)-process abundance pattern. However, at 3.4 days, the emergence of ejecta with lanthanide fraction \(X_{\rm lan}\sim-1\) substantially improves the agreement between the inferred abundance pattern and the Solar \(r\)-process, and the distribution of lanthanide fractions measured in metal-poor stars. The better agreement between the Solar \(r\)-process/metal-poor stars and inferred abundance pattern at 3.4 days, and the possible presence of multiple components (remnant/disk wind and dynamical ejecta), lends more credence to the ability of NS-NS/BH mergers to dominate the \(r\)-process in the Universe.
While we have fit spectra of AT2017gfo at 1.4, 2.4, and 3.4 days, the observed spectra of AT2017gfo extend to 10.4 days. At later times, the ejecta is expected to leave the photospheric phase and enter the optically-thin nebular phase, when non-LTE effects become non-negligible. Given the modular nature of SPARK, we could swap out TARDIS for a code specifically suited to non-LTE radiative transfer, or, use the non-LTE functionality already available in TARDIS. Our Bayesian inference framework would also enable quantitative model comparison between models with and without non-LTE effects at epochs where their relative importance is not yet understood (_e.g._, 4.4 and 5.4 days). Fitting later epochs will also be crucial for inferring the abundances and masses of all of the ejecta components, including those hidden below the photosphere at 3.4 days. Beyond the later epochs of AT2017gfo, SPARK can perform inference on the growing zoo of kilonova yielded by NS-NS/BH binaries with different parameters, allowing us to understand the connections between binary parameters, ejection mechanisms, elemental compositions, and fundamental \(r\)-process conditions in the ejecta.
## Acknowledgments
NV works in Tihotia:ke / Mooniyang, also known as Montreal, which lies on the unceded land of the Haudenosaunee and Anishinaabeg nations. This work made use of high-performance computing resources in Tihotia:ke / Mooniyang and in Burnaby, British Columbia, the unceded land of the Coast Salish peoples, including the Tseil-Waututh, Kwikwetlem, Squamish, and Musqueam nations. We acknowledge the ongoing struggles of Indigenous peoples on this land, and elsewhere on Turtle Island, and hope for a future marked by true reconciliation.
We thank the attendees of an April 2023 workshop at University of California--Santa Cruz, for fruitful discussions on kilonovae and the \(r\)-process. We are also grateful to Shinya Wanajo for kindly sharing their reaction network calculations. Finally, we thank Jessica Birky and David Fleming for useful discussions on approximate Bayesian inference and the use of approxposterior.
This work made extensive use of the Narval and Cedar clusters of the Digital Research Alliance of Canada at the Ecole de technologie superieure and Simon Fraser University (with regional partner WestGrid), respectively. We thank the support staff of Calcul
Quebec in particular for their assistance at various steps in this project.
This work made use of the Vienna Atomic Line Database (VALD), operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. We thank Nikolai Piskunov and Eric Stempels for help in obtaining the VALD data.
This research also made use of TARDIS, a community-developed software package for spectral synthesis in supernovae (Kerzendorf and Sim, 2014). The development of TARDIS received support from the Google Summer of Code initiative and from the European Space Agency (ESA)'s Summer of Code in Space program. TARDIS is a fiscally sponsored project of NumFOCUS. TARDIS makes extensive use of astropy and PyNE. We thank Andrew Fullard, Wolfgang Kerzendorf, and the entire TARDIS development team for their assistance and their commitment to the development and maintenance of the code.
N.V. acknowledges funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) Canada Graduate Scholarship - Doctoral (CGS-D), the Murata Family Fellowship, and the the Bob Wares Science Innovation Prospercors Fund. J.J.R. and D.H. acknowledge support from the Canada Research Chairs (CRC) program, the NSERC Discovery Grant program, the FRQNT Nouveaux Chercheurs Grant program, and the Canadian Institute for Advanced Research (CIFAR). J.J.R. acknowledges funding from the Canada Foundation for Innovation (CFI), and the Quebec Ministere de l'Economie et de l'Innovation. N.M.F. acknowledges funding from the Fondes de Recherche Nature et Technologies (FRQNT) Doctoral research scholarship number 328732. M.R.D. acknowledges support from the NSERC through grant RGPIN-2019-06186, the Canada Research Chairs Program, and the Dunlap Institute at the University of Toronto. R.F. acknowledges support from NSERC of Canada through Discovery Grant RGPIN-2022-03463.
approx posterior: Fleming and VanderPlas (2018); astropy: Astropy Collaboration et al. (2018); cmasher: van der Velden (2020); corner: Foreman-Mackey (2016); dynesty: Speagle (2020); george: Ambikasaran et al. (2015); TARDIS: Kerzendorf and Sim (2014); Kerzendorf et al. (2023); UltraNest: Buchner (2021) |
2309.12801 | The role of electron capture decay in the precision era of Galactic
cosmic-ray data | Electron capture (EC) decay relies on attachment and stripping
cross-sections, that in turn, depend on the atomic number of the nucleus. We
revisit the impact of EC decay in the context of the high-precision cosmic-ray
fluxes measured by the AMS-02 experiment. We derive the solution of the
steady-state fluxes in a 1D thin disk model including EC decay. We compare our
results with relevant elemental and isotopic fluxes and evaluate the impact of
this process, given the precision of recent AMS-02, ACE-CRIS, SuperTIGER, and
Voyager data. We find this impact to be at the level or larger than the
precision of recently collected data for several species, e.g. $_{31}$Ga and
$_{33}$As, indicating that EC decay must be properly taken into account in the
calculation. | M. Borchiellini, D. Maurin, M. Vecchi | 2023-09-22T11:29:47Z | http://arxiv.org/abs/2309.12801v1 | # The role of electron capture decay in the precision era of Galactic cosmic-ray data
###### Abstract:
Electron capture (EC) decay relies on attachment and stripping cross-sections, that in turn, depend on the atomic number of the nucleus. We revisit the impact of EC decay in the context of the high-precision cosmic-ray fluxes measured by the AMS-02 experiment. We derive the solution of the steady-state fluxes in a 1D thin disk model including EC decay. We compare our results with relevant elemental and isotopic fluxes and evaluate the impact of this process, given the precision of recent AMS-02, ACE-CRIS, SuperTIGER, and Voyager data. We find this impact to be at the level or larger than the precision of recently collected data for several species, e.g. \({}_{31}\)Ga and \({}_{33}\)As, indicating that EC decay must be properly taken into account in the calculation.
Introduction
The study of Galactic Cosmic Rays (GCRs) can provide information not only on their propagation and the properties of their sources but also on new physics phenomena in the Universe (e.g. dark matter). Direct measurements currently provide high-precision data on GCR fluxes and the isotopic composition of heavy elements: AMS-02 measured GCR top-of-atmosphere (TOA) fluxes from H up to Si and Fe, at \(\sim 2\) GV \(-\) 2 TV, with unprecedented precision [1], SuperTIGER released TOA elemental ratios at 3.1 GeV/n for \(26\leq Z\leq 40\)[2], whereas Voyager published interstellar (IS) fluxes at \(\sim 50-200\) MeV/n for H to Ni [3]; ACE-CRIS also recently extended the measurements of the TOA isotopic composition at a few hundred of MeV/n for elements \(29<Z<38\)[4].
For this reason, it becomes increasingly important to model the processes that contribute to GCR transport as accurately as possible to obtain models for GCR fluxes precise enough to compare them with the available data and search for new (astro)physics phenomena. One process that has not been often discussed in the literature is electron capture (EC) decay, which has been interpreted in the context of the leaky-box model in [5]. It consists of the decay of a nuclide after the capture of a K-shell electron. Hence it does not occur freely in the Interstellar Medium (ISM), since GCRs are usually fully ionised. This implies that the effectiveness of EC decay depends heavily on the cross sections for attachment and stripping of electrons for the different GCR nuclei, hence on their atomic numbers, but also on their decay time which ranges from ms to Myr. In particular, a higher impact of EC decay is expected for heavy GCRs [6].
This work aims to assess the impact of EC decay in the context of the high-precision GCR elemental fluxes measured by the ACE-CRIS, AMS-02, SuperTIGER, and Voyager experiment, and the isotopic ratios measured by ACE-CRIS. In Sect. 2, we discuss the general framework for GCR transport and the methods used in this analysis. In Sect. 3, we present our results, while in Sect. 4, we summarise our findings.
## 2 Methodology
The transport of GCRs is described by a diffusion-advection equation which has been extensively discussed in [7]. The differential density \(n_{\alpha}\) of a GCR species \(\alpha\) is given by
\[\begin{split}-\vec{\nabla}_{\mathbf{x}}\,\left\{D(E)\vec{\nabla} _{\mathbf{x}}n_{\alpha}-\vec{V}_{c}n_{\alpha}\right\}+&\frac{ \partial}{\partial E}\,\left\{b_{\text{tot}}(E)n_{\alpha}-\beta^{2}K_{PP}\, \frac{\partial n_{\alpha}}{\partial E}\right\}+\sigma_{\alpha}^{\text{inel}} \,v_{\alpha}\,n_{\text{ISM}}\,n_{\alpha}\,+\Gamma_{\alpha}\,n_{\alpha}\\ &=q_{\alpha}\,+\sum_{\beta>\,\alpha}\,\left\{\sigma_{\beta\to \alpha}\,v_{\beta}\,n_{\text{ISM}}\,+\Gamma_{\beta\to\alpha}\right\}\,n_{\beta }\,,\end{split} \tag{1}\]
where the source term (right-hand side of the equation) is given by a primary injection rate \(q_{\alpha}\), and a secondary injection rate from inelastic interactions of heavier species \(\beta\) on the ISM (production cross-section \(\sigma_{\beta\to\alpha}\)) or from nuclear decay (rate \(\Gamma_{\beta\to\alpha}\)). The other terms are, respectively, from left to right: the diffusion coefficient \(D\) describing the scattering of CRs off magnetic turbulence, which depends on the rigidity \(R=pc/Ze\); the galactic wind \(V_{c}\); the rate for energy losses \(b_{\text{tot}}(E)\equiv dE/dt\) that includes ionisation and Coulomb processes as well as adiabatic losses induced by convection; the energy-dependent coefficient \(K_{PP}\) used to model reacceleration; the rate of inelastic interactions on gas \(\sigma_{\alpha}^{\text{inel}}\,v_{\alpha}\,n_{\text{ISM}}\) and the nuclear decay rate \(\Gamma_{\alpha}\).
In this work, we incorporate in Eq. (1) electron capture nuclides by treating the different charge states separately, following [6]. In this preliminary analysis, to study the interplay between the different processes analytically, we neglect energy losses, and we also neglect convection for simplicity. We perform our calculations in the two-zone (thin disk/thick halo) 1D propagation model, as used, for instance, in [7], in which GCR fluxes only depend on the vertical coordinate \(z\). The ISM gas (with density \(n_{\rm ISM}\)) and astrophysical sources are localised in a thin disc of half-height \(h=0.1\) kpc, and the thin disc is embedded in a thick halo, where GCR are confined, and they diffuse by scattering on magnetic fields irregularities. The halo is modelled as a slab in the radial direction with half-height \(L=5\) kpc, and the observer is located at \(z=0\). For practical calculations, we model the diffusion coefficient as a power law with breaks both at low and high rigidities [7], and the parameter values are taken from the combined analysis [8] of AMS-02 Li/C, Be/C, and B/C data.
In this geometry and with the above approximations, the steady-state transport equation for an EC-unstable species takes the form of the following system of two coupled equations:
\[\begin{cases}-D(E)\,\partial_{z}^{2}n_{0}+2h\delta(z)\left\{\Gamma^{\rm inel}n _{0}+\Gamma^{a}n_{0}-\Gamma^{\rm strip}n_{1}\right\}=2h\delta(z)q\;;\\ -D(E)\,\partial_{z}^{2}n_{1}+2h\delta(z)\left\{\Gamma^{\rm inel}n_{1}+\Gamma^ {\rm strip}n_{1}-\Gamma^{\rm att}n_{0}\right\}+\Gamma^{\rm EC}n_{1}=0\,.\end{cases} \tag{2}\]
These two equations describe the spatial and energy evolution of the differential density of the fully ionised GCR (\(n_{0}\)) and the same GCR with one electron attached (\(n_{1}\)). The transition from one state to another is described by the electron stripping and attachment rates, denoted \(\Gamma^{\rm strip}=n_{\rm ISM}\,v\,\sigma_{\rm strip}\) and \(\Gamma^{\rm att}=n_{\rm ISM}\,v\,\sigma_{\rm att}\) respectively; we take the cross-section parametrisations \(\sigma_{\rm att}\) and \(\sigma_{\rm strip}\) from [6]. We assume here that higher charge states are almost not populated [6], and in order to have a close system (constant number density of the species considered), we do not allow \(n_{1}\) to attach electrons. As EC-unstable species decay by capturing a K-shell electron, the EC decay rate, \(\Gamma^{\rm EC}=1/(\tau_{\rm EC}\,\gamma)\), is implemented for \(n_{1}\) only.
\begin{table}
\begin{tabular}{l l l} Process & Timescale & Dependencies \\ \hline Diffusion & \(t_{D}=\frac{L^{2}}{2D}\) & \(D\propto E^{0.5}\) \\ Inelastic scattering & \(t_{\rm inel}=\frac{1}{n_{\rm ISM}\,v\,\sigma_{\rm inel}}\) & \(\sigma_{\rm inel}\propto A^{2/3}\) \\ Attachment & \(t_{\rm att}=\frac{1}{n_{\rm ISM}\,v\,\sigma_{\rm att}}\) & \(\sigma_{\rm att}\propto\sigma(E)Z^{2}\) \\ Stripping & \(t_{\rm strip}=\frac{1}{n_{\rm ISM}\,v\,\sigma_{\rm strip}}\) & \(\sigma_{\rm strip}\propto\sigma(E)Z^{-2}\) \\ EC decay & \(t_{\rm EC}=\tau_{\rm EC}\,\gamma\) & \(t_{\rm EC}\propto E\) \\ \hline \end{tabular}
\end{table}
Table 1: The five competing processes that have been taken into account to model GCR propagation, with their corresponding timescales and dependencies.
## 3 Results
### Timescales
Before showing the solutions of Eq. (2), it is interesting to discuss the timescales in our 1D model since their interplay affects the final isotopic and elemental fluxes. Five propagation processes have been considered, as reported in Table 1, where we highlight the main dependencies on energy or atomic number. The associated timescales are shown in Fig. 1 as a function of the kinetic energy per nucleon, for \({}^{7}_{4}\)Be (left panel) and \({}^{205}_{82}\)Pb (right panel); these two species are representative of light and heavy EC decaying nuclides respectively.
First, we recover the standard result (e.g. [9]) that diffusion dominates at high energy (smallest timescale), while inelastic scattering is relevant mostly at low energies, especially for heavy nuclei. Then, for EC decay to dominate over the other propagation processes, a first condition is that \(t_{\rm EC}\) (orange dotted lines) has to be lower than \(t_{D}\) (blue solid lines) and \(t_{\rm inf}\) (green dashed lines), which is always more likely to happen at low energy, as \(t_{\rm EC}\propto E\), while \(t_{D}\propto 1/\sqrt{E}\) and \(t_{\rm inf}\) is roughly constant. However, the net effect of EC decay also relies on the interplay between attachment (magenta dash-dotted lines) and stripping processes (red dash-dotted lines), which depend on the kinetic energy and the atomic number through their cross-sections: as seen from Fig. 1, attachment only overcomes stripping for low energy and heavy nuclei, as \(t_{\rm att}\propto Z^{2}\) while \(t_{\rm strip}\propto Z^{-2}\). As a result, the impact of EC decay will depend on the specific ordering of these three times, and a species will disappear via EC decay only if both \(t_{\rm att}\lesssim t_{D}\) and \(t_{\rm EC}\lesssim(t_{D},t_{\rm strip})\).
Figure 1: Timescales for the processes listed in Table 1 as a function of kinetic energy per nucleon, for the species \({}^{7}_{4}\)Be with \(t_{1/2}=1.46\ 10^{-7}\) Myr (left panel) and \({}^{205}_{82}\)Pb with \(t_{1/2}=1.4\ 10^{7}\) Myr (right panel).
### Impact of EC decay on isotopic and elemental fluxes
We solve the coupled system of Eq. (2) following [10], and we obtain for the differential density at \(z=0\):
\[\begin{cases}n_{1}=\frac{\Gamma^{\rm att}n_{0}}{\sqrt{\frac{D(E)\,\Gamma^{\rm EC }}{h^{2}}\,\coth\left(\sqrt{\frac{\Gamma^{EC}}{D(E)}L}\right)+\Gamma^{\rm inel} +\Gamma^{\rm strip}}}\\ n_{0}=\frac{q}{\frac{D(E)}{hL}+\Gamma^{\rm inel}+\Gamma^{\rm att}-\Gamma^{\rm strip }\Gamma^{\rm att}\left[\sqrt{\frac{D(E)\,\Gamma^{\rm EC}}{h^{2}}\,\coth\left( \sqrt{\frac{\Gamma^{\rm EC}}{D(E)}L}\right)+\Gamma^{\rm inel}+\Gamma^{\rm strip }}\right]^{-1}}\.\end{cases} \tag{3}\]
Since the balance between attachment and stripping plays such a critical role in the effectiveness of EC decay, in Fig. 2 we examine, disregarding EC decay (i.e. considering \(\tau_{\rm EC}\rightarrow\infty\) in Eq. 3), the fraction of GCRs that do not attach an electron (\(n_{0}\)) with respect to the total number density (\(n_{0}+n_{1}\)), for growing elements \(Z\) (from thin to thick lines). The above conclusions from the study of characteristic timescales can explain the trend shown by the different lines: no electrons are attached above a few GeV/n, coherently with a scenario in which \(t_{\rm att}\gg t_{D}\); secondly, heavier GCRs attach more electrons than light GCRs due to the interplay between stripping and attachment cross sections. Overall, the fraction of attached electrons is at most \(\gtrsim 0.5\) for \(Z\leq 40\) at \(E_{k/n}\sim 10\) MeV/n.
Taking into account the half-life of EC-unstable species, we can now evaluate the impact of EC decay on the relevant isotopes and associated elements. We selected a subset of species \(Z\leq 40\) with both short and intermediate half-lives, which are listed in Table 2. These values have been used to compute the final isotopic and elemental abundances and derive the results presented in Fig. 3. The top panel of Fig. 3 shows the percentage of GCR isotopes that decay by EC.
Unsurprisingly, EC
Figure 2: Fraction \(n_{0}/(n_{0}+n_{1})\) of GCRs that do not attach an electron, where \(n_{0}\) and \(n_{1}\) are defined in Eq. (3). Different values of \(Z\) are shown as different shades of blue and line thicknesses.
decay has no impact on isotopic fluxes above a few GeV/n per nucleon. At lower \(E_{k/n}\), there is almost no visible effect for intermediate-lived isotopes (orange dashed lines), while short-lived GCRs (solid blue lines) exhibit different behaviours depending on their atomic number. In particular, the heavier nuclei (\({}^{67}_{31}\)Ga and \({}^{73}_{33}\)As) decay almost completely below 100 MeV/n.
It is interesting to compare the impact of EC decay (observed in our simplified model) to the precision of recent data -- we recall that the flux \(J\) is related to the differential density \(n\) by \(J=vn/(4\pi)\), so that the relative differences (considered below) on \(n\) and \(J\) are one and the same. Experimentally, light nuclei are more abundant than heavier ones, with a strong suppression of elements heavier than Fe. For this reason, light isotopes have been measured with better precision. However, light EC-unstable isotopes are rare, and because of the large attachment time for light nuclei, the abundance of \({}^{7}_{4}\)Be (thinnest blue line in the top panel of Fig. 3) does not show any change with respect to a model without EC decay. Abundances for GCR isotopes in the range \(Z=15-40\) have been measured by the ACE-CRIS experiment [4, 12]. Their precision is dominated by statistical uncertainties and strongly isotope-dependent: at a few hundreds of MeV/n it has a typical value \(\lesssim 10\%\) for \(Z=15-30\), reaching a precision \(\lesssim 50\%\) for \(Z=30-40\). We predict the impact of EC decay on \({}^{37}_{18}\)Ar flux to be \(\geq 10\%\) for \(E_{k/n}\lesssim 400\) MeV/n, which is higher than ACE-CRIS precision for the same isotope and energy range. On the other hand, the precision for ACE-CRIS on \({}^{67}_{31}\)Ga and \({}^{73}_{33}\)As for at a few MeV/n is \(\sim 50\%\), of the order of the impact of EC decay on the modelled fluxes at the same energies (in practice, Solar modulation shifts data TOA energy towards higher IS ones, i.e. energies with even smaller EC impact in our IS calculations).
The impact of EC-decay on elemental fluxes is shown in the bottom panel of Fig. 3. This impact is calculated by assuming EC-unstable species constitute a fraction of the elemental flux. In practice, we set this isotopic fraction to a constant value with energy based on the GCR measured one; the associated numbers are reported in Table 1. The impact of EC-decay is thus diluted in the elemental fluxes, but the latter are easier to measure than isotopic ones due to intrinsic experimental challenges in isotopic separation. AMS-02 has already published the elemental flux of all species from H to S and Fe, and the flux of He isotopes, with a precision reaching at best a few percent [1], for energies typically \(\gtrsim 500\) MeV/n. At these energies, the impact on \({}_{18}\)Ar is slightly larger than the
\begin{table}
\begin{tabular}{c c c} \hline \hline Isotope & \(t_{1/2}\) (Myr) & Isotopic fraction \\ \hline \({}^{7}_{4}\)Be & \(1.46\ 10^{-7}\) & 0.55 \\ \({}^{37}_{18}\)Ar & \(9.58\ 10^{-8}\) & 0.30 \\ \({}^{41}_{20}\)Ca & \(1.00\ 10^{-1}\) & 0.07 \\ \({}^{44}_{22}\)Ti & \(4.70\ 10^{-5}\) & 0.04 \\ \({}^{53}_{25}\)Mn & \(3.70\) & 0.35 \\ \({}^{67}_{31}\)Ga & \(8.93\ 10^{-9}\) & 0.07 \\ \({}^{73}_{33}\)As & \(2.20\ 10^{-7}\) & 0.36 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sample of EC-unstable GCRs \(Z\leq 40\), with their EC half lives [6] and GCR isotopic fractions at low energy (from data extracted from the CR Data Base [11]).
expected AMS-02 data precision for its flux. The precision for \({}_{22}\)Ti Voyager data is \(\sim 20\%\) between 100 and 200 MeV/n, of the order of the impact of EC decay on the same elemental flux in that energy range. The impact of EC decay on \({}_{31}\)Ga and \({}_{33}\)As fluxes for \(E_{k/n}\sim 100\) MeV/n correspond to the value of the whole isotopic fraction (\(\sim 7\%\) and \(\sim 36\%\), respectively), since the corresponding EC-unstable isotopes fully decay at low energies per nucleon. SuperTIGER precision, on the other hand, is \(\sim 16\%\) for \({}_{31}\)Ga and \(\sim 18\%\) for \({}_{33}\)As at \(E_{k/n}\) above 700 MeV/n, where the impact of EC decay is already strongly suppressed. The precision of ACE-CRIS measurements for \({}_{31}\)Ga and \({}_{33}\)As abundances at a few hundreds of MeV/n is at most of the order of the impact of EC decay on the same elemental fluxes, with values of \(\sim 7\%\) and \(\sim 28\%\) for \({}_{31}\)Ga and \({}_{33}\)As respectively.
Figure 3: Decaying fraction of a sample of EC-unstable species in GCR isotopic fluxes (top) and associated elemental fluxes (bottom) computed as \((n^{\rm EC}-n^{\rm noEC})/n^{\rm noEC}\), where \(n=n_{0}+n_{1}\) from Eq. (3) with \((n^{\rm EC})\) and without \((n^{\rm noEC})\) the EC decay term. In the bottom panel, the abundances of the different isotopes have been weighted by their isotopic GCR fractions. The shaded areas correspond to the precision (right-hand side axis) and energy range of recent experimental data.
## 4 Conclusions
In the context of recent high-precision data, we have revisited the impact of EC-decay on GCR fluxes. In a 1D diffusion model with parameters tuned to recent secondary-to-primary data, we found that EC decay impacts isotopic fluxes at most at the level of \(\lesssim 50\%\) at a few hundreds of MeV/n, and \(\lesssim 20\%\) for elemental fluxes in the same energy range.
These numbers are of the order of ACE-CRIS precision for isotopic fluxes and slightly larger than AMS-02 precision for elemental fluxes. The impact of EC decay at very low energies is of the same order of Voyager precision for \({}_{22}\)Ti flux, while at energies higher than 700 MeV/n it is lower than SuperTIGER precision for elemental abundances. Overall, this shows that this effect has to be taken properly into account in the calculation.
The analysis presented here will be improved in several directions. First, the analytical solution can be further exploited to assess whether the attachment of several electrons needs to be taken into account. In particular, as \(Z\gtrsim 30\) data have so far been interpreted in a leaky-box model only, it is important to compare the impact of EC decay in the leaky-box and in a more realistic diffusion model. To do so, energy losses, Solar modulation, and the full source terms and fragmentation terms need to be accounted for, and we are implementing species \(Z>30\) in the USINE code [13].
|
2310.20708 | Unexpected Improvements to Expected Improvement for Bayesian
Optimization | Expected Improvement (EI) is arguably the most popular acquisition function
in Bayesian optimization and has found countless successful applications, but
its performance is often exceeded by that of more recent methods. Notably, EI
and its variants, including for the parallel and multi-objective settings, are
challenging to optimize because their acquisition values vanish numerically in
many regions. This difficulty generally increases as the number of
observations, dimensionality of the search space, or the number of constraints
grow, resulting in performance that is inconsistent across the literature and
most often sub-optimal. Herein, we propose LogEI, a new family of acquisition
functions whose members either have identical or approximately equal optima as
their canonical counterparts, but are substantially easier to optimize
numerically. We demonstrate that numerical pathologies manifest themselves in
"classic" analytic EI, Expected Hypervolume Improvement (EHVI), as well as
their constrained, noisy, and parallel variants, and propose corresponding
reformulations that remedy these pathologies. Our empirical results show that
members of the LogEI family of acquisition functions substantially improve on
the optimization performance of their canonical counterparts and surprisingly,
are on par with or exceed the performance of recent state-of-the-art
acquisition functions, highlighting the understated role of numerical
optimization in the literature. | Sebastian Ament, Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy | 2023-10-31T17:59:56Z | http://arxiv.org/abs/2310.20708v2 | # Unexpected Improvements to Expected Improvement
###### Abstract
Expected Improvement (EI) is arguably the most popular acquisition function in Bayesian optimization and has found countless successful applications, but its performance is often exceeded by that of more recent methods. Notably, EI and its variants, including for the parallel and multi-objective settings, are challenging to optimize because their acquisition values vanish numerically in many regions. This difficulty generally increases as the number of observations, dimensionality of the search space, or the number of constraints grow, resulting in performance that is inconsistent across the literature and most often sub-optimal. Herein, we propose LogEI, a new family of acquisition functions whose members either have identical or approximately equal optima as their canonical counterparts, but are substantially easier to optimize numerically. We demonstrate that numerical pathologies manifest themselves in "classic" analytic EI, Expected Hypervolume Improvement (EHVI), as well as their constrained, noisy, and parallel variants, and propose corresponding reformulations that remedy these pathologies. Our empirical results show that members of the LogEI family of acquisition functions substantially improve on the optimization performance of their canonical counterparts and surprisingly, are on par with or exceed the performance of recent state-of-the-art acquisition functions, highlighting the understated role of numerical optimization in the literature.
## 1 Introduction
Bayesian Optimization (BO) is a widely used and effective approach for sample-efficient optimization of expensive-to-evaluate black-box functions [23; 26], with applications ranging widely between aerospace engineering [43], biology and medicine [44], materials science [3], and machine learning hyperparameter optimization [60; 66]. BO leverages a probabilistic _surrogate model_ in conjunction with an _acquisition function_ to determine where to query the underlying objective function. Improvement-based acquisition functions, such as Expected Improvement (EI) and Probability of Improvement (PI), are among the earliest and most widely used acquisition functions for efficient global optimization of non-convex functions [38; 52]. EI has been extended to the constrained [25; 27], noisy [47], and multi-objective [18] setting, as well as their respective batch variants [5; 11; 71], and is a standard baseline in the BO literature [23; 60]. While much of the literature has focused on developing new sophisticated acquisition functions, subtle yet critical implementation details of foundational BO methods are often overlooked. Importantly, the performance of EI and its variants is inconsistent even for _mathematically identical_ formulations and, as we show in this work, most often sub-optimal.
Although the problem of optimizing EI effectively has been discussed in various works, e.g. [23; 29; 71], prior focus has been on optimization algorithms and initialization strategies, rather than the fundamental issue of computing EI.
In this work, we identify pathologies in the computation of improvement-based acquisition functions that give rise to numerically vanishing values and gradients, which - to our knowledge - are present in _all existing implementations of EI_, and propose reformulations that lead to increases in the associated optimization performance which often match or exceed that of recent methods.
#### Contributions
1. We introduce LogEI, a new family of acquisition functions whose members either have identical or approximately equal optima as their canonical counterparts, but are substantially easier to optimize numerically. Notably, the analytic variant of LogEI, which _mathematically_ results in the same BO policy as EI, empirically shows significantly improved optimization performance.
2. We extend the ideas behind analytical LogEI to other members of the EI family, including constrained EI (CEI), Expected Hypervolume Improvement (EHVI), as well as their respective batch variants for parallel BO, qEI and qEHVI, using smooth approximations of the acquisition utilities to obtain non-vanishing gradients. All of our methods are available as part of BoTorch [5].
3. We demonstrate that our newly proposed acquisition functions substantially outperform their respective analogues on a broad range of benchmarks without incurring meaningful additional computational cost, and often match or exceed the performance of recent methods.
#### Motivation
Maximizing acquisition functions for BO is a challenging problem, which is generally non-convex and often contains numerous local maxima, see the lower right panel of Figure 1. While zeroth-order methods are sometimes used, gradient-based methods tend to be far more effective at optimizing acquisition functions on continuous domains, especially in higher dimensions.
In addition to the challenges stemming from non-convexity that are shared across acquisition functions, the values and gradients of improvement-based acquisition functions are frequently minuscule in large swaths of the domain. Although EI is never _mathematically_ zero under a Gaussian posterior distribution,1 it often vanishes, even becoming _exactly_ zero in floating point precision. The same
Figure 1: **Left: Fraction of points sampled from the domain for which the magnitude of the gradient of EI vanishes to \(<\!10^{-10}\) as a function of the number of randomly generated data points \(n\) for different dimensions \(d\) on the Ackley function. As \(n\) increases, EI and its gradients become numerically zero across most of the domain, see App. D.2 for details. Right: Values of EI and LogEI on a quadratic objective. EI takes on extremely small values on points for which the likelihood of improving over the incumbent is small and is numerically _exactly_ zero in double precision for a large part of the domain (\(\approx[5,13.5]\)). The left plot shows that this tends to worsen as the dimensionality of the problem and the number of data points grow, rendering gradient-based optimization of EI futile.**
applies to its gradient, making EI (and PI, see Appendix A) exceptionally difficult to optimize via gradient-based methods. The right panels of Figure 1 illustrate this behavior on a simple one-dimensional quadratic function.
To increase the chance of finding the global optimum of non-convex functions, gradient-based optimization is typically performed from multiple starting points, which can help avoid getting stuck in local optima [64]. For improvement-based acquisition functions however, optimization becomes increasingly challenging as more data is collected and the likelihood of improving over the incumbent diminishes, see our theoretical results in Section 3 and the empirical illustration in Figure 1 and Appendix D.2. As a result, gradient-based optimization with multiple random starting points will eventually degenerate into random search when the gradients at the starting points are numerically zero. This problem is particularly acute in high dimensions and for objectives with a large range.
Various initialization heuristics have been proposed to address this behavior by modifying the random-restart strategy. Rather than starting from random candidates, an alternative naive approach would be to use initial conditions close to the best previously observed inputs. However, doing that alone inherently limits the acquisition optimization to a type of local search, which cannot have global guarantees. To attain such guarantees, it is necessary to use an asymptotically space-filling heuristic; even if not random, this will entail evaluating the acquisition function in regions where no prior observation lies. Ideally, these regions should permit gradient-based optimization of the objective for efficient acquisition function optimization, which necessitates the gradients to be non-zero. In this work, we show that this can be achieved for a large number of improvement-based acquisition functions, and demonstrate empirically how this leads to substantially improved BO performance.
## 2 Background
We consider the problem of maximizing an expensive-to-evaluate black-box function \(\mathbf{f}_{\mathrm{true}}:\mathbb{X}\mapsto\mathbb{R}^{M}\) over some feasible set \(\mathbb{X}\subseteq\mathbb{R}^{d}\). Suppose we have collected data \(\mathcal{D}_{n}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathbb{X}\) and \(\mathbf{y}_{i}=\mathbf{f}_{\mathrm{true}}(\mathbf{x}_{i})+\mathbf{v}_{i}( \mathbf{x}_{i})\) and \(\mathbf{v}_{i}\) is a noise corrupting the true function value \(\mathbf{f}_{\mathrm{true}}(\mathbf{x}_{i})\). The response \(\mathbf{f}_{\mathrm{true}}\) may be multi-output as is the case for multiple objectives or black-box constraints, in which case \(\mathbf{y}_{i},\mathbf{v}_{i}\in\mathbb{R}^{M}\). We use Bayesian optimization (BO), which relies on a surrogate model \(\mathbf{f}\) that for any _batch_\(\mathbf{X}:=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{q}\}\) of candidate points provides a probability distribution over the outputs \(f(\mathbf{X}):=(f(\mathbf{x}_{1}),\ldots,f(\mathbf{x}_{q}))\). The acquisition function \(\alpha\) then utilizes this posterior prediction to assign an acquisition value to \(\mathbf{x}\) that quantifies the value of evaluating the points in \(\mathbf{x}\), trading off exploration and exploitation.
### Gaussian Processes
Gaussian Processes (GP) [59] are the most widely used surrogates in BO, due to their high data efficiency and good uncertainty quantification. For our purposes, it suffices to consider a GP as a mapping that provides a multivariate Normal distribution over the outputs \(f(\mathbf{x})\) for any \(\mathbf{x}\):
\[f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\mathbf{\Sigma}(\mathbf{x})),\qquad \mathbf{\mu}:\mathbb{X}^{q}\to\mathbb{R}^{qM},\quad\mathbf{\Sigma}:\mathbb{X}^{q}\to \mathcal{S}_{+}^{qM}. \tag{1}\]
In the single-outcome (\(M=1\)) setting, \(f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\Sigma(\mathbf{x}))\) with \(\mu:\mathbb{X}^{q}\to\mathbb{R}^{q}\) and \(\Sigma:\mathbb{X}^{q}\to\mathcal{S}_{+}^{q}\). In the sequential (\(q=1\)) case, this further reduces to a univariate Normal distribution: \(f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\sigma^{2}(\mathbf{x}))\) with \(\mu:\mathbb{X}\to\mathbb{R}\) and \(\sigma:\mathbb{X}\to\mathbb{R}_{+}\).
### Improvement-based Acquisition Functions
Expected ImprovementFor the fully-sequential (\(q=1\)), single-outcome (\(M=1\)) setting, "classic" EI [53] is defined as
\[\text{EI}_{y^{*}}(\mathbf{x})=\mathbb{E}_{f(\mathbf{x})}\big{[}[f(\mathbf{x}) -y^{*}]_{+}\big{]}=\sigma(\mathbf{x})\;h\left(\frac{\mu(\mathbf{x})-y^{*}}{ \sigma(\mathbf{x})}\right), \tag{2}\]
where \([\cdot]_{+}\) denotes the \(\max(0,\cdot)\) operation, \(y^{*}=\max_{i}y_{i}\) is the best function value observed so far, also referred to as the _incumbent_, \(h(z)=\phi(z)+z\Phi(z)\), and \(\phi,\Phi\) are the standard Normal density and distribution functions, respectively. This formulation is arguably the most widely used acquisition function in BO, and the default in many popular software packages.
Constrained Expected Improvement_Constrained BO_ involves one or more black-box constraints and is typically formulated as finding \(\max_{\mathbf{x}\in\mathbb{X}}f_{\text{true},1}(\mathbf{x})\) such that \(f_{\text{true},i}(\mathbf{x})\leq 0\) for \(i\in\{2,\ldots,M\}\). Feasibility-weighting the improvement [25; 27] is a natural approach for this class of problems:
\[\text{CEI}_{y^{*}}(\mathbf{x})=\mathbb{E}_{\mathbf{f}(\mathbf{x})}\left[[f_{1} (\mathbf{x})-y^{*}]_{+}\ \prod_{i=2}^{M}\mathbb{1}_{f_{i}(\mathbf{x})\leq 0} \right], \tag{3}\]
where \(\mathbb{1}\) is the indicator function. If the constraints \(\{f_{i}\}_{i\geq 2}\) are modeled as conditionally independent of the objective \(f_{1}\) this can be simplified as the product of EI and the probability of feasibility.
Parallel Expected ImprovementIn many settings, one may evaluate \(f_{\text{true}}\) on \(q>1\) candidates in parallel to increase throughput. The associated parallel or batch analogue of EI [28; 69] is given by
\[\text{qEI}_{y^{*}}(\mathbf{X})=\mathbb{E}_{f(\mathbf{X})}\left[\max_{j=1, \ldots,q}\bigl{\{}[f(\mathbf{x}_{j})-y^{*}]_{+}\bigr{\}}\right]. \tag{4}\]
Unlike EI, qEI does not admit a closed-form expression and is thus typically computed via Monte Carlo sampling, which also extends to non-Gaussian posterior distributions [5; 69]:
\[\text{qEI}_{y^{*}}(\mathbf{X})\approx\sum_{i=1}^{N}\max_{j=1,\ldots,q}\bigl{\{} [\xi^{i}(\mathbf{x}_{j})-y^{*}]_{+}\bigr{\}}, \tag{5}\]
where \(\xi^{i}(\mathbf{x})\sim f(\mathbf{x})\) are random samples drawn from the joint model posterior at \(\mathbf{x}\).
Expected Hypervolume ImprovementIn multi-objective optimization (MOO), there generally is no single best solution; instead the goal is to explore the Pareto Frontier between multiple competing objectives, the set of mutually-optimal objective vectors. A common measure of the quality of a finitely approximated Pareto Frontier \(\mathcal{P}\) between \(M\) objectives with respect to a specified reference point \(\mathbf{r}\in\mathbb{R}^{M}\) is its _hypervolume_\(\text{HV}(\mathcal{P},\mathbf{r}):=\lambda\bigl{(}\bigcup_{\mathbf{y}_{j}\in \mathcal{P}}[\mathbf{r},\mathbf{y}_{i}]\bigr{)}\), where \([\mathbf{r},\mathbf{y}_{i}]\) denotes the hyperrectangle bounded by vertices \(\mathbf{r}\) and \(\mathbf{y}_{i}\), and \(\lambda\) is the Lebesgue measure. An apt acquisition function for multi-objective optimization problems is therefore the expected hypervolume improvement
\[\text{EHVI}(\mathbf{x})=\mathbb{E}_{\mathbf{f}(\mathbf{x})}\bigl{[}[\text{HV }(\mathcal{P}\cup\mathbf{f}(\mathbf{X}),\mathbf{r})-\text{HV}(\mathcal{P}, \mathbf{r})]_{+}\bigr{]}, \tag{6}\]
due to observing a batch \(\mathbf{f}(\mathbf{X}):=[\mathbf{f}(\mathbf{x}_{1}),\cdots,\mathbf{f}( \mathbf{x}_{q})]\) of \(q\) new observations. EHVI can be expressed in closed form if \(q=1\) and the objectives are modeled with independent GPs [74], but Monte Carlo approximations are required for the general case (qEHVI) [11].
### Optimizing Acquisition Functions
Optimizing an acquisition function (AF) is a challenging task that amounts to solving a non-convex optimization problem, to which multiple approaches and heuristics have been applied. These include gradient-free methods such as divided rectangles [37], evolutionary methods such as CMA-ES [30], first-order methods such as stochastic gradient ascent, see e.g., Daulton et al. [13], Wang et al. [69], and (quasi-)second order methods [23] such as L-BFGS-B [9]. Multi-start optimization is commonly employed with gradient-based methods to mitigate the risk of getting stuck in local minima. Initial points for optimization are selected via various heuristics with different levels of complexity, ranging from simple uniform random selection to BoTorch's initialization heuristic, which selects initial points by performing Boltzmann sampling on a set of random points according to their acquisition function value [5]. See Appendix B for a more complete account of initialization strategies and optimization procedures used by popular implementations. We focus on gradient-based optimization as often leveraging gradients results in faster and more performant optimization [11].
Optimizing AFs for parallel BO that quantify the value of a batch of \(q>1\) points is more challenging than optimizing their sequential counterparts due to the higher dimensionality of the optimization problem - \(qd\) instead of \(d\) - and the more challenging optimization surface. A common approach to simplify the problem is to use a _sequential greedy_ strategy that greedily solves a sequence of single point selection problems. For \(i=1,\ldots,q\), candidate \(\mathbf{x}_{i}\) is selected by optimizing the AF for \(q=1\), conditional on the previously selected designs \(\{\mathbf{x}_{1},...,\mathbf{x}_{i-1}\}\) and their unknown observations, e.g. by fantasizing the values at those designs [71]. For submodular AFs, including EI, PI, and EHVI, a sequential greedy strategy will attain a regret within a factor of \(1/e\) compared to the joint optimum, and previous works have found that sequential greedy optimization yields _improved_ BO performance compared to joint optimization [11; 71]. Herein, we find that our reformulations enable joint batch optimization to be competitive with the sequential greedy strategy, especially for larger batches.
### Related Work
While there is a substantial body of work introducing a large variety of different AFs, much less focus has been on the question of how to effectively implement and optimize these AFs. Zhan and Xing [75] provide a comprehensive review of a large number of different variants of the EI family, but do not discuss any numerical or optimization challenges. Zhao et al. [76] propose combining a variety of different initialization strategies to select initial conditions for optimization of acquisition functions and show empirically that this improves optimization performance. However, they do not address any potential issues or degeneracies with the acquisition functions themselves. Recent works have considered effective gradient-based approaches for acquisition optimization. Wilson et al. [71] demonstrates how stochastic first-order methods can be leveraged for optimizing Monte Carlo acquisition functions. Balandat et al. [5] build on this work and put forth sample average approximations for MC acquisition functions that admit gradient-based optimization using deterministic higher-order optimizers such as L-BFGS-B.
Another line of work proposes to switch from BO to local optimization based on some stopping criterion to achieve faster local convergence, using either zeroth order [54] or gradient-based [51] optimization. While McLeod et al. [51] are also concerned with numerical issues, we emphasize that those issues arise due to ill-conditioned covariance matrices and are orthogonal to the numerical pathologies of improvement-based acquisition functions.
## 3 Theoretical Analysis of Expected Improvement's Vanishing Gradients
In this section, we shed light on the conditions on the objective function and surrogate model that give rise to the numerically vanishing gradients in EI, as seen in Figure 1. In particular, we show that as a BO algorithm closes the optimality gap \(f^{*}-y^{*}\), where \(f^{*}\) is the global maximum of the function \(f_{\text{true}}\), and the associated GP surrogate's uncertainty decreases, EI is exceedingly likely to exhibit numerically vanishing gradients.
Let \(P_{\mathbf{x}}\) be a distribution over the inputs \(\mathbf{x}\), and \(f\sim P_{f}\) be an objective drawn from a Gaussian process. Then with high probability over the particular instantiation \(f\) of the objective, the probability that an input \(\mathbf{x}\sim P_{\mathbf{x}}\) gives rise to an argument \((\mu(\mathbf{x})-y^{*})/\sigma(\mathbf{x})\) to \(h\) in Eq. (2) that is smaller than a threshold \(B\) exceeds \(P_{\mathbf{x}}(f(\mathbf{x})<f^{*}-\epsilon_{n})\), where \(\epsilon_{n}\) depends on the optimality gap \(f^{*}-y^{*}\) and the maximum posterior uncertainty \(\max_{\mathbf{x}}\sigma_{n}(\mathbf{x})\). This pertains to EI's numerically vanishing values and gradients, since the numerical support \(\mathcal{S}_{\eta}(h)=\{\mathbf{x}:|h(\mathbf{x})|>\eta\}\) of a naive implementation of \(h\) in (2) is limited by a lower bound \(B(\eta)\) that depends on the floating point precision \(\eta\). Formally, \(\mathcal{S}_{\eta}(h)\subset[B(\eta),\infty)\) even though \(\mathcal{S}_{0}(h)=\mathbb{R}\) mathematically. As a consequence, the following result can be seen as a bound on the probability of encountering numerically vanishing values and gradients in EI using samples from the distribution \(P_{\mathbf{x}}\) to initialize the optimization of the acquisition function.
**Theorem 1**.: _Suppose \(f\) is drawn from a Gaussian process prior \(P_{f}\), \(y^{*}\leq f^{*}\), \(\mu_{n},\sigma_{n}\) are the mean and standard deviation of the posterior \(P_{f}(f|\mathcal{D}_{n})\) and \(B\in\mathbb{R}\). Then with probability \(1-\delta\),_
\[P_{\mathbf{x}}\left(\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}< B\right)\geq P_{\mathbf{x}}\left(f(\mathbf{x})<f^{*}-\epsilon_{n}\right) \tag{7}\]
_where \(\epsilon_{n}=(f^{*}-y^{*})+\left(\sqrt{-2\log(2\delta)}-B\right)\max_{ \mathbf{x}}\sigma_{n}(\mathbf{x})\)._
For any given - and especially early - iteration, \(\epsilon_{n}\) does not have to be small, as both the optimality gap and the maximal posterior standard deviation can be large initially. Note that under certain technical conditions on the kernel function and the asymptotic distribution of the training data \(\mathcal{D}_{n}\), the maximum posterior variance vanishes guaranteedly as \(n\) increases, see (45, Corollary 3.2). On its own, Theorem 1 gives insight into the non-asymptotic behavior by exposing a dependence to the distribution of objective values \(f\). In particular, if the set of inputs that give rise to high objective values (\(\approx f^{*}\)) is concentrated, \(P(f(\mathbf{x})<f^{*}-\epsilon)\) will decay very slowly as \(\epsilon\) increases, thereby maintaining a lower bound on the probability of close to 1. As an example, this is the case for the Ackley function, especially as the dimensionality increases, which explains the behavior in Figure 1.
Unexpected Improvements
In this section, we propose re-formulations of analytic and MC-based improvement-based acquisition functions that render them significantly easier to optimize. We will use differing fonts, e.g. \(\log\) and \(\log\), to differentiate between the mathematical functions and their numerical implementations.
### Analytic LogEI
Implementations of "classic" analytic EI exhibit numerically vanishing values and gradients even though EI and its gradient are mathematically nonzero on the entire real line, except in the noiseless case for points that are perfectly correlated with previous observations. However, if implemented naively, \(h\) is numerically zero when \((\mu(\mathbf{x})-y^{*})/\sigma(\mathbf{x})\) is small, which happens when the model has high confidence that little improvement can be achieved at \(\mathbf{x}\).
We propose an implementation of \(\log\circ h\) that can be accurately computed for a much larger range of inputs than a naive implementation of \(\mathbf{h}\) or \(\mathtt{log}\circ\mathtt{h}\). Specifically, we compute analytic
\[\text{LogEI}_{y^{*}}(\mathbf{x})=\mathtt{log\_h}((\mu(\mathbf{x})-y^{*})/ \sigma(\mathbf{x}))+\mathtt{log}(\sigma(\mathbf{x})), \tag{8}\]
where \(\mathtt{log\_h}\) is mathematically equivalent to \(\log\circ h\) and can be stably and accurately computed by
\[\mathtt{log\_h}(z)=\begin{cases}\mathtt{log}(\phi(z)+z\Phi(z))&z>-1\\ -z^{2}/2-c_{1}+\mathtt{log\_lmexp}(\mathtt{logerfcx}(-z/\sqrt{2})|z|+c_{2})&z \leq-1\end{cases} \tag{9}\]
where \(c_{1}=\log(2\pi)/2\), and \(c_{2}=\log(\pi/2)/2\), and \(\mathtt{log\_lmexp}\), \(\mathtt{logerfcx}\) are numerically stable implementations of \(\log(1-\exp(z))\) and \(\log(\exp(z^{2})\text{erfc}(z))\), respectively (see [50] and App. A for details). Notably, the asymptotically quadratic behavior of \(\mathtt{log\_h}\) becomes apparent in the second case, making the function particularly amenable to gradient-based optimization. This has _significant_ practical implications for BO using EI, as evidenced by the empirical results in Section 5. Numerically vanishing values and gradients affect - as far as we are aware - all public implementations of EI.
### Monte Carlo Parallel LogEI
Monte Carlo formulations of parallel EI that perform differentiation on the level of MC samples, don't just exhibit numerically, but mathematically zero gradients for a significant proportion of practically relevant inputs. For qEI, the primary issue is the discrete maximum over the \(q\) outcomes for each MC sample in (5). In particular, the acquisition utility of expected improvement in Eq. 4 on a single sample \(\xi_{i}\) of \(f\) is \(\max_{j}[\xi_{i}(\mathbf{x}_{j})-y^{*}]_{+}\). Mathematically, we smoothly approximate the acquisition utility in two stages: 1) \(u_{ij}=\operatorname{softplus}_{\tau_{0}}(\xi_{i}(\mathbf{x}_{j})-y^{*})\approx [\xi_{i}(\mathbf{x}_{j})-y^{*}]_{+}\) and 2) \(\|u_{i}\|_{1/\tau_{\max}}\approx\max_{j}u_{ij}\). Notably, while we use canonical softplus and p-norm approximations here, specialized fat-tailed non-linearities are required to scale to large batches, see Appendix A.3. Since the resulting quantities are strictly positive, they can be transformed to log-space permitting an implementation of \(\mathtt{qLogEI}\) that is numerically stable and can be optimized effectively, similar to the analytic case. In particular,
\[\mathtt{qLogEI}_{y^{*}}(\mathbf{X}) =\log\int\Bigl{(}\sum_{j=1}^{q}\operatorname{softplus}_{\tau_{0}}( f(\mathbf{x}_{j})-y^{*})^{1/\tau_{\max}}\Bigr{)}^{\tau_{\max}}\;df \tag{10}\] \[\approx\mathtt{logsumexp}_{i}(\tau_{\max}\mathtt{logsumexp}_{j}( \mathtt{logsoftplus}_{\tau_{0}}(\xi^{i}(\mathbf{x}_{j})-y^{*}))/\tau_{\max})),\]
where \(i\) is the index of the Monte Carlo draws from the GP posterior, \(j=1,\ldots,q\) is the index for the candidate in the batch, and \(\mathtt{logsoftplus}\) is a numerically stable implementation of \(\log(\log(1+\exp(z)))\), see Appendix A for additional details.
While the smoothing in (10) approximates the original qEI formulation, the following result shows that the associated relative approximation error can be quantified and bounded tightly as a function of the temperature parameters \(\tau_{0},\tau_{\max}\) and the batch size \(q\). See Appendix C for the proof.
**Lemma 2**.: _[Relative Approximation Guarantee] Given \(\tau_{0},\tau_{\max}>0\), the approximation error of \(\mathtt{qLogEI}\) to \(\mathtt{qEI}\) is bounded by_
\[\left|e^{\mathtt{qLogEI}(\mathbf{X})}-\mathtt{qEI}(\mathbf{X})\right|\leq(q^{ \tau_{\max}}-1)\;\mathtt{qEI}(\mathbf{X})+\log(2)\tau_{0}q^{\tau_{\max}}. \tag{11}\]
In Appendix D.10, we show the importance of setting the temperatures sufficiently low for \(\mathtt{qLogEI}\) to achieve good optimization characteristics, something that only becomes possible by transforming all involved computations to log-space. Otherwise, the smoothed approximation to the acquisition utility (e.g., using a regular softplus function) would similarly exhibit numerically vanishing gradients, as is the case mathematically for the discrete \(\max\) operator.
### Constrained EI
Both analytic and Monte Carlo variants of LogEI can be extended for optimization problems with black-box constraints. For analytic CEI with independent constraints of the form \(f_{i}(\mathbf{x})\leq 0\), the constrained formulation in Eq. (3) simplifies to \(\text{LogCEI}(\mathbf{x})=\text{LogEI}(\mathbf{x})+\sum_{i}\log(P(f_{i}(\mathbf{ x})\leq 0))\), which can be readily and stably computed using LogEI in Eq. (8) and, if \(f_{i}\) is modelled by a GP, a stable implementation of the Gaussian log cumulative distribution function. For the Monte Carlo variant, we apply a similar strategy as for Eq. (10) to the constraint indicators in Eq. (3): 1) a smooth approximation and 2) an accurate and stable implementation of its log value, see Appendix A.
### Monte Carlo Parallel LogEHVI
The numerical difficulties of qEHVI in (6) are similar to those of qEI, and the basic ingredients of smoothing and log-transformations still apply, but the details are significantly more complex since qEHVI uses many operations that have mathematically zero gradients with respect to some of the inputs. Our implementation is based on the differentiable inclusion-exclusion formulation of the hypervolume improvement [11]. As a by-product, the implementation also readily allows for the differentiable computation of the expected log hypervolume, instead of the log expected hypervolume, note the order, which can be preferable in certain applications of multi-objective optimization [24].
## 5 Empirical Results
We compare standard versions of analytic EI (EI) and constrained EI (CEI), Monte Carlo parallel EI (qEI), as well as Monte Carlo EHVI (qEHVI), in addition to other state-of-the-art baselines like lower-bound Max-Value Entropy Search (GIBBON) [55] and single- and multi-objective Joint Entropy Search (JES) [33; 65]. All experiments are implemented using BoTorch [5] and utilize multi-start optimization of the AF with scipy's L-BFGS-B optimizer. In order to avoid conflating the effect of BoTorch's default initialization strategy with those of our contributions, we use 16 initial points chosen uniformly at random from which to start the L-BFGS-B optimization. For a comparison with other initialization strategies, see Appendix D. We run multiple replicates and report mean and error bars of \(\pm 2\) standard errors of the mean. Appendix D.1 contains additional details.
Single-objective sequential BOWe compare EI and LogEI on the 10-dimensional convex Sum-of-Squares (SoS) function \(f(\mathbf{x})=\sum_{i=1}^{10}\left(x_{i}-0.5\right)^{2}\), using 20 restarts seeded from 1024 pseudo-random samples through BoTorch's default initialization heuristic. Figure 2 shows that due to vanishing gradients, EI is unable to make progress even on this trivial problem.
In Figure 3, we compare performance on the Ackley and Michalewicz test functions [61]. Notably, LogEI substantially outperforms EI on Ackley as the dimensionality increases. Ackley is a challenging multimodal function for which it is critical to trade off local exploitation with global exploration, a task made exceedingly difficult by the numerically vanishing gradients of EI in a large fraction of the search space. We see a similar albeit less pronounced behavior on Michalewicz, which reflects the fact that Michalewicz is a somewhat less challenging problem than Ackley.
Figure 2: Regret and EI acquisition value for the candidates selected by maximizing EI and LogEI on the convex Sum-of-Squares problem. Optimization stalls out for EI after about 75 observations due to vanishing gradients (indicated by the jagged behavior of the acquisition value), while LogEI continues to make steady progress.
BO with Black Box ConstraintsFigure 4 shows results on four engineering design problems with black box constraints that were also considered in [20]. We apply the same bilog transform as the trust region-based SCBO method [20] to all constraints to make them easier to model with a GP. We see that LogCEI outperforms the naive CEI implementation and converges faster than SCBO. Similar to the unconstrained problems, the performance gains of LogCEI over CEI grow with increasing problem dimensionality and the number of constraints. Notably, we found that for some problems, LogCEI in fact _improved upon some of the best results quoted in the original literature_, while using three orders of magnitude fewer function evaluations, see Appendix D.7 for details.
Parallel Expected Improvement with qLogEIFigure 5 reports the optimization performance of parallel BO on the 16-dimensional Ackley function for both sequential greedy and joint batch optimization using the fat-tailed non-linearities of App. A.3. In addition to the apparent advantages of qLogEI over qEI, a key finding is that jointly optimizing the candidates of batch acquisition functions can yield highly competitive optimization performance, see App. D.3 for extended results.
High-dimensional BO with qLogEIFigure 6 shows the performance of LogEI on three high-dimensional problems: the \(6\)-dimensional Hartmann function embedded in a \(100\)-dimensional space, a \(100\)-dimensional rover trajectory planning problem, and a \(103\)-dimensional SVM hyperparameter tuning problem. We use a \(103\)-dimensional version of the \(388\)-dimensional SVM problem considered by Eriksson and Jankowiak [19], where the \(100\) most important features were selected using Xgboost.
Figure 4: Best feasible objective value as a function of number of function evaluations (iterations) on four engineering design problems with black-box constraints after an initial \(2d\) pseudo-random evaluations.
Figure 3: Best objective value as a function of iterations on the moderately and severely non-convex Michalewicz and Ackley problems for varying numbers of input dimensions. LogEI substantially outperforms both EI and GIBBON, and this gap widens as the problem dimensionality increases. JES performs slightly better than LogEI on Ackley, but for some reason fails on Michalewicz. Notably, JES is almost two orders of magnitude slower than the other acquisition functions (see Appendix D).
Figure 6 shows that the optimization exhibits varying degrees of improvement from the inclusion of qLogEI, both when combined with SAASBO [19] and a standard GP. In particular, qLogEI leads to significant improvements on the embedded Hartmann problem, even leading BO with the canonical GP to ultimately catch up with the SAAS-prior-equipped model. On the other hand, the differences on the SVM and Rover problems are not significant, see Section 6 for a discussion.
Multi-Objective optimization with qLogEHVIFigure 7 compares qLogEHVI and qEHVI on two multi-objective test problems with varying batch sizes, including the real-world-inspired cell network design for optimizing coverage and capacity [17]. The results are consistent with our findings in the single-objective and constrained cases: qLogEHVI consistently outperforms qEHVI and even JES [65] for all batch sizes. Curiously, for the largest batch size and DTLZ2, qLogNEHVI's improvement over the reference point (HV \(>0\)) occurs around three batches after the other methods, but dominates their performance in later batches. See Appendix D.5 for results on additional synthetic and real-world-inspired multi-objective problems such as the laser plasma acceleration optimization [34], and vehicle design optimization [49; 62]
## 6 Discussion
To recap, EI exhibits vanishing gradients 1) when high objective values are highly concentrated in the search space, and 2) as the optimization progresses. In this section, we highlight that these conditions are not met for all BO applications, and that LogEI's performance depends on the surrogate's quality.
On problem dimensionalityWhile our experimental results show that advantages of LogEI generally grow larger as the dimensionality of the problem grows, we stress that this is fundamentally due to the concentration of high objective values in the search space, not the dimensionality itself. Indeed, we have observed problems with high ambient dimensionality but low intrinsic dimensionality, where LogEI does not lead to significant improvements over EI, e.g. the SVM problem in Figure 6.
Figure 5: Best objective value for parallel BO as a function of the number evaluations for single-objective optimization on the 16-dimensional Ackley function with varying batch sizes \(q\). Notably, joint optimization of the batch outperforms sequential greedy optimization.
Figure 6: Best objective value as a function of number of function evaluations (iterations) on three high-dimensional problems, including Eriksson and Jankowiak [19]’s SAAS prior.
On asymptotic improvementsWhile members of the LogEI family can generally be optimized better, leading to higher acquisition values, improvements in optimization performance might be small in magnitude, e.g. the log-objective results on the convex 10D sum of squares in Fig. 2, or only begin to materialize in later iterations, like for \(q=16\) on DTLZ2 in Figure 7.
On model qualityEven if good objective values are concentrated in a small volume of the search space and many iterations are run, LogEI might still not outperform EI if the surrogate's predictions are poor, or its uncertainties are not indicative of the surrogate's mismatch to the objective, see Rover in Fig. 6. In these cases, better acquisition values do not necessarily lead to better BO performance.
Replacing EIDespite these limitation, we strongly suggest replacing variants of EI with their LogEI counterparts. If LogEI were dominated by EI on some problem, it would be an indication that the EI family itself is sub-optimal, and improvements in performance can be attributed to the exploratory quality of randomly distributed candidates, which could be incorporated explicitly.
## 7 Conclusion
Our results demonstrate that the problem of vanishing gradients is a major source of the difficulty of optimizing improvement-based acquisition functions and that we can mitigate this issue through careful reformulations and implementations. As a result, we see substantially improved optimization performance across a variety of modified EI variants across a broad range of problems. In particular, we demonstrate that joint batch optimization for parallel BO can be competitive with, and at times exceed the sequential greedy approach typically used in practice, which also benefits from our modifications. Besides the convincing performance improvements, one of the key advantages of our modified acquisition functions is that they are much less dependent on heuristic and potentially brittle initialization strategies. Moreover, our proposed modifications do not meaningfully increase the computational complexity of the respective original acquisition function.
While our contributions may not apply verbatim to other classes of acquisition functions, our key insights and strategies do translate and could help with e.g. improving information-based [32; 70], cost-aware [46; 60], and other types of acquisition functions that are prone to similar numerical challenges. Further, combining the proposed methods with gradient-aware first-order BO methods [4; 14; 21] could lead to particularly effective high-dimensional applications of BO, since the advantages of both methods tend to increase with the dimensionality of the search space. Overall, we hope that our findings will increase awareness in the community for the importance of optimizing acquisition functions well, and in particular, for the care that the involved numerics demand.
Figure 7: Batch optimization performance on two multi-objective problems, as measured by the hypervolume of the Pareto frontier across observed points. This plot includes JES [65]. Similar to the single-objective case, the LogEI variant qLogEHVI significantly outperforms the baselines.
## Acknowledgments and Disclosure of Funding
The authors thank David Bindel for insightful conversations about the difficulty of optimizing EI.
|
2309.04905 | Most Rotational Variables Dominated by a Single Bright Feature are
$α^2$ CVn Stars | We previously reported a rare class of variable star light curves isolated
from a sample of 4.7 million candidate variables from the ATLAS survey. Dubbed
`UCBH' light curves, they have broad minima and narrow, symmetrical maxima,
with typical periods of 1-10 days and amplitudes of 0.05--0.20 mag. They
maintain constant amplitude, shape, and phase coherence over multiple years,
but do not match any known class of pulsating variables. A localized bright
spot near the equator of a rotating star will produce a UCBH-type light curve
for most viewing geometries. Most stars that exhibit rotational variability
caused primarily by a single bright feature should therefore appear as UCBH
stars, although a rotating bright spot is not the only thing that could produce
a UCBH-type lightcurve. We have spectroscopically investigated fourteen UCBH
stars and found ten of them to be Ap/Bp stars: A-type or B-type stars with
greatly enhanced photospheric abundances of specific heavy elements.
Rotationally variable Ap/Bp stars are referred to as $\alpha^2$ CVn variables.
Most ATLAS UCBH stars are therefore $\alpha^2$ CVn stars, although only a
minority of $\alpha^2$ CVn stars in the literature have UCBH light curves. The
fact that $\alpha^2$ CVn stars dominate the UCBH class suggests that lone
bright spots with sufficient size and contrast develop more readily on Ap/Bp
stars than on any other type. The $\alpha^2$ CVn UCBH stars may be
characterized by a specific magnetic field topology, making them intriguing
targets for future Zeeman-Doppler imaging. | A. N. Heinze, Heather Flewelling, Mark E. Huber | 2023-09-10T01:20:31Z | http://arxiv.org/abs/2309.04905v2 | # Most Rotational Variables Dominated by a Single Bright Feature are \(\alpha^{2}\) CVn Stars
###### Abstract
We previously reported a rare class of variable star light curves isolated from a sample of 4.7 million candidate variables from the ATLAS survey. Dubbed 'UCBH' light curves, they have broad minima and narrow, symmetrical maxima, with typical periods of 1-10 days and amplitudes of 0.05-0.20 mag. They maintain constant amplitude, shape, and phase coherence over multiple years, but do not match any known class of pulsating variables. A localized bright spot near the equator of a rotating star will produce a UCBH-type light curve for most viewing geometries. Most stars that exhibit rotational variability caused primarily by a single bright feature should therefore appear as UCBH stars, although a rotating bright spot is not the only thing that could produce a UCBH-type lightcurve. We have spectroscopically investigated fourteen UCBH stars and found ten of them to be Ap/Bp stars: A-type or B-type stars with greatly enhanced photospheric abundances of specific heavy elements. Rotationally variable Ap/Bp stars are referred to as \(\alpha^{2}\) CVn variables. Most ATLAS UCBH stars are therefore \(\alpha^{2}\) CVn stars, although only a minority of \(\alpha^{2}\) CVn stars in the literature have UCBH light curves. The fact that \(\alpha^{2}\) CVn stars dominate the UCBH class suggests that lone bright spots with sufficient size and contrast develop more readily on Ap/Bp stars than on any other type. The \(\alpha^{2}\) CVn UCBH stars may be characterized by a specific magnetic field topology, making them intriguing targets for future Zeeman-Doppler imaging.
0000-0002-4880-7880]A. N. Heinze
## 1 Introduction
Though stellar photometry is typically not their primary mission, modern astronomical surveys such as the Catalina Sky Survey (Larson et al., 2003), the All-Sky Automated Survey for Supernovae (ASAS-SN, Shappee et al., 2014), Pan-STARRS1 (Chambers et al., 2016; Flewelling et al., 2016; Magnier et al., 2016, 2016, 2016), ATLAS (Tonry et al., 2018), the Zwicky Transient Facility (Graham et al., 2018), and others produce well-sampled photometric time series for millions of stars. These data sets are invaluable both for large-scale statistics of variables stars and for identifying rare, highly interesting objects. The huge sample sizes and the presence of photometry but not spectra for many of the objects enable an interesting new perspective on variable stars. Spectrum-blind analysis of millions of light curves can reveal new, physically meaningful commonalities that do not necessarily align with established classes of variable stars. Though the established classes are (of course) also physically meaningful, they were defined in a context of smaller sample sizes and more intensive spectroscopic investigation to which the current big-data context has meaningful things to add.
Herein, we analyze a rare class of variable stars, the 'UCBH' stars (Heinze et al., 2018), defined by a specific lightcurve shape and identified purely photometrically using data from the ATLAS survey. We introduce these stars in Section 1.1, and in Section 1.2 we introduce the established variable class (the \(\alpha^{2}\) CVn stars) to which most of them are found to belong. In Section 2 we show examples of UCBH light curves and demonstrate that the characteristic light curve shape will result from a single bright spot on a rotating star, over a wide range of sizes and viewing geometries. We present our spectroscopic results in Section 3, demonstrating that most of them are \(\alpha^{2}\) CVn stars (although only a minority of known \(\alpha^{2}\) CVn stars have UCBH-type lightcurves). In Section 5 we use Gaia parallaxes to place our UCBH stars on HR diagrams, demonstrating that most of them have luminosities and colors consistent with main-sequence Ap stars subject to interstellar
reddening -- with some interesting exceptions. We discuss astrophysical implications and offer our conclusions in Section 6.
### ATLAS Variable Star DR1 and the UCBH stars
The Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018) is a NASA-funded planetary defense survey that scans the sky for near-Earth asteroids while simultaneously producing well-calibrated data useful for many other astrophysical investigations. Each ATLAS image is photometrically calibrated using a customized, highly-precise catalog (Tonry et al., 2018) created by mutually calibrating several state-of-the-art photometric catalogs.
In its first two years on the sky, ATLAS operated only one telescope (it now has four). This single ATLAS unit surveyed one fourth of the accessible sky every night, obtaining four 30-second exposures of each target field over a period of about one hour. Hence, during good weather in its observing season, a given star would be observed an average of once per night - but these observations occur in clumps of four in one hour, with a four-day gap before the next clump. Not all images yield flux measurements of every object in the field: for example, faint stars would not be detected in bad seeing. Nevertheless, in two years ATLAS obtained 100 or more photometric measurements for each of 142 million distinct stars, of which 4.7 million were identified as candidate variables. Photometric time series for these candidate variables, as well as classifications we obtained for them using machine learning, constitute ATLAS variable star Data Release One (DR1) and are publicly available through STScI (Heinze et al., 2018).
While preparing ATLAS DR1, we manually examined thousands of light curves of objects that had periods, amplitudes, or other characteristics not typical of the classes the machine had assigned them. We identified a rare but well-defined class of light curves, mostly identified as pulsators by the machine, that did not seem to match any known type of variable star. These objects had coherent, periodic light curves with a distinctive shape defined by narrow, symmetrical maxima and broad, flat minima (Figure 1). They looked like the light curves of contact eclipsing binaries turned upside down. Since we'd defined a light curve category called CBH (Contact eclipsing Binaries folded at Half the true period), we called this new set of stars the upside-down CBH variables, or UCBH stars. They have typical periods of 1-10 days and peak-to-trough amplitudes of 0.05-0.20 magnitudes. The amplitudes are usually similar between the ATLAS \(c\) and \(o\) bands1.
Footnote 1: These broad, customized bandpasses are described in Tonry et al. (2018); briefly, \(c\) corresponds approximately to Sloan \(g+r\) and \(o\) to \(r+i\).
Herein, we present a catalog of 98 UCBH stars identified in ATLAS DR1 photometry. This catalog constitutes the entire set of ATLAS variables we have confidently assigned to the UCBH class. We carry the analysis of UCBH stars beyond pure photometry for the first time, presenting low-resolution spectra for 14 of them (chosen based on brightness and observability during our scheduled telescope time), intensive multi-band photometry for one, and HR diagrams based on Gaia parallaxes for all.
### Overview of \(\alpha^{2}\) CVn Variables
Our spectra (Section 3) indicate that a majority of UCBH stars are \(\alpha^{2}\) CVn variables. An \(\alpha^{2}\) CVn variable is an Ap or Bp star that exhibits rotationally modulated variability (Peterson, 1970; Catalano & Leone, 1993). Ap and Bp stars are A-type or B-type stars with enormously enhanced photospheric abundances of specific heavy elements (silicon, chromium, strontium, europium, and others). The enhancement is believed to be produced by radiative levitation (Michaud, 1970). This levitation occurs because the elements in question interact more strongly with the radiation field than most other atoms -- i.e., they have many strong spectral lines at wavelengths near the peak of the star's spectral energy distribution (Hummerich et al., 2018). Radiation pressure therefore exerts a stronger upward force (relative to their mass) on the atoms of these elements than on the majority constituents of the stellar atmosphere. This upward force is believed to concentrate the elements in the upper layers of the stars. The stellar atmospheres must be remarkably free from convection for the extremely weak force of radiative levitation to produce the observed concentrations of heavy elements. Michaud (1970) calculated that convective velocities must be slower than \(10^{-5}\) m/sec, and theorized that strong magnetic fields might be able to stabilize the ionized atmospheres of these stars against convective stirring.
That some Ap/Bp stars should exhibit rotational variability is not surprising: longitudinal inhomogeneity in the concentration of heavy elements would naturally cause rotational variation (Peterson, 1970). The shape, amplitude, and detectability of this variation depend on the details of the inhomogeneity (Shulyak et al., 2010),
which in turn might arise from spatial variations in the magnetic field (Michaud et al., 1981).
Interestingly, Peterson (1970) found that a photospheric spot having an enhanced concentration of silicon will be _brighter_ at optical wavelengths, because the strong silicon absorption lines in the UV will redistribute flux into the optical. We might expect that the heavy elements would be most concentrated at the regions of strongest magnetic field (where convection is most strongly suppressed and radiative levitation can have the greatest effect), and that the flux redistribution would render these regions the brightest at optical wavelengths. This would imply that the (optical) photometric maximum should coincide with points when the region of greatest magnetic field strength is centered on the hemisphere of the star that faces us. Accordingly, Dukes & Adelman (2018) found that the \(\alpha^{2}\) CVn star HD 215441 has its photometric maximum at about the same rotational phase as the maximum of the magnetic field measured by Zeeman splitting of the spectral lines. This might not, however, be a general rule: both theory (Michaud et al., 1981) and observation (Kochukhov & Wade, 2010; Kochukhov et al., 2015) indicate that different radiatively levitated elements can be affected differently by the magnetic field and hence have different photospheric distributions. Despite this complexity, there is broad observational evidence (e.g. Pyper, 1969) that optical photometric maxima do tend to occur near the same rotational phase where the greatest abundance of radiatively levitated elements is measured, consistent with the UV flux redistribution predicted by Peterson (1970).
Known \(\alpha^{2}\) CVn stars have amplitudes mostly smaller than is typical for the ATLAS UCBH stars, and some have longer periods, but the distributions of both period and amplitude overlap heavily. Sikora et al. (2019) have shown that the rotation periods of most of the magnetic, chemically peculiar A and B stars whose variability enables their periods to be measured fall within the same 1-10 day range that characterizes UCBH stars, with the few exceptions mostly having longer periods.
The light curves of known \(\alpha^{2}\) CVn stars have a variety of shapes, including some that exactly match our UCBH stars and many that do not. Hensberge et al. (1977) present \(uvby\) photometry of six \(\alpha^{2}\) CVn stars with periods from 1.48 to 4.75 days, of which only one (HD 207188) shows a UCBH-type lightcurve. Ryabchikova et al. (1990) find the \(\alpha^{2}\) CVn star HD 192913 to have a period of 16.5 days and an amplitude of 0.04 magnitudes, with a saw-tooth rather than UCBH-type light curve. Catalano & Leone (1993) present multi-band (\(uvby\)) light curves of eight bright \(\alpha^{2}\) CVn stars with periods ranging from 1.3 to 6.8 days and amplitudes from 0.03-0.10 mag. Two of them (HD 54118 and HD 73340) exhibit UCBH-type light curves in at least one of the four photometric bands, while the others have various different shapes. Poretti et al. (1997) also probe \(\alpha^{2}\) CVn stars with \(uvby\) photometry: HR 2746 (period 0.92 days; amplitude 0.004-0.024 mag depending on photometric band) and HR 2761 (period
Figure 1: Example lightcurves for ATLAS UCBH stars. The left panel shows the c-band lightcurves, and the right panel shows the corresponding o-band lightcurves for the same objects. A random selection of stars has been attempted to avoid cherry-picking the cleanest examples.
2.06 days; amplitudes 0.015-0.057 mag). They find a UCBH-type lightcurve for HR 2746 and sinusoidal variations for HR 2761. Drury et al. (2017) present Kepler and ground-based photometry of the \(\alpha^{2}\) CVn star KIC 2569073, finding a period of 14.67 days, approximately sinusoidal variations, and peak-to-trough amplitudes varying from 0.03 to 0.34 magnitudes, with a phase reversal seen in the \(B\)-band relative to the \(V\), \(R_{\rm C}\), and \(I_{\rm C}\) bands. Dukes & Adelman (2018) acquired precise \(uvby\) lightcurves of eight \(\alpha^{2}\) CVn stars. One of these, HD 26792, shows a perfect UCBH-type light curve in all filters, while HD 5797 shows a noisy but UCBH-like light curve in the \(y\) filter only. Among the other six stars, none shows a UCBH-type light curve shape in any filter. Most recently, Bernhard et al. (2020) analyzed and published light curves for 294 magnetic chemically peculiar stars (i.e., Ap/Bp stars) using data from three recent surveys that used very small apertures and hence maintained photometric precision for very bright stars. We find that 33 of the Bernhard et al. (2020) light curves are of the UCBH type (more on this in Section 4).
## 2 The characteristic light curves of UCBH stars
In Tables 1 and 2 we present the full set of UCBH stars we have identified in ATLAS data, divided into those that are (Table 1) and are not (Table 2) probable \(\alpha^{2}\) CVn stars, based on color and luminosity thresholds discussed in Section 5.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ ATLAS ID\({}^{a}\)} & Period (d) & \(c-o^{b}\) & amplitude\({}^{c}\) & g & r & g-z & parallax\({}^{d}\) (mas) & \(M_{V}\,e\) & \(M_{K}\,e\) \\ \hline J010.7230+57.8087 & 4.435267 & -0.004 & 0.208 & 12.962 & 12.831 & -0.03 & \(0.544\pm 0.014\) & \(1.56^{+0.05}_{-0.05}\) & \(1.01^{+0.05}_{-0.05}\) \\ J042.3381+51.3632 & 2.762592 & 0.069 & 0.172 & 13.762 & 13.581 & 0.08 & \(0.389\pm 0.018\) & \(1.60^{+0.10}_{-0.11}\) & \(0.88^{+0.10}_{-0.11}\) \\ J053.4996+56.7983 & 3.709273 & 0.372 & 0.176 & 12.516 & 12.113 & 0.74 & \(0.726\pm 0.141\) & \(1.58^{+0.39}_{-0.47}\) & \(-0.09^{+0.39}_{-0.47}\) \\ J060.4363+55.5067 & 2.474011 & 0.329 & 0.126 & 14.202 & 13.778 & 0.78 & \(0.527\pm 0.015\) & \(2.56^{+0.06}_{-0.06}\) & \(0.87^{+0.06}_{-0.06}\) \\ J061.6000+59.6651 & 3.434188 & 0.158 & 0.191 & 14.400 & 14.097 & 0.35 & \(0.274\pm 0.023\) & \(1.41^{+0.18}_{-0.19}\) & \(0.22^{+0.18}_{-0.19}\) \\ J062.2757+57.3439 & 2.119598 & 0.281 & 0.131 & 14.863 & 14.497 & 0.54 & \(0.262\pm 0.020\) & \(1.74^{+0.16}_{-0.17}\) & \(0.39^{+0.16}_{-0.17}\) \\ J063.5805+46.9075 & 1.456892 & 0.271 & 0.145 & 13.601 & 13.237 & 0.58 & \(0.274\pm 0.082\) & \(0.58^{+0.57}_{-0.77}\) & \(-0.88^{+0.57}_{-0.77}\) \\ J065.5257+51.2992 & 3.622927 & 0.358 & 0.209 & 15.384 & 14.837 & 0.77 & \(0.324\pm 0.024\) & \(2.62^{+0.16}_{-0.17}\) & \(0.78^{+0.16}_{-0.17}\) \\ J065.7038+47.6938 & 2.774777 & 0.174 & 0.154 & 11.687 & 11.452 & 0.34 & \(1.222\pm 0.189\) & \(1.98^{+0.31}_{-0.37}\) & \(0.79^{+0.31}_{-0.37}\) \\ J065.8718+43.5268 & 1.453359 & 0.157 & 0.115 & 14.617 & 14.437 & 0.23 & \(0.269\pm 0.021\) & \(1.66^{+0.16}_{-0.18}\) & \(0.63^{+0.16}_{-0.18}\) \\ J072.5642+39.5294\({}^{f}\) & 2.263109 & 0.220 & 0.143 & 14.852 & 14.597 & 0.43 & \(-0.022\pm 0.123\) & \(<2.41\) & \(<1.10\) \\ J073.7460+43.3008 & 7.746157 & 0.011 & 0.165 & 13.697 & 13.624 & -0.04 & \(0.322\pm 0.018\) & \(1.19^{+0.12}_{-0.12}\) & \(0.54^{+0.12}_{-0.12}\) \\ J076.4023+45.6101 & 2.156734 & 0.222 & 0.158 & 14.793 & 14.528 & 0.38 & \(0.322\pm 0.024\) & \(2.18^{+0.16}_{-0.17}\) & \(0.83^{+0.16}_{-0.17}\) \\ J079.6501+37.6469 & 2.445079 & 0.252 & 0.076 & 14.636 & 14.289 & 0.56 & \(0.290\pm 0.037\) & \(1.74^{+0.26}_{-0.29}\) & \(0.18^{+0.26}_{-0.29}\) \\ J080.3836+43.4165 & 2.896787 & 0.112 & 0.073 & 14.607 & 14.443 & 0.19 & \(0.174\pm 0.023\) & \(0.71^{+0.27}_{-0.31}\) & \(0.28^{+0.27}_{-0.31}\) \\ J081.9629+42.4325 & 1.761931 & 0.024 & 0.187 & 13.104 & 13.071 & -0.02 & \(0.537\pm 0.021\) & \(1.73^{+0.08}_{-0.08}\) & \(1.06^{+0.08}_{-0.08}\) \\ J082.4358+39.0290 & 1.947387 & 0.313 & 0.169 & 15.162 & 14.827 & 0.64 & \(0.350\pm 0.025\) & \(2.68^{+0.15}_{-0.16}\) & \(0.95^{+0.15}_{-0.16}\) \\ J084.6836+33.5786 & 1.083538 & 0.159 & 0.072 & 14.163 & 13.938 & 0.31 & \(0.422\pm 0.019\) & \(2.16^{+0.09}_{-0.10}\) & \(0.94^{+0.09}_{-0.10}\) \\ J084.9321+37.2119 & 1.838578 & 0.348 & 0.203 & 15.146 & 14.674 & 0.79 & \(0.441\pm 0.027\) & \(3.09^{+0.13}_{-0.14}\) & \(1.17^{+0.13}_{-0.14}\) \\ J089.1590+29.6893 & 2.092882 & 0.111 & 0.201 & 15.228 & 15.023 & 0.37 & \(0.221\pm 0.029\) & \(1.83^{+0.27}_{-0.31}\) & \(
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline ATLAS ID\({}^{a}\) & Period (d) & \(c-o^{b}\) & amplitude\({}^{c}\) & g & r & g-z & parallax\({}^{d}\) (mas) & \(M_{V}\)\({}^{e}\) & \(M_{K}\)\({}^{e}\) \\ \hline J091.6776+11.6763 & 2.546904 & 0.131 & 0.222 & 15.743 & 15.453 & 0.33 & \(0.228\pm 0.032\) & \(2.36^{+0.28}_{-0.33}\) & \(1.47^{+0.28}_{-0.33}\) \\ J092.1616+30.8849 & 2.167543 & 0.288 & 0.130 & 14.564 & 14.178 & 0.60 & \(0.328\pm 0.022\) & \(1.92^{+0.14}_{-0.15}\) & \(0.40^{+0.14}_{-0.15}\) \\ J095.0108+01.8715 & 2.379236 & 0.179 & 0.164 & 14.074 & 13.783 & 0.45 & \(0.374\pm 0.017\) & \(1.77^{+0.10}_{-0.10}\) & \(0.52^{+0.10}_{-0.10}\) \\ J095.1623+09.4416 & 2.680946 & -0.013 & 0.055 & 13.176 & 13.154 & -0.06 & \(0.252\pm 0.016\) & \(0.17^{+0.14}_{-0.14}\) & \(-0.52^{+0.14}_{-0.14}\) \\ J098.3181-07.1966 & 1.777259 & 0.163 & 0.084 & 12.823 & 12.596 & 0.29 & \(0.736\pm 0.014\) & \(2.02^{+0.04}_{-0.04}\) & \(0.92^{+0.04}_{-0.04}\) \\ J098.5294-00.6734 & 1.756423 & 0.073 & 0.172 & 12.441 & 12.305 & 0.07 & \(0.721\pm 0.014\) & \(1.65^{+0.04}_{-0.04}\) & \(0.91^{+0.04}_{-0.04}\) \\ J098.8005+06.4102 & 5.645326 & 0.179 & 0.134 & 13.963 & 13.719 & 0.35 & \(0.390\pm 0.014\) & \(1.77^{+0.08}_{-0.08}\) & \(0.43^{+0.08}_{-0.08}\) \\ J099.2222+00.8030 & 8.34857 & 0.273 & 0.153 & 13.815 & 13.441 & 0.57 & \(0.454\pm 0.022\) & \(1.88^{+0.10}_{-0.11}\) & \(0.37^{+0.10}_{-0.11}\) \\ J100.1756+00.2310 & 1.461506 & 0.141 & 0.075 & 14.051 & 13.857 & 0.28 & \(0.428\pm 0.020\) & \(2.09^{+0.10}_{-0.10}\) & \(0.97^{+0.10}_{-0.10}\) \\ J105.5347-01.9077 & 2.944283 & -0.035 & 0.102 & 13.902 & 13.903 & -0.17 & \(0.260\pm 0.015\) & \(0.97^{+0.13}_{-0.13}\) & \(0.47^{+0.13}_{-0.13}\) \\ J106.0648-03.3054 & 1.428092 & 0.075 & 0.120 & 15.109 & 14.961 & 0.14 & \(0.216\pm 0.033\) & \(1.69^{+0.31}_{-0.37}\) & \(0.80^{+0.31}_{-0.37}\) \\ J106.2127-00.9740 & 2.027484 & -0.062 & 0.129 & 13.672 & 13.707 & -0.21 & \(0.404\pm 0.016\) & \(1.72^{+0.09}_{-0.09}\) & \(1.27^{+0.09}_{-0.09}\) \\ J107.2480-12.7673 & 2.951689 & 0.298 & 0.096 & 14.592 & 14.194 & 0.67 & \(0.320\pm 0.018\) & \(1.88^{+0.12}_{-0.13}\) & \(0.13^{+0.12}_{-0.13}\) \\ J108.1707-08.2617 & 2.079303 & 0.105 & 0.079 & 14.238 & 14.074 & 0.22 & \(0.288\pm 0.020\) & \(1.44^{+0.15}_{-0.16}\) & \(0.28^{+0.15}_{-0.16}\) \\ J109.4314-15.7080 & 3.587642 & 0.311 & 0.137 & 15.395 & 14.981 & 0.71 & \(0.234\pm 0.027\) & \(2.00^{+0.24}_{-0.27}\) & \(0.27^{+0.24}_{-0.27}\) \\ J109.7734-07.1470 & 1.858906 & -0.096 & 0.059 & 13.978 & 14.049 & -0.28 & \(0.255\pm 0.021\) & \(1.05^{+0.17}_{-0.19}\) & \(0.76^{+0.17}_{-0.19}\) \\ J110.2675-03.2520 & 2.809982 & -0.146 & 0.233 & 12.659 & 12.753 & -0.39 & \(0.445\pm 0.024\) & \(0.95^{+0.11}_{-0.12}\) & \(0.71^{+0.11}_{-0.12}\) \\ J110.2057-08.5343 & 2.501694 & 0.031 & 0.167 & 15.241 & 15.003 & 0.13 & \(0.284\pm 0.027\) & \(2.36^{+0.20}_{-0.22}\) & \(1.58^{+0.20}_{-0.22}\) \\ J110.9074-12.0800 & 1.647087 & 0.055 & 0.075 & 13.760 & 13.655 & 0.10 & \(0.347\pm 0.018\) & \(1.39^{+0.11}_{-0.11}\) & \(0.54^{+0.11}_{-0.11}\) \\ J110.9392-21.9535 & 1.609184 & 0.229 & 0.084 & 14.555 & 14.269 & 0.43 & \(0.333\pm 0.018\) & \(2.00^{+0.11}_{-0.12}\) & \(0.55^{+0.11}_{-0.12}\) \\ J111.2165-23.1396 & 7.770775 & 0.311 & 0.120 & 15.032 & 14.636 & 0.66 & \(0.215\pm 0.018\) & \(1.46^{+0.18}_{-0.19}\) & \(-0.25^{+0.18}_{-0.19}\) \\ J112.7672-19.4071 & 1.769319 & 0.065 & 0.098 & 13.521 & 13.425 & 0.06 & \(0.364\pm 0.012\) & \(1.27^{+0.07}_{-0.07}\) & \(0.33^{+0.07}_{-0.08}\) \\ J112.9447-28.3476 & 2.963217 & -0.008 & 0.110 & 14.224 & 14.185 & -0.08 & \(-0.272\pm 0.181\) & \(<1.36\) & \(<0.57\) \\ J113.2387-19.4340 & 2.273453 & 0.150 & 0.082 & 13.536 & 13.309 & 0.26 & \(0.353\pm 0.013\) & \(1.14^{+0.08}_{-
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ ATLAS IDa } & Period (d) & \(c-o\)b & amplitudec & g & r & g-z & parallaxd (mas) & \(M_{V}\)e & \(M_{K}\)e \\ \hline J057.9558+54.6451 & 2.222517 & 0.469 & 0.083 & 13.811 & 13.195 & 1.03 & \(0.843\pm 0.039\) & \(3.08^{+0.10}_{-0.10}\) & \(0.98^{+0.10}_{-0.10}\) \\ J059.8990+50.2781 & 4.764477 & 0.576 & 0.215 & 16.508 & 15.729 & 1.37 & \(0.282\pm 0.039\) & \(3.30^{+0.28}_{-0.32}\) & \(0.72^{+0.28}_{-0.32}\) \\ J067.0419+51.6124 & 6.716061 & 0.413 & 0.193 & 16.158 & 15.558 & 0.98 & \(0.209\pm 0.034\) & \(2.41^{+0.33}_{-0.38}\) & \(0.59^{+0.33}_{-0.38}\) \\ J074.6123+26.0721 & 2.610292 & 0.395 & 0.078 & 14.988 & 14.503 & 0.92 & \(0.375\pm 0.021\) & \(2.57^{+0.12}_{-0.12}\) & \(0.69^{+0.12}_{-0.12}\) \\ J075.5525+46.9691 & 2.664824 & 0.440 & 0.095 & 14.993 & 14.414 & 0.98 & \(0.366\pm 0.025\) & \(2.47^{+0.15}_{-0.16}\) & \(0.43^{+0.15}_{-0.16}\) \\ J082.6906-06.8709 & 1.137648 & 1.094 & 0.215 & 16.911 & 15.651 & 2.76 & \(2.814\pm 0.028\) & \(8.43^{+0.02}_{-0.02}\) & \(3.93^{+0.02}_{-0.02}\) \\ J083.1858+21.5801 & 2.081992 & 0.750 & 0.398 & 16.055 & 15.071 & 1.78 & \(0.787\pm 0.059\) & \(4.96^{+0.16}_{-0.17}\) & \(1.42^{+0.16}_{-0.17}\) \\ J089.0960+24.8012 & 1.363957 & 0.369 & 0.164 & 15.985 & 15.468 & 0.83 & \(0.256\pm 0.036\) & \(2.72^{+0.29}_{-0.33}\) & \(0.74^{+0.29}_{-0.33}\) \\ J095.3593+12.9723 & 4.515769 & 0.521 & 0.213 & 16.424 & 15.589 & 1.19 & \(0.225\pm 0.040\) & \(2.70^{+0.36}_{-0.43}\) & \(0.47^{+0.36}_{-0.43}\) \\ J101.5867-01.46979 & \(1.903659\) & 0.214 & 0.081 & 13.631 & 13.349 & 0.49 & \(1.146\pm 0.628\) & \(3.76^{+0.95}_{-1.72}\) & \(2.29^{+0.95}_{-1.72}\) \\ J114.4350-24.3432 & 2.242821 & 0.376 & 0.123 & 15.442 & 14.839 & 0.92 & \(0.345\pm 0.023\) & \(2.78^{+0.14}_{-0.15}\) & \(0.85^{+0.14}_{-0.15}\) \\ J138.5489+06.3771 & 4.097173 & 0.440 & 0.178 & 15.372 & 14.779 & 1.05 & \(0.598\pm 0.031\) & \(3.91^{+0.11}_{-0.12}\) & \(1.47^{+0.11}_{-0.11}\) \\ J207.7199+36.7006f & \(3.31534\) & -0.373 & 0.318 & 13.290 & 13.645 & -0.95 & \(1.095\pm 0.024\) & \(3.69^{+0.05}_{-0.05}\) & \(4.00^{+0.05}_{-0.05}\) \\ J238.9224-20.7209 & 1.028827 & 1.235 & 0.260 & 15.358 & 13.964 & 3.11 & \(7.149\pm 0.019\) & \(8.82^{+0.01}_{-0.01}\) & \(3.72^{+0.01}_{-0.01}\) \\ J266.7656+06.0408g & \(4.555574\) & 0.508 & 0.298 & 14.688 & 14.303 & 0.56 & \(0.954\pm 0.029\) & \(4.36^{+0.06}_{-0.07}\) & \(\cdots\) \\ J279.0944-07.2749 & 3.236192 & 0.561 & 0.176 & 15.053 & 14.377 & 1.29 & \(0.520\pm 0.022\) & \(3.24^{+0.09}_{-0.09}\) & \(0.85^{+0.09}_{-0.09}\) \\ \hline \end{tabular}
\end{table}
Table 2: ATLAS UCBH stars that probably are not \(\alpha\)\({}^{2}\) CVn stars
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ ATLAS IDa } & Period (d) & \(c-o\)b } & amplitudec & g & r & g-z & parallaxd (mas) & \(M_{V}\)e & \(M_{K}\)e [FOOTNOTE:e]Footnote e: The absolute magnitudes quoted for these stars are 3 \(\sigma\) upper limits, since their nominal parallaxes are
Figures 1 and 2 give examples of the specific and unusual 'upside-down contact binary' light curve shape that defines these stars. The UCBH lightcurves have narrow, symmetrical maxima and broad, nearly-flat minima. The machine learning we used in ATLAS DR1 classified most UCBH stars as pulsators. However, known classes of pulsating variables typically have markedly asymmetrical maxima (the familiar'sawtooth' shape of RRAB and \(\delta\) Scuti variables) or else more sinusoidal variations (RRC and some types of Cepheids). Furthermore, our spectral types combined with absolute magnitudes based on Gaia distances (Section 5) indicate that most UCBH variables are A-type or late B-type main sequence stars, and when such stars pulsate (e.g., the \(\delta\) Scuti stars), their fundamental frequencies lead to periods much shorter than those of UCBH stars.
Alternatively, as illustrated by Figure 2, a rotating star with a single bright spot near the equator will naturally exhibit UCBH-type variations for a wide range of non-polar viewing geometries. The probability that our line of sight to a randomly oriented star will be inclined by an angle \(\theta\) to its rotation axis is proportional to \(\sin(\theta)\), so near-polar (\(\theta\sim 0\)) viewing geometries are statistically disfavored. Hence, if the Milky Way contains a population of stars that have a single bright spot at low latitude, simple geometry dictates that the majority of them _must_ appear as UCBH stars.
A bright, low-latitude spot is not the only way to produce a UCBH-type rotational lightcurve. The pink curves in Figure 2 demonstrate that a band of equatorial dark spots with a gap in it will produce a similar effect. However, this band-and-gap explanation (though it may apply to some systems) is more complex and specific: Occam's Razor favors the model with just a single bright spot. Unless stars with such a feature are vanishingly rare in the Milky Way, the geometrical argument we have already made demonstrates that they must be represented among our UCBH objects.
To explore the photometric behavior of UCBH stars with higher precision and more wavelength bands, we monitored the UCBH star ATO J110.9074-12.0800 intensively for five nights (UT 2019 January 18-22) using the University of Hawaii 2.2 meter telescope on Maunakea (Figure 3). This star was later spectrally confirmed to be an Ap star and hence an \(\alpha^{2}\) CVn variable. For our photometric monitoring we used the \(B\), \(R\), \(I\), and \(z\) filters, finding very similar lightcurve shape in all filters, with slightly reduced amplitudes in the \(B\) and
possibly \(z\) bands. Interestingly, we do not see a phase-reversal at \(B\)-band relative to \(R\) and \(I\), such as was noted by Drury et al. (2017) in the sinusoidally-varying \(\alpha^{2}\) CVn variable KIC 2569073.
The phase and lightcurve shape of ATO J110.9074-12.0800 have remained coherent and unchanging to within measurement error from the beginning of ATLAS data acquisition in October 2015 up through the UH 2.2 meter monitoring in January 2019. In the higher-precision 2.2 meter data the maximum continues to appear very symmetrical. A slant and slight 'bump' on the floor of the broad minimum, hinted at in the ATLAS data, are confirmed by the more precise photometry. Such features also seem to be indicated in ATLAS data for other UCBH stars, notably ATO J010.7230+57.8087 (Figure 2). This indicates that the feature producing the photometric maximum, while dominant, is not necessarily the only photospheric inhomogeneity on a typical UCBH star.
## 3 Spectra of UCBH stars
### Observations
In 2018 we acquired spectra of five ATLAS UCBH stars with the GMOS spectrograph on the 8-meter Gemini North telescope under proposal ID GN-2018B-Q-216. This proposal was designed to take advantage of the worst usable weather by targeting bright objects that could be usefully observed even through moonlit cloud with bad seeing. This observing plan produced a win-win situation in which we provided Gemini queue observers with targets for conditions when almost nothing else could be observed, while the spectra they acquired for us were in fact considerably better than our nominal requirements. This happened because worst-case observing conditions are statistically rare, so the majority of our data were acquired in somewhat better weather than we had planned for (though still too poor for most observing programs).
Our plan of exploiting the worst usable weather at Gemini determined determined both our choice of target objects (the brightest ATLAS UCBH stars observable) and the slit width (2.0 arcsec, to allow for very bad seeing). We used the GMOS B1200 grating, which delivers a nominal resolution of \(R=3744\) at 4630A with an 0.5 arcsecond slit2. Our 2.0 arcsecond slit would therefore be expected to deliver \(R=936\), four times worse than nominal - but the actual resolution could be higher if the seeing was smaller than the slit. We observed each target alternately with two different central wavelength settings, 4400A and 4680A, enabling us to fill in gaps between the three GMOS CCDs and obtain continuous spectral coverage from 3650-5500A.
Figure 2: The characteristic lightcurves of ATLAS UCBH stars match those expected from a single bright spot on a rotating star for a variety of spot sizes, contrasts, latitudes, and sub-observer latitudes, both with and without limb-darkening. _Left:_ Example lightcurves for four ATLAS UCBH stars, with c-band photometry in blue, o-band in red, and Fourier fits (see Heinze et al. (2018)) plotted as solid curves. _Right:_ Model light curves for rotating stars with a single bright spot. They resemble UCBH light curves except for spot diameters larger than 120\({}^{\circ}\), and for high sub-observer latitudes (that is, low inclinations), when the spots can become circumpolar. A similar light curve results from the more contrived case of a ring of dark spots with a gap (pink curves in the upper plots).
In addition to our GMOS observations, we acquired spectra of nine additional UCBH stars using the SNIFS instrument (Lantz et al., 2004) on the University of Hawaii 2.2 meter telescope on Maunakea, in February and March 2019. A substantially larger number of spectra was originally expected from this observing program, but it was plagued with bad weather and equipment problems. The SNIFS instrument has a blue module delivering spectral coverage from 3200-5600A with resolution \(R\sim 1000\) at 4300A, and a red module covering 5200-10000A with \(R\sim 1300\) at 7600A (Lantz et al., 2004).
Our nominal resolution element at \(\sim 4500\)A should be 4500/936 = 4.8A with GMOS and 4500/1000 = 4.5A with SNIFS. Comparisons of our GMOS and SNIFS spectra demonstrate that GMOS has actually delivered better resolution -- an indication that the seeing was smaller than our 2.0 arcsecond slit width during our GMOS observations.
### Identification of Ap Stars
Our Gemini spectra (Figure 4) showed all five science targets to be Ap/Bp stars: that is, A-type or B-type stars with enormously enhanced abundances of a few specific heavy elements (mainly silicon, europium, chromium, and strontium). The peculiar lines that we detect most strongly form two blends, one near 4080A (likely a blend of Sr and Cr) and another near 4130A (a blend of Si and Eu). The resolution of our spectra is insufficient to determine the relative contributions of each element to the blended lines. Previous work on such stars (see, e.g. Preston, 1974; Dukes & Adelman, 2018) distinguishes fine gradations of spectral classification depending on magnetic field strength and on what elements are enhanced to what extent. Since multiple lines are blended in our spectra, they do not enable us to assign exact types of chemical peculiarity -- but they do establish that our targets fall into the broad category of chemically peculiar A-type or B-type stars.
The SNIFS spectra, though not matching the resolution and signal-to-noise ratio of GMOS, are nevertheless sufficient to show that five of our SNIFS targets are Ap/Bp stars, while the rest are not A or B stars at all. Figure 4 shows spectra for all ten of our spectrally confirmed Ap/Bp stars, five from GMOS and five from SNIFS.
### Spectral Types of UCBH stars
We have attempted to determine spectral types for the UCBH stars for which we have spectra. We have done this classification manually using comparison spectra from the Stellar Spectral Flux Library of Pickles (1998), guided in part by the diagnostic spectral lines mentioned in the Atlas of Stellar Spectra3. The effec
Figure 3: Folded lightcurves of ATO J110.9074-12.0800. _Left:_ apparent magnitude vs. phase for targeted \(B\), \(R\), \(I\), \(z\)-band photometry from the University of Hawaii 2.2m telescope on Mauna Kea, together with the \(o\) and \(c\)-band photometry from ATLAS. _Right:_ Same data as at left, but with magnitude offsets applied to facilitate comparing the light curves in greater detail. The light curve shape is consistent from 2015 (ATLAS data) through 2019 (UH 2.2m data), and across the different photometric bands probed here — in strong contrast to the sinusoidally-varying \(\alpha^{2}\) CVn variable KIC 2569073, which showed a phase-reversal in the \(B\)-band relative to \(R_{\rm C}\) and \(I_{\rm C}\)(Drury et al., 2017).
tive resolution of our SNIFS spectra matches fairly well with the 5A sampling used in the Pickles (1998) library. The higher resolution of our GMOS spectra made narrow spectral lines look too deep relative to the Pickles (1998) library, so we smoothed the GMOS classification spectra using a Gaussian blur of \(\sigma=3\)A.
The classification spectra of our \(\alpha^{2}\) CVn stars, with appropriate comparison spectra from Pickles (1998), are shown in Figure 5, and the spectral types we assigned are in Table 3. For these A-type (or very late B) stars, classification with uncertainty no greater than one spectral subtype appears to be possible based on the strength of the calcium H line at 3969A (which is the only one in our spectra with significant diagnostic power). Based on this, we would expect our spectral types to be quite accurate -- with the important caveat that the chemical peculiarity of our stars might have affected the calcium H line or our perception of it (e.g., by changing the nearby continuum). That the spectra are not typical of A stars is obvious even at reduced resolution: besides numerous lines not present in the comparison spectra, the hydrogen Balmer lines seem somewhat weaker in the UCBH stars. However, unless classification biases from the peculiar spectra are extremely severe, there is no doubt that all of our \(\alpha^{2}\) CVn stars are early A or very late B-type.
Four of the UCBH stars for which we obtained SNIFS spectra were not \(\alpha^{2}\) CVn stars. One of these, ATO J207.7199+36.7006, is in fact the known subdwarf OB star PG 1348+369 (Green et al., 1986; Wesemael et al., 1992). Since our spectra are consistent with the published results, and indicate a star much too hot to be an \(\alpha^{2}\) CVn variable, we have not attempted to reclassify this object. Spectra of the remaining three UCBH stars, which have much later spectral types, are shown along with Pickles (1998) comparison spectra in Figure 6. The spectral types we assigned them are provided in Table 3. For these classifications, the diagnostic lines listed in the Atlas of Stellar Spectra were of limited value because our SNIFS spectra of these red objects were very faint in the blue region covered by the Atlas. Hence, we made use of many other lines at much longer wavelengths that appeared to be diagnostic based on their variations with spectral type seen in the Pickles (1998) library. We expect our classifications of these later-type stars to have an accuracy of around two spectral subtypes. Interestingly, all three of our late-type UCBH stars show significant H\(\alpha\) emission (Figure 6, right panel).
## 4 UCBH stars from Bernhard et al. (2020)
Bernhard et al. (2020) make a remarkable contribution to the photometry of \(\alpha^{2}\) CVn stars by analyzing photometry of 294 bright Ap/Bp stars with previous spectroscopic identifications. They use data from three
Figure 4: _Left:_ Low-resolution spectra of five ATLAS ‘UCBH’ stars acquired with Gemini/GMOS (blue), compared with those of normal A-type standard stars (black). The UCBH stars have strong enhancements of specific heavy elements in their atmospheres, as indicated by the lines labeled Sr/Cr and Si/Eu. As the lines are blended at this resolution, the relative contributions of the different elements cannot be determined. _Right:_ Similar comparison for spectra of five additional UCBH stars (dark red) acquired with the SNIFS spectrograph at the UH 2.2 meter telescope on Mauna Kea. Although the SNIFS spectra do not have as high resolution and SNR as those from GMOS, the peculiar metal lines can still be clearly seen.
Figure 5: Spectral classifications of A-type UCBH stars with GMOS (left, blue) and SNIFS (right, dark red). Comparison spectra, plotted in black, are from the library of Pickles (1998). Our GMOS spectra have been smoothed to match the library resolution. Gray vertical lines mark some of the spectral lines mentioned as useful for classification in the Atlas of Stellar Spectra (see text). The spectral types we found for these stars, based almost exclusively on the changing strength of the calcium H line at 3969Å, are written on the spectra in these plots and listed Table 3.
Figure 6: Spectral classifications of late-type UCBH stars with SNIFS (left), and the detection of H\(\alpha\) emission in these stars (right, with H\(\alpha\) marked by a gray vertical line). The target spectra are shown in dark red, while comparison spectra from the library of Pickles (1998) are plotted in black. Gray vertical lines mark some of the spectral lines mentioned as useful for classification in the Atlas of Stellar Spectra, but as the Atlas covers only relatively short wavelengths, we have used many other lines and bands to arrive at the spectral types given in Table 3. The pale blue line near 7600Å in the left-hand plot marks the Fraunhofer A band, which is not intrinsic to the stars but is caused by oxygen in Earth’s atmosphere.
surveys that, by using very small apertures, maintain photometric precision for bright stars that are saturated in ATLAS photometry. Examining their published lightcurves, we identify 33 UCBH stars, which we list in Table 4 together with relevant parameters for these stars from Bernhard et al. (2020), Tonry et al. (2018), and Gaia DR3 (Gaia Collaboration et al., 2022). These stars, being much brighter than the ATLAS UCBH stars of Table 1, can more easily be explored with high resolution, high-SNR spectroscopy or spectropolarimetry such as is required for detailed abundance analysis or Zeeman Doppler imaging. Many of the Bernhard et al. (2020) stars might be less interesting targets because they have much smaller photometric amplitudes relative to the ATLAS UCBH stars -- but a few exceptions (particularly HD 191287, HD 77314, and HD 205938) have perfect UCBH lightcurves with ATLAS-like amplitudes. As these stars are magnitudes brighter than any in the ATLAS catalog, they are the most promising targets for followup spectroscopy and Zeeman Dopper imaging to probe the chemical abundances and magnetic field topologies of \(\alpha^{2}\) CVn stars with UCBH lightcurves (see Section 6).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Star} & Period(d) & Sp\({}^{\mathbf{a}}\) & amplitude\({}^{\mathbf{b}}\) & V\({}^{\mathbf{c}}\) & g-z\({}^{\mathbf{d}}\) & parallax\({}^{\mathbf{c}}\) (mas) & \(M_{V}\)\({}^{f}\) & \(M_{K}\)\({}^{f}\) & Remarks\({}^{\mathbf{g}}\) \\ \hline HD 7546 & 3.9725 & A0 & 0.03 & 9.43 & -0.525 & \(2.9380\pm 0.1027\) & \(1.77^{+0.08}_{-0.08}\) & \(-1.00^{+0.08}_{-0.08}\) & \\ HD 26792 & 3.8023 & B8 & 0.04 & 6.69 & -0.581 & \(6.1745\pm 0.0307\) & \(0.64^{+0.01}_{-0.01}\) & \(0.54^{+0.01}_{-0.01}\) & Strong \\ HD 30466 & 1.40687 & A0 & 0.03 & 7.25 & -0.402 & \(5.1396\pm 0.2791\) & \(0.80^{+0.11}_{-0.12}\) & \(0.37^{+0.11}_{-0.12}\) & \\ HD 39317 & 2.6558 & B9 & 0.01 & 5.59 & -0.731 & \(6.8553\pm 0.0959\) & \(-0.23^{+0.03}_{-0.03}\) & \(-0.25^{+0.03}_{-0.03}\) & \\ HD 43819 & 14.981 & B9 & 0.02 & 6.27 & -0.796 & \(4.0642\pm 0.1735\) & \(-0.69^{+0.09}_{-0.09}\) & \(-0.54^{+0.09}_{-0.09}\) & \\ HD 44903 & 1.41143 & A5 & 0.03 & 8.37 & -0.445 & \(4.7409\pm 0.0430\) & \(1.75^{+0.02}_{-0.02}\) & \(1.46^{+0.02}_{-0.02}\) & Strong \\ HD 46462 & 10.346 & B9 & 0.06 & 7.53 & -0.935 & \(4.1051\pm 0.3830\) & \(0.60^{+0.19}_{-0.21}\) & \(0.87^{+0.19}_{-0.21}\) & \\ HD 51418 & 5.4377 & A0 & 0.13 & 6.67 & -0.536 & \(5.6092\pm 0.0929\) & \(0.42^{+0.04}_{-0.04}\) & \(0.35^{+0.04}_{-0.04}\) & Strong \\ HD 55667 & 1.79690 & A2 & 0.03 & 6.95 & -0.645 & \(7.4800\pm 0.0277\) & \(1.32^{+0.01}_{-0.01}\) & \(1.31^{+0.01}_{-0.01}\) & Strong \\ HD 56273 & 1.78678 & B8 & 0.04 & 7.90 & -0.749 & \(2.7056\pm 0.0353\) & \(0.06^{+0.03}_{-0.03}\) & \(0.22^{+0.03}_{-0.03}\) & \\ HD 77314 & 2.86445 & A2 & 0.08 & 7.24 & -0.528 & \(4.4294\pm 0.0493\) & \(0.47^{+0.02}_{-0.03}\) & \(0.20^{+0.02}_{-0.03}\) & Ideal \\ HD 88701 & 25.77 & B9 & 0.06 & 9.30 & -0.453 & \(2.0931\pm 0.0201\) & \(0.90^{+0.02}_{-0.02}\) & \(0.82^{+0.02}_{-0.02}\) & \\ HD 129189 & 1.35563 & B9 & 0.03 & 8.61 & -0.511 & \(3.6861\pm 0.0231\) & \(1.44^{+0.01}_{-0.01}\) & \(1.22^{+0.01}_{-0.01}\) & \\ HD 142884 & 0.80296 & B9 & 0.02 & 6.77 & -0.581 & \(5.7423\pm 0.0415\) & \(0.56^{+0.02}_{-0.01}\) & \(0.47^{+0.02}_{-0.01}\) \\ HD 150714 & 1.62906 & A0 & 0.05 & 7.56 & \(0.342\) & \(6.0507\pm 0.0331\) & \(1.47^{+0.01}_{-0.01}\) & \(0.96^{+0.01}_{-0.01}\) \\ HD 151199 & 2.2267 & A3 & 0.01 & 6.17 & -0.539 & \(9.6547\pm 0.1270\) & \(1.09^{+0.03}_{-0.03}\) &... & \\ HD 154187 & 8.096 & A0 & 0.03 & 9.27 & 0.518 & \(3.3825\pm 0.0349\) & \(1.92^{+0.02}_{-0.02}\) & \(0.33^{+0.02}_{-0.02}\) \\ HD 173650 & 9.976 & A0 & 0.04 & 6.51 & -0.456 & \(4.0447\pm 0.0261\) & \(-0.46^{+0.01}_{-0.01}\) & \(-0.57^{+0.01}_{-0.01}\) \\ HD 176582 & 1.58193 & B5 & 0.02 & 6.40 & -0.643 & \(3.2506\pm 0.0411\) & \(-1.04^{+0.03}_{-0.03}\) & \(-0.52^{+0.03}_{-0.03}\) \\ \hline \end{tabular}
\end{table}
Table 4: \(\alpha^{2}\) CVn stars with UCBH lightcurves from Bernhard et al. (2020)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{4}{c}{ Spectral} \\ \multicolumn{1}{c}{ Star} & Type & Instrument & \(\alpha^{2}\) CVn? \\ \hline ATO J010.7230+57.8087 & A0 & GMOS & yes \\ ATO J063.5805+46.9075 & A1 & GMOS & yes \\ ATO J065.5257+51.2992 & A2 & SNIFS & yes \\ ATO J065.7038+47.6938 & B9 & GMOS & yes \\ ATO J073.7460+43.3008 & A2 & GMOS & yes \\ ATO J081.9629+42.4325 & A0 & GMOS & yes \\ ATO J082.6906-06.8709 & M2 & SNIFS & no \\ ATO J083.1858+21.5801 & K2 & SNIFS & no \\ ATO J089.1800+11.3598 & A0 & SNIFS & yes \\ ATO J092.1616+30.8849 & B9 & SNIFS & yes \\ ATO J110.2675-03.2520 & A1 & SNIFS & yes \\ ATO J110.9074-12.0800 & A0 & SNIFS & yes \\ ATO J138.5489+06.3771 & K2 & SNIFS & no \\ ATO J207.7199+36.7006 & sd0 & SNIFS & no \\ \hline \end{tabular}
\end{table}
Table 3: ATLAS UCBH stars classified with low-resolution spectra
## 5 HR Diagrams of UCBH Stars
The precision and comprehensive sky coverage of Gaia parallaxes (Gaia Collaboration et al., 2016) are revolutionizing Galactic stellar astrophysics, and our UCBH stars are no exception. Figure 7 shows observers' HR diagrams of our UCBH stars against a background plot of about \(10^{5}\) high Galactic latitude stars which outline the main sequence and the giant branch. We used \(g-z\) colors to obtain strong wavelength leverage and reduce sensitivity to the known photometric variability of these stars. Magnitudes are taken from Tonry et al. (2018), where we have determined \(V\) magnitudes from \(g\) and \(r\) using Equation 1, which comes from a transformation derived by Robert Lupton4. This transformation should be valid through the whole range of stellar colors and spectral types relevant to this paper, since it is based on Peter Setson's photometric standard stars5, which span \(B-V\) colors ranging from -0.4 to +3.5 mag: i.e., the entire range of ordinary stars from spectral types O and B through late-M.
Footnote 4: [http://classic.sdss.org/dr4/algorithms/sdssUBVRITransform.html](http://classic.sdss.org/dr4/algorithms/sdssUBVRITransform.html)
Footnote 5: See, e.g., [https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/STF](https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/STF)
\[V=g-0.5784*(g-r)-0.0038 \tag{1}\]
For the UCBH stars, we have used parallaxes from Gaia Data Release 3 (DR3; Gaia Collaboration et al., 2022), while for the gray background points in Figure 7 we have used the parallaxes in the photometric catalog of Tonry et al. (2018), which come from Gaia Data Release 2 (DR2; Gaia Collaboration et al., 2018).
Figure 7 shows a large range of colors even for the UCBH stars that we have confidently determined to be A or late B-type \(\alpha^{2}\) CVn stars. Furthermore, they are mostly redder and less luminous than nearby main sequence stars with early A spectral types. To determine
if this can be plausibly attributed to dust reddening and extinction, we used the interstellar extinction coefficients provided in Table 21.6 of Cox (2000) for \(R_{V}=3.1\). Since this table does not provide coefficients for the \(g\) and \(z\) bands we chose for our colors, we interpolated it to the effective wavelengths given for these bands by Bessell (2005). Hence, we arrived at interstellar extinction coefficients (relative to the \(V\) band) of 1.2426 for \(g\), 0.4930 for \(z\), and 0.108 for \(K\). From these, we calculated the reddening vectors plotted in both panels of Figure 7. These vectors indicate the direction a star moves on the figure as it becomes increasingly dust-reddened. We set the origin of each vector at the position of an unreddened A2V star, intended to be characteristic of a 'typical' \(\alpha^{2}\) CVn star unaffected by dust extinction -- hence, we expect reddened stars of A or late B-type to fall along the reddening vector in each plot.
The reddening vectors plotted in Figure 7 indicate that our \(\alpha^{2}\) CVn UCBH stars all have colors and absolute magnitudes close to what would be expected for reddened main sequence stars of early A-type (or late B-type). The amount of interstellar extinction implied varies greatly from star to star, but approaches two magnitudes at \(V\) band for our reddest spectrally confirmed \(\alpha^{2}\) CVn stars. By contrast, the much nearer sample of UCBH stars from Bernhard et al. (2020) are consistent with A or late B-type stars with near-zero dust extinction -- as we should expect given their much smaller distances relative to the ATLAS UCBH stars. There may be an indication that the ATLAS UCBH stars are slightly underluminous (they tend to lie slightly below the line of the reddening vector), but we cannot conclude this with confidence given our rather simplistic reddening correction.
We have used Figure 7 as a guide to the range of color and absolute magnitude inhabited by UCBH stars that are also A or late-B type \(\alpha^{2}\) CVn variables. We have selected the range -1.0 to 0.8 in \(g-z\) color, and absolute magnitude thresholds of 3.0 for \(M_{V}\) and 1.5 for \(M_{K}\), indicated by green dashed rectangles in Figure 7. We believe most of the UCBH stars in these regions of the HR diagrams will also be \(\alpha^{2}\) CVn variables. Interlopers are possible, for example from less-reddened and slightly overluminous objects of later spectral types. However, the fact that no such interlopers were identified among our spectral sample of ten objects suggests they will be a small minority. Similarly, Figure 7 indicates that many of the UCBH stars redder than our limit of \(g-z=0.8\) would also be perfectly consistent with strongly reddened \(\alpha^{2}\) CVn variables. Some of them
Figure 7: HR diagrams for UCBH stars for V-band absolute magnitude (left) and K-band (right). Absolute magnitudes of UCBH stars are based on fluxes from Tonry et al. (2018) and parallaxes from Gaia DR3 (Gaia Collaboration et al., 2016, 2021, 2022). Green rectangles illustrate the regions of each diagram from which objects were selected as probable \(\alpha^{2}\) CVn stars for inclusion in Table 1. The dark red arrow in each figure indicates the direction a star moves as it becomes increasingly dust-reddened. While classification is not definitive without spectra, the vast majority (\(>90\%\)) of stars within the green rectangles, as well as some objects that lie outside of them but along the reddening vectors, should be \(\alpha^{2}\) CVn objects. The distribution of UCBH stars from Bernhard et al. (2020) is consistent with the expectation that these nearer objects should be less reddened. Small gray points illustrate the Galactic field population using data from Tonry et al. (2018).
almost certainly are exactly that. However, objects beyond our red limit could also be less reddened evolved stars ascending the giant branch, and hence we believe \(g-z=0.8\) is a good provisional limit to maintain a fairly pure sample in the absence of spectra for most of the stars. The green rectangles drawn in Figure 7 therefore mark the boundary between stars listed as probable \(\alpha^{2}\) CVn variables in Table 1 and those listed as probably something else in Table 2. Figure 8 gives a Venn Diagram of the respective classifications.
## 6 Discussion and Conclusion
Using photometry from the ATLAS survey (Tonry et al., 2018), we have identified a rare population of periodic variable stars (the 'UCBH' stars) with characteristic lightcurves having broad minima, narrow, symmetrical maxima, and periods mostly in the range of 1-10 days. Among 142 million distinct stars analyzed in ATLAS DR1 (Heinze et al., 2018), only 98 are identified as UCBH stars. Though the relatively low amplitudes of the UCBH stars mean we could have identified them only among the brightest \(\sim 20\%\) of the DR1 sample, the fact that we found fewer than 100 in all shows they are extremely rare. Our spectroscopy of these objects indicates that most (\(\sim 75\%\)) of them are \(\alpha^{2}\) CVn variables -- that is, Ap/Bp stars that show rotationally modulated photometric variations. Although most UCBH stars are \(\alpha^{2}\) CVn variables, only a minority (10-15%) of \(\alpha^{2}\) CVn variables appear to be UCBH stars. Meanwhile, \(\alpha^{2}\) CVn stars themselves are only a subset of the Ap/Bp stars, which in turn comprise a small fraction of all A-type and B-type stars.
We have demonstrated that a single bright feature at low latitude on a rotating star will produce a UCBH-type lightcurve, by geometrical necessity (Figure 2), for the most probable viewing inclinations. Hence, if the Milky Way contains a non-negligible population of stars with bright low-latitude features, they should be represented among our UCBH objects. The fact that most UCBH stars are \(\alpha^{2}\) CVn variables suggests that the dominant astrophysical effect that can produce such a feature is connected to the \(\alpha^{2}\) CVn stars -- i.e., to the peculiar abundances of heavy elements that characterize them. Before discussing the physical connection between \(\alpha^{2}\) CVn variables and UCBH stars in more detail in Section 6.2, we briefly consider the UCBH stars that do _not_ fall into the \(\alpha^{2}\) CVn class.
### UCBH stars that are NOT \(\alpha^{2}\) CVn variables
The single localized bright spot that most simply explains a UCBH-type lightcurve could be produced by phenomona not related to the \(\alpha^{2}\) CVn stars. One example is an accretion stream impacting a stellar photosphere. This may be the explanation for some UCBH stars -- notably the hot subdwarf PG 1348+369 (Green et al., 1986; Wesemael et al., 1992).
An approximately even longitudinal distribution of _dark_ star spots with a prominent gap could also produce a UCBH-type rotationally modulated lightcurve (Figure 2), with the'missing' dark spots of the gap functioning like a single bright feature. While \(\alpha^{2}\) CVn stars are too hot for the ordinary form of dark magnetic starspots, this scenario may apply to the significant minority (22 out of 98 in the ATLAS sample; see Figure 8) of UCBH stars with much later spectral types -- a hypothesis which is further bolstered by the detection of strong H\(\alpha\) emission in all three of the late-type UCBH stars for which we have spectra. Such emission is characteristic of late type stars that are magnetically active and heavily spotted.
While late-type UCBH stars exist, the variability we observe in A-type or B-type UCBH stars cannot reasonably be attributed to an unresolved late-type companion. Such a companion would have to be several
Figure 8: Venn diagram illustrating that most UCBH stars appear to be \(\alpha^{2}\) CVn variables, although only a minority of known \(\alpha^{2}\) CVn variables have UCBH-type lightcurves. Since we have spectra for only fourteen out of 98 UCBH stars, the dividing line between \(\alpha^{2}\) CVn variables and either hotter or cooler UCBH stars is based on simple color cuts on the \(o-c\) color obtained from ATLAS lightcurves. Hence, the counts are very approximate and could be affected by interstellar reddening, both here and in Tables 1 and 2 where there same color cuts have been used.
times fainter than the primary to escape detection in our spectra, implying a very large-amplitude photometric variation for the late-type star itself. If late-type stars commonly had high-amplitude UCBH-type lightcurves, they should be much easier to detect in the field than as binary companions to brighter A stars that would dilute their photometric amplitudes. In this case, isolated late-type stars, being far more numerous than A-type or B-type stars, should dominate our UCBH sample -- the opposite of what we observe. Additionally, it would be a strange coincidence if late-type UCBH companions were found only around chemically peculiar A-type primaries. Furthermore, the literature contains many examples of UCBH-type variations in \(\alpha^{2}\) CVn stars (Section 1.2), and these stars are known to exhibit correlated spectral variations that clearly implicate the Ap/Bp star itself -- rather than a hypothetical late-type companion -- as the photometric variable.
In short, although a tiny minority of late-type stars do exhibit UCBH-type lightcurves, there is no doubt that the UCBH variations we observe in Ap/Bp stars originate from the bright stars themselves and not from an unresolved late-type companion.
### UCBH stars that ARE \(\alpha^{2}\) CVn variables
A large majority (73 out of 98) of the ATLAS UCBH stars appear to be \(\alpha^{2}\) CVn variables -- that is, Ap/Bp stars with rotationally modulated photometric variability. The Ap/Bp stars are chemically peculiar A-type or B-type stars with greatly enhanced abundances of specific heavy elements in their photospheres. The enhanced abundances are believed to be caused by radiative levitation of the heavy elements in question (Michaud, 1970), which is strongly influenced (and likely enabled) by magnetic fields (Michaud et al., 1981).
The rotationally modulated variability of \(\alpha^{2}\) CVn variables results from inhomogeneous distributions of the radiatively levitated elements across the stars' photospheres (Shulyak et al., 2010). For such a star, the single bright feature implied by a UCBH lightcurve is naturally interpreted as the region where the concentration of radiatively levitated heavy elements is the highest. Such a region is bright at optical wavelengths because the strong UV absorption lines of the levitated elements redistribute the star's intense UV flux into the optical (Michaud et al., 1981). This single region of greatly enhanced heavy element abundance likely owes its existence to a particular configuration of the magnetic field.
The expected causal connection between the magnetic field and the rotational light curve implies that the \(\alpha^{2}\) CVn variables that share UCBH-type light curves may also have similar magnetic field topologies. In this context, it is interesting that the lightcurves of some \(\alpha^{2}\) CVn UCBH stars show a small 'bump' or secondary maximum in the center of the broad, nearly flat minimum. These include ATO J010.7230+57.8087, ATO J110.9074-12.0800, and others in the ATLAS sample -- and also examples from the literature such as HD 207188 (Hensberge et al., 1977) and HD 54118 (catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog )(catalog (catalog ))(catalog (catalog ))(catalog (catalog ))(catalog (catalog ))(catalog (catalog ))(catalog (catalog ))(catalog (catalog ))(catalog (catalog ))(catalog (catalog )) (catalog(catalog )) (catalog (catalog )) (catalog(catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog(catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog(catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog (catalog )) (catalog catalog (catalog )) (catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog catalog ) (catalog ) (catalog (catalog )) (catalog catalog ) (catalog ) (catalog ) (catalog (catalog )) (catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog (catalog )) (catalog catalog ) (catalog (catalog )) (catalog catalog ) (catalog ) (catalog (catalog )) (catalog catalog ) (catalog ) (catalog ) catalog (catalog ) (catalog ) catalog (catalog ) (catalog ) (catalog ) (catalog ) catalog (catalog ) (catalog ) catalog (catalog ) (catalog ) catalog (catalog ) (catalog ) (catalog ) catalog (catalog ) (catalog ) (catalog catalog ) (catalog ) (catalog ) (catalog ) (catalog ) catalog (catalog ) (catalog ) catalog (catalog ) (catalog ) (catalog ) catalog (catalog ) (catalog )) \\ (catalog )(catalog catalog )(catalog (catalog )) & ( ( ( ( ( ( )))))))))))))))))))))))))))))}}}}(((())))))))))}((((((((((((((((((((((((((((((((((((((((1)))))))))))))))))))))))))))))))))))))))))))))))))))))))))((((())
Bernhard et al. (2020). Most of these have substantially smaller photometric amplitudes than the ATLAS UCBH stars, but there are exceptions. The most promising of these -- bright, high-amplitude variables with perfect UCBH light curves -- are HD 191287, HD 77314, and HD 205938 (see Table 4 and Bernhard et al., 2020). These stars are ideal targets for Zeeman Doppler imaging and other forms of high-resolution spectroscopic investigation to probe the detailed astrophysics behind the UCBH-type light curves of \(\alpha^{2}\) CVn variables.
## 7 Acknowledgments
This publication presents discoveries made by the Asteroid Terrestrial-Impact Last Alert System (ATLAS). Support for the ATLAS survey is provided by NASA grants NN12AR55G and 80NSSC18K0284 under the guidance of Lindley Johnson and Kelly Fast.
This research is based on observations obtained at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). These observations were obtained under Gemini Program ID GN-2018B-Q-216.
This work was enabled by observations made from the Gemini North telescope and the University of Hawaii 2.2 meter telescope, both located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are grateful for the privilege of observing the Universe from a place that is unique both for its astronomical quality and for its place in Hawaiian indigenous culture.
We thank Simon Murphy for helping us realize that our mysterious objects were \(\alpha^{2}\) CVn stars, and for giving us guidance about which elements were likely responsible for the peculiar spectral lines we observed.
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
This publication makes use of the SIMBAD online database, operated at CDS, Strasbourg, France, and the VizieR online database (see Ochsenbein et al. (2000)).
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
We have also made extensive use of information and code from Press et al. (1992). Facilities:
_Facility:_ Gemini North, UH88
|
2309.10970 | Real roots of hypergeometric polynomials via finite free convolution | We examine two binary operations on the set of algebraic polynomials, known
as multiplicative and additive finite free convolutions, specifically in the
context of hypergeometric polynomials. We show that the representation of a
hypergeometric polynomial as a finite free convolution of more elementary
blocks, combined with the preservation of the real zeros and interlacing by the
free convolutions, is an effective tool that allows us to analyze when all
roots of a specific hypergeometric polynomial are real. Moreover, the known
limit behavior of finite free convolutions allows us to write the asymptotic
zero distribution of some hypergeometric polynomials as free convolutions of
Marchenko-Pastur, reversed Marchenko-Pastur, and free beta laws, which has an
independent interest within free probability. | Andrei Martinez-Finkelshtein, Rafael Morales, Daniel Perales | 2023-09-19T23:53:11Z | http://arxiv.org/abs/2309.10970v3 | # Real roots of hypergeometric polynomials via finite free convolution
###### Abstract.
We examine two binary operations on the set of algebraic polynomials, known as multiplicative and additive finite free convolutions, specifically in the context of hypergeometric polynomials. We show that the representation of a hypergeometric polynomial as a finite free convolution of more elementary blocks, combined with the preservation of the real zeros and interlacing by the free convolutions, is an effective tool that allows us to analyze when all roots of a specific hypergeometric polynomial are real. Moreover, the known limit behavior of finite free convolutions allows us to write the asymptotic zero distribution of some hypergeometric polynomials as free convolutions of Marchenko-Pastur, reciprocal Marchenko-Pastur, and free beta laws, which has an independent interest within free probability.
Key words and phrases:Hypergeometric polynomials; Finite free convolution; Free probability; Zeros 2020 Mathematics Subject Classification: Primary: 33C20; Secondary: 33C45, 42C05, 46L54
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Polynomials and their coefficients
* 2.2 Hypergeometric polynomials
* 2.3 Some classical hypergeometric polynomials
* 2.4 Finite free convolution of polynomials
* 2.5 Real roots, interlacing, and free finite convolution
* 3 Convolutions of hypergeometric polynomials
* 3.1 Finite free multiplicative convolution
* 3.2 Finite free additive convolution
* 4 Real zeros of hypergeometric polynomials
* 4.1 Simplest real hypergeometric polynomials
* 4.2 General hypergeometric polynomials
* 4.3 \({}_{2}F_{2}\) and \({}_{3}F_{1}\) Hypergeometric polynomials
* 4.4 \({}_{3}F_{2}\) Generalized hypergeometric polynomials
* 5 Finite free probability and asymptotics
* 5.1 Free probability
* 5.2 Parameter rescaling
* 5.3 Asymptotic results and new insights in free probability
## 1. Introduction
The definition of the general hypergeometric function \({}_{i+1}F_{j}\) with \(i+1\) numerator and \(j\) denominator parameters is well known, see (5) below. If one of the numerator parameters is equal to a negative integer, say \(-n\), with \(n\in\mathbb{N}\), then the series terminates and is a polynomial of degree \(n\). The natural question that arises in connection with any polynomial is the location and behavior of its zeros, in particular, when they are all real ("real-rootedness"). If all its zeros are real, we also want to know additional properties like positivity/negativity, interlacing, and monotonicity with respect to the parameters. This has importance, among other matters, in the study of the Laguerre-Polya class \(\mathcal{L}\)-\(\mathcal{P}\) of entire functions (functions that can be obtained as a limit, uniformly on compact subsets of \(\mathbb{C}\), of a sequence of negative-real-rooted polynomials, see [50]).
The connection between \({}_{i+1}F_{j}\) hypergeometric polynomials and some classical families of polynomials, in many cases orthogonal, yields straightforward answers to these questions, at least for small values of \(i\) and \(j\). But when \(i\geq 1\) and \(j\geq 2\), the problem becomes more difficult due to the limited number of tools that allow us to investigate the zero location.
One of such tools is the idea of transformations acting on the space of polynomials. Several such transformations have "zero-mapping" properties, the differentiation acting on polynomials with all real roots being the simplest example. Further examples of such linear transformations can be constructed within the theory of multiplier sequences, originated in [46], see also [28, 11, 12, 24, 9, 6]. In the classical theory, multiplier sequences that preserve real zeros are characterized by means of certain analytic properties of their generating functions (e.g., that they belong to the \(\mathcal{L}\)-\(\mathcal{P}\) class).
Several of these transformations can also be written as a "convolution" of a given polynomial with another polynomial or function. Again, many results can be traced back to the work of Szego, Schur, Walsh, and others. Recently, several such transformations have been rediscovered as a finite analogue of free probability, named generically as finite free convolution of polynomials [40]. They have a number of very useful properties, not only preserving real-rootedness, but also interlacing, monotonicity and even asymptotic distribution of zeros under certain conditions.
The connection between these polynomial convolutions and free probability is revealed in the asymptotic regime, when we consider the zero-counting measure (also known in this context as the empirical root distribution) of a polynomial of degree \(n\) and let the degree tend to \(\infty\) to obtain a limiting measure. Then the finite free convolution of polynomials turns into a free convolution of measures. This interesting connection has benefited both areas of research. On the one hand, the several relations between measures studied in free probability can guide our intuition on the type of relation that their polynomial analogues could satisfy, as well as provide a simple way to compute limiting measures using free probability. On the other hand, some properties that are clear in the context of discrete measures (such as zero-counting measures of polynomials) give a concrete explanation to phenomena that are not apparent when working with absolutely continuous measures.
In this paper, we examine two of such finite free convolutions, namely the multiplicative \(\boxtimes_{n}\) (also known as Schur-Szego composition) and the additive \(\boxplus_{n}\) convolutions, specifically in the context of hypergeometric polynomials. The main finding is that these operations have natural realizations in the class of these polynomials, providing an additional tool for studying their zeros. To make it more precise, as well as to provide a guide to facilitate the reader to navigate the unavoidable abundance of formulas and identities, we give a brief outline of the main highlights of this paper next.
We introduce all the necessary notation and facts in Section 2. In particular, given two complex polynomials
\[p(x)=\sum_{i=0}^{n}x^{n-i}(-1)^{i}e_{i}(p)\qquad\text{and}\qquad q(x)=\sum_{i=0} ^{n}x^{n-i}(-1)^{i}e_{i}(q)\]
of degree \(n\), the finite free additive convolution, \(p\boxplus_{n}q\), and the finite free multiplicative convolution, \(p\boxtimes_{n}q\), are defined as:
\[[p\boxplus_{n}q](x):=\sum_{k=0}^{n}x^{n-k}(-1)^{k}\sum_{i+j=k}\frac{(n-i)!(n-j)!}{n!(n-k)!}\,e_{i}(p)e_{j}(q),\]
and
\[[p\boxtimes_{n}q](x):=\sum_{k=0}^{n}x^{n-k}(-1)^{k}\binom{n}{k}^{-1}e_{i}(p)e_ {k}(q).\]
These operations are closed on the set of polynomials with all real positive roots, making them a useful tool to study real-rooted polynomials, their root interlacing, and root separation; see Subsections 2.4 and 2.5 below for details.
Our goal is to study the effect of these operations on the roots of hypergeometric polynomials
\[{}_{i+1}\mathcal{F}_{j}\binom{-n,\boldsymbol{a}}{\boldsymbol{b}};x\right):= \left(\boldsymbol{b}\right)^{\overline{n}}\,{}_{i+1}F_{j}\binom{-n, \boldsymbol{a}}{\boldsymbol{b}};x\right)=\left(\boldsymbol{b}\right)^{ \overline{n}}\sum_{k=0}^{n}\frac{\left(-n\right)^{\overline{k}}\left( \boldsymbol{a}\right)^{\overline{k}}}{\left(\boldsymbol{b}\right)^{\overline{ k}}}\frac{x^{k}}{k!},\]
where \(\boldsymbol{a}=(a_{1},\ldots,a_{i})\in\mathbb{R}^{i}\) and \(\boldsymbol{b}=(b_{1},\ldots,b_{j})\in\mathbb{R}^{j}\) are vectors of parameters, and \(\left(\boldsymbol{a}\right)^{\overline{k}}:=\left(a_{1}\right)^{\overline{k}} \left(a_{2}\right)^{\overline{k}}\ldots\left(a_{s}\right)^{\overline{k}}\), with \(\left(a\right)^{\overline{k}}:=a(a+1)\ldots(a+k-1)\), denotes the rising factorial; see Subsection 2.1.
Section 3 contains the main results on the representation of finite free multiplicative and additive convolutions of two hypergeometric polynomials. For instance, we show that the free multiplicative convolution satisfies
\[{}_{i+1}\mathcal{F}_{j_{1}}\binom{-n,\boldsymbol{a}_{1}}{\boldsymbol{b}_{1}} ;x\right)\boxtimes_{n}\,{}_{i_{2}+1}\mathcal{F}_{j_{2}}\binom{-n,\boldsymbol{a }_{2}}{\boldsymbol{b}_{2}};x\right)=\,{}_{i_{1}+i_{2}+1}\mathcal{F}_{j_{1}+j_{ 2}}\binom{-n,\boldsymbol{a}_{1},\boldsymbol{a}_{2}}{\boldsymbol{b}_{1}, \boldsymbol{b}_{2}};x\bigg{)},\]
(Theorem 3.1), while closed expressions for the additive convolution
\[{}_{i+1}\mathcal{F}_{j}\binom{-n,\,\,\boldsymbol{a}}{\boldsymbol{b}};x\right) \boxplus_{n}\,{}_{s+1}\mathcal{F}_{t}\binom{-n,\,\,\boldsymbol{c}}{\boldsymbol{ d}};x\bigg{)},\]
follow from factorizations (summation formulas) of hypergeometric functions of the form
\[{}_{j_{1}}F_{i_{1}}\!\left(\begin{matrix}\boldsymbol{a}_{1}\\ \boldsymbol{b}_{1}\end{matrix};x\right)\,{}_{j_{2}}F_{i_{2}}\!\left(\begin{matrix} \boldsymbol{a}_{2}\\ \boldsymbol{b}_{2}\end{matrix};x\right)=\,{}_{j_{3}}F_{i_{3}}\!\left( \begin{matrix}\boldsymbol{a}_{3}\\ \boldsymbol{b}_{3}\end{matrix};x\right)\!,\]
(Theorem 3.4 and Corollary 3.5). These formulas allow us to assemble more complicated hypergeometric polynomials from simpler hypergeometric "building blocks".
Combining knowledge of the behavior of the zeros of these blocks with the zero-preserving properties of the finite free convolution, we can obtain further results on zeros of hypergeometric polynomials (Section 4). For small values of \(i\) and \(j\), the \({}_{i+1}\mathcal{F}_{j}\) hypergeometric polynomials correspond to classical families of polynomials: Laguerre, Bessel, and Jacobi. Their root location has been extensively studied, with very precise descriptions on when the polynomials are real-rooted, when they interlace, and several results on the asymptotic distribution of the zero counting measures. A combination of this knowledge with the results of Section 3 yields a systematic approach to the construction of families of real-rooted hypergeometric polynomials for larger \(i\) and \(j\).
For instance, here are some of the general facts we can establish about zeros of the hypergeometric polynomial
\[p(x)=\ _{i+1}\mathcal{F}_{j}\binom{-n,\ \boldsymbol{a}}{\boldsymbol{b}};x\bigg{)}.\]
* If \(b_{1},\ldots,b_{j}>0\) and \(a_{1},\ldots,a_{i}<-n+1\) then \(p\) is real-rooted with all the roots of the same sign (positive if \(i\) is even, or negative if \(i\) is odd), see Theorem 4.6.
* If \(j\geq i\), \(b_{1},\ldots,b_{j}>0\) and \(a_{1},\ldots,a_{i}\in\mathbb{R}\) are such that \(a_{s}\geq n-1+b_{s}\) for \(s=1,\ldots,i\), then \(p\) has all positive roots, see Theorem 4.7.
For the \({}_{2}\mathcal{F}_{2}\), \({}_{1}\mathcal{F}_{3}\), and \({}_{3}\mathcal{F}_{2}\) polynomials, we provide more specific results in Sections 4.3 and 4.4. For the reader's convenience, we have compiled the main combinations of the hypergeometric parameters for which the corresponding polynomials are real-rooted in Tables 3-8. However, we want to make clear that neither these results are exhaustive nor we consider the free convolution to be the universal tool for establishing the real-rootedness of such polynomials. Instead, our goal is to illustrate how this approach can yield some new and non-trivial results or provide alternative proofs of known facts.
Finally, in Section 5 we use a simple reparametrization to recast the previous results in the framework of finite free probability in the hope of giving additional intuition or insight to the readers more familiar with this field. Moreover, with this new reformulation the asymptotic root-counting measure of hypergeometric polynomials is reduced to studying the distribution of the addition and multiplication of free random variables that obey the Marchenko-Pastur, reciprocal Marchenko-Pastur, or free beta laws.
This text is part of a larger project that includes an article [43] in preparation.
## 2. Preliminaries
### Polynomials and their coefficients
We start by introducing some notation. In what follows, \(\mathbb{P}_{n}\) stands for all algebraic polynomials of degree \(\leq n\), and \(\mathbb{P}_{n}^{*}\subset\mathbb{P}_{n}\) is the subset of monic polynomials of degree \(n\). Also, for \(K\subset\mathbb{C}\), we denote by \(\mathbb{P}_{n}(K)\) (resp., \(\mathbb{P}_{n}^{*}(K)\)) the subset of polynomials of degree \(\leq n\) (resp., monic polynomials of degree \(n\)) with all zeros in \(K\). In particular, \(\mathbb{P}_{n}^{*}(\mathbb{R})\) denotes the family of real-rooted monic polynomials of degree \(n\), \(\mathbb{P}_{n}^{*}(\mathbb{R}_{\geq 0})\) is the subset of \(\mathbb{P}_{n}^{*}(\mathbb{R})\) of polynomials having only non-negative roots, etc.
Every polynomial \(p\) of degree \(n\) can be written in the form
\[p(x)=\sum_{j=0}^{n}x^{n-j}(-1)^{j}e_{j}(p). \tag{1}\]
Since we do not require a priori \(e_{0}(p)\neq 0\), notation \(e_{j}(p)\) implicitly assumes the dependence on \(n\). It is convenient to keep it in mind, although we will avoid mentioning it explicitly to simplify the notation.
If \(p\) is monic of degree \(n\), then the coefficients \(e_{j}(p)\) are just the symmetric sums of its roots: denoting by \(\lambda_{1}(p),\ldots,\lambda_{n}(p)\) the roots of \(p\) (in the case when \(p\) is real-rooted, we use the convention that \(\lambda_{1}(p)\geq\lambda_{2}(p)\geq\cdots\geq\lambda_{n}(p)\)), then
\[e_{j}(p)=\sum_{sym}\lambda_{1}(p)\lambda_{2}(p)\ldots\lambda_{j}(p):=\sum_{1 \leq i_{1}<i_{2}<\cdots<i_{j}\leq n}\lambda_{i_{1}}(p)\lambda_{i_{2}}(p) \ldots\lambda_{i_{j}}(p).\]
One simple observation that we will use later is that if \(p\) is (1), and that for a constant \(c\in\mathbb{C}\),
\[q(x):=x^{n}p(c/x)\]
then the coefficients for \(q\) are
\[e_{j}(q)=(-1)^{n}c^{j}e_{n-j}(p),\quad j=0,1,\ldots,n. \tag{2}\]
### Hypergeometric polynomials
Rising and falling factorials play a crucial role in our calculations. The **rising factorial** (also, **Pochhammer's symbol1**) for \(a\neq 0\) and \(j\in\mathbb{Z}_{\geq 0}:=\mathbb{N}\cup\{0\}\) is
Footnote 1: Another standard notation for the raising factorial is \((a)_{j}\). We prefer to use the notation defined here.
\[(a)^{\overline{j}}:=a(a+1)\ldots(a+j-1)=\frac{\Gamma(a+j)}{\Gamma(a)},\quad(a )^{\overline{0}}:=1,\]
while the **falling factorial** is defined as
\[(a)^{\underline{j}}:=a(a-1)\ldots(a-j+1)=(a-j+1)^{\overline{j}}\,,\quad(a)^{ \underline{0}}:=1.\]
Notice the obvious useful relations
\[(a)^{\underline{j}}=(-1)^{j}\left(-a\right)^{\overline{j}}, \tag{3}\]
as well as
\[(a)^{\overline{n}}=(a)^{\overline{n-j}}\left(a+n-1\right)^{\underline{j}}, \quad 0\leq j\leq n. \tag{4}\]
A generalized **hypergeometric series**[33, 45] is an expression
\[{}_{i+1}F_{j}\!\left(\!\!\begin{array}{c}a_{0},a_{1},\ldots,a_{i}\\ b_{1},\ldots,b_{j}\end{array}\!\!;x\right)=\sum_{k=0}^{\infty}\frac{\left(a_{0} \right)^{\overline{k}}\left(a_{1}\right)^{\overline{k}}\ldots\left(a_{i} \right)^{\overline{k}}}{\left(b_{1}\right)^{\overline{k}}\ldots\left(b_{j} \right)^{\overline{k}}}\frac{x^{k}}{k!}. \tag{5}\]
If \(\boldsymbol{a}=(a_{1},\ldots,a_{i})\in\mathbb{R}^{i}\) is a vector (tuple), we understand by
\[(\boldsymbol{a})^{\overline{k}}=\prod_{s=1}^{i}\left(a_{s}\right)^{\overline {k}},\]
and therefore, with \(\boldsymbol{a}=(a_{1},\ldots,a_{i})\in\mathbb{R}^{i}\) and \(\boldsymbol{b}=(b_{1},\ldots,b_{j})\in\mathbb{R}^{j}\), we can write
\[{}_{i+1}F_{j}\!\left(\!\!\begin{array}{c}a_{0},\boldsymbol{a}\\ \boldsymbol{b}\end{array}\!\!;x\right)=\sum_{k=0}^{\infty}\frac{\left(a_{0} \right)^{\overline{k}}\left(\boldsymbol{a}\right)^{\overline{k}}}{\left( \boldsymbol{b}\right)^{\overline{k}}}\frac{x^{k}}{k!}. \tag{6}\]
In the particular case when \(a_{0}\) is a negative integer, the series is terminating and defines a polynomial. More precisely, for \(n\in\mathbb{N}\),
\[{}_{i+1}F_{j}\!\left(\!\!\begin{array}{c}-n,\boldsymbol{a}\\ \boldsymbol{b}\end{array}\!\!;x\right)=\sum_{k=0}^{n}\frac{\left(-n\right)^{ \overline{k}}\left(\boldsymbol{a}\right)^{\overline{k}}}{\left(\boldsymbol{b} \right)^{\overline{k}}}\frac{x^{k}}{k!}\]
is a (generalized) **hypergeometric polynomial** of degree \(\leq n\), as long as
\[b_{1},\ldots b_{j}\in\mathbb{C}\setminus\{0,-1,-2,\ldots,-n+1,-n\}.\]
In what follows, it will be more convenient for us to work with the **normalized terminating hypergeometric series**
\[{}_{i+1}\mathcal{F}_{j}\!\left(\!\!\begin{array}{c}-n,a_{1},\ldots,a_{i}\\ b_{1},\ldots,b_{j}\end{array}\!\!;x\right):=\left(\prod_{k=1}^{j}\left(b_{k} \right)^{\overline{n}}\right)\ {}_{i+1}F_{j}\!\left(\!\!\begin{array}{c}-n,a_{1},\ldots,a_{i}\\ b_{1},\ldots,b_{j}\end{array}\!\!;x\right)\!. \tag{7}\]
Correspondingly, we say that the (generalized) **hypergeometric polynomial**\(p\) of degree \(n\in\mathbb{Z}_{\geq 0}\) is in **standard normalization** if 2
Footnote 2: If \(j=0\), then the factor \(\left(\boldsymbol{b}\right)^{\overline{n}}\) in the right-hand side of (8) equals \(1\), and thus we identify \({}_{i+1}F_{0}\) and \({}_{i+1}\mathcal{F}_{0}\).
\[p(x)=\ _{i+1}\mathcal{F}_{j}\binom{-n,\boldsymbol{a}}{\boldsymbol{b}};x\bigg{)}= \left(\boldsymbol{b}\right)^{\overline{n}}\sum_{k=0}^{n}\frac{\left(-n\right)^ {\overline{k}}\left(\boldsymbol{a}\right)^{\overline{k}}}{\left(\boldsymbol{b }\right)^{\overline{k}}}\frac{x^{k}}{k!}. \tag{8}\]
Notice that with this normalization, \(p\) is a polynomial in both \(x\) and its parameters \(a_{s}\), \(b_{s}\): using (4), we can rewrite the expression (8) as
\[p(x)=\ _{i+1}\mathcal{F}_{j}\binom{-n,\boldsymbol{a}}{\boldsymbol{b}};x\bigg{)} =\sum_{k=0}^{n}\ \left(-n\right)^{\overline{k}}\left(\boldsymbol{a} \right)^{\overline{k}}\left(\boldsymbol{b}+k\right)^{\overline{n-k}}\frac{x^ {k}}{k!}; \tag{9}\]
so that, for this polynomial, written in the form (1), we have that
\[e_{k}(p)=(-1)^{n+k(i+j)}\left(\boldsymbol{a}\right)^{\overline{n}}\binom{n}{ k}\frac{\left(-\boldsymbol{b}-n+1\right)^{\overline{k}}}{\left(-\boldsymbol{a}-n+1 \right)^{\overline{k}}}=(-1)^{n}\binom{n}{k}\left(\boldsymbol{a}\right)^{ \overline{n-k}}\left(\boldsymbol{b}+n-k\right)^{\overline{k}}, \tag{10}\]
and in particular,
\[e_{0}(p)=(-1)^{n}\left(\boldsymbol{a}\right)^{\overline{n}},\quad e_{n}(p)=( -1)^{n}\left(\boldsymbol{b}\right)^{\overline{n}}. \tag{11}\]
These expressions show that the polynomial is of degree exactly \(n\) if and only if
\[a_{1},\ldots,a_{i}\in\mathbb{C}\setminus\{0,-1,-2,\ldots,-n+1\}, \tag{12}\]
constraint that we will consider enforced henceforth. For the rest of the values of the parameters, we always understand by the hypergeometric polynomial in standard normalization the expression (9).
Since the following discrete set will appear very frequently in this work, we will introduce the notation
\[\mathbb{Z}_{n}:=\{0,1,2,\ldots,n-1\}, \tag{13}\]
understanding by \(-\mathbb{Z}_{n}\) the set \(\{0,-1,-2,\ldots,-n+1\}\). In particular, condition (12) can be written as \(a_{1},\ldots,a_{i}\notin(-\mathbb{Z}_{n})\).
Notice that as in (2), for a constant \(c\in\mathbb{C}\), the polynomial
\[q(x):=x^{n}p(c/x)=x^{n}{}_{i+1}F_{j}\left(\begin{matrix}-n,\boldsymbol{a}\\ \boldsymbol{b}\end{matrix};\frac{c}{x}\right),\]
written in the form (1), has coefficients
\[e_{k}(q)=c^{k}\binom{n}{k}\frac{\left(\boldsymbol{a}\right)^{\overline{k}}}{ \left(\boldsymbol{b}\right)^{\overline{k}}}.\]
Comparing it with (2) and (10), direct computations lead to the following identity:
**Lemma 2.1**.: _The following identity holds true for \(x\neq 0\):_
\[{}_{i+1}\mathcal{F}_{j}\binom{-n,\ \boldsymbol{a}}{\boldsymbol{b}};x\bigg{)}= \left((-1)^{i+1}x\right)^{n}\ _{j+1}\mathcal{F}_{i}\binom{-n,\ -n-\boldsymbol{b}+1}{-n- \boldsymbol{a}+1};(-1)^{i+j}\frac{1}{x}\bigg{)}.\]
### Some classical hypergeometric polynomials
Many classical families of polynomials are hypergeometric. In this section, we summarize some basic information needed in what follows. Further details and formulas can be found in [29, 33, 45, 52].
We will use, whenever possible, the standard normalization introduced in (8).
Since for \(a\in\mathbb{C}\), and for each \(n\in\mathbb{N}\),
\[p_{n}(x)=(x-a)^{n}=\sum_{k=1}^{n}x^{n-k}(-1)^{k}\binom{n}{k}a^{k},\]
we can conclude that for \(a\neq 0\),
\[(x-a)^{n}=(-a)^{n}\ _{1}F_{0}\left(\genfrac{.}{.}{0.0pt}{}{-n}{\cdot};\frac{x}{a} \right). \tag{14}\]
The (generalized) **Laguerre polynomials** of degree \(n\) and parameter \(\alpha\in\mathbb{C}\), in their traditional normalization, are defined as
\[L_{n}^{(\alpha)}(x):=\sum_{k=0}^{n}\frac{(n+\alpha)\overline{n-k}}{k!(n-k)!} \,(-x)^{k}. \tag{15}\]
Additionally, for \(k=1,2,\ldots,n\),
\[L_{n}^{(-k)}(x)=(-x)^{k}\,\frac{(n-k)!}{n!}\,L_{n-k}^{(k)}(x). \tag{16}\]
When \(\alpha\geq-1\), \(L_{n}^{(\alpha)}\) are orthogonal on \([0,\infty)\), so that all their roots are simple and \(L_{n}^{(\alpha)}\in\mathbb{P}_{n}(\mathbb{R}_{>0})\). By (16), for \(\alpha=-1,-2,\ldots,-n\), \(L_{n}^{(\alpha)}\in\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\), with a unique multiple root of order \(-\alpha\) at \(0\), and all other roots distinct and positive. Moreover, \(L_{n}^{(\alpha)}\in\mathbb{P}_{n}(\mathbb{R})\) even when \(\alpha\in(-2,-1)\), with \(n-1\) positive zeros and one negative zero, see, e.g., [52, SS6,73].
As a hypergeometric function one has
\[L_{n}^{(\alpha)}(x)=\frac{1}{n!}\ \,_{1}\mathcal{F}_{1}\binom{-n}{\alpha+1} \,;x\biggr{)}. \tag{17}\]
By Lemma 2.1, the reciprocal polynomials are
\[q(x)=x^{n}L_{n}^{(\alpha)}(-1/x)=\frac{1}{n!}\ \,_{2}\mathcal{F}_{0}\binom{-n,-n- \alpha}{\cdot},x\biggr{)}, \tag{18}\]
and are known as **Bessel polynomials**.
Finally, the **Jacobi polynomials** of degree \(n\) and parameters \(\alpha,\beta\in\mathbb{C}\) are
\[P_{n}^{(\alpha,\beta)}(x) :=\frac{1}{n!}\sum_{k=0}^{n}\binom{n}{k}\,(n+\alpha+\beta+k)^{ \underline{k}}(\alpha+n)\overline{n-k}\left(\frac{x-1}{2}\right)^{k}\] \[=\frac{1}{n!}\sum_{k=0}^{n}\binom{n}{k}\,(n+\alpha+\beta+1)^{ \overline{k}}\,(\alpha+k+1)^{\overline{n-k}}\left(\frac{x-1}{2}\right)^{k}.\]
In consequence,
\[P_{n}^{(\alpha,\beta)}(x) :=\frac{1}{n!}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,n+ \alpha+\beta+1}{\alpha+1};\frac{1-x}{2}\right) \tag{19}\] \[=\frac{(-1)^{n}}{n!}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,n +\alpha+\beta+1}{\beta+1};\frac{1+x}{2}\right)\] (20) \[=\frac{1}{n!}\,\left(\frac{1+x}{2}\right)^{n}\ {}_{2}\mathcal{F}_{1}\! \left(\genfrac{.}{.}{0.0pt}{}{-n,-n-\beta}{\alpha+1};\frac{x-1}{x+1}\right). \tag{21}\]
The following identities are well known:
\[P_{n}^{(\alpha,\beta)}(-x) =(-1)^{n}P_{n}^{(\beta,\alpha)}(x), \tag{22}\] \[P_{n}^{(\alpha,\beta)}(x) =\left(\frac{1-x}{2}\right)^{n}P_{n}^{(-2n-\alpha-\beta-1,\beta)} \left(\frac{x+3}{x-1}\right),\] \[P_{n}^{(\alpha,\beta)}(x) =\left(\frac{1+x}{2}\right)^{n}P_{n}^{(\alpha,-2n-\alpha-\beta-1 )}\left(\frac{3-x}{x+1}\right),\]
see [52, SS4.22].
The classical Jacobi polynomials (that correspond to parameters \(\alpha,\beta>-1\)) are orthogonal on \([-1,1]\) with respect to the weight function \((1-x)^{\alpha}(1+x)^{\beta}\). Consequently, all their zeros are simple and belong to the interval \((-1,1)\). If \(\alpha\) or \(\beta\) are in \([-2,-1]\), we can also guarantee that \(P_{n}^{(\alpha,\beta)}\) has real zeros, but not all of them in \([-1,1]\), see [19].
Moreover, \(P_{n}^{(\alpha,\beta)}(x)\) may have a multiple zero, but always at \(x=\pm 1\):
* at \(x=1\), if \(\alpha\in\{-1,\ldots,-n\}\). More precisely, for \(k\in\{1,\ldots,n\}\), we have (see [52, Eq. (4.22.2)]), \[P_{n}^{(-k,\beta)}(x)=\frac{(n+\beta+1-k)^{\overline{k}}}{(n-k+1)^{\overline{ k}}}\left(\frac{x-1}{2}\right)^{k}P_{n-k}^{(k,\beta)}(x).\] (23) This implies, in particular, that \(P_{n}^{(-k,\beta)}(x)\equiv 0\) if additionally \(\max\{k,-\beta\}\leq n\leq k-\beta-1\).
* at \(x=-1\) if \(\beta\in\{-1,\ldots,-n\}\). More precisely, when \(l\in\{1,\ldots,n\}\). \[P_{n}^{(\alpha,-l)}(x)=\frac{(n+\alpha+1-l)^{\overline{l}}}{(n-l+1)^{\overline {l}}}\left(\frac{x+1}{2}\right)^{l}P_{n-l}^{(\alpha,l)}(x).\] (24) The formulas above show that when both \(k,l\in\mathbb{N}\) and \(k+l\leq n\), we have \[P_{n}^{(-k,-l)}(x)=2^{-k-l}(x-1)^{k}(x+1)^{l}P_{n-k-l}^{(k,l)}(x),\] (25) with \(P_{n}^{(-k,-l)}\equiv 0\) if \(n\leq k+l-1\).
* at \(x=\infty\) (which means a degree reduction): when \(n+\alpha+\beta=-k\in\{-1,\ldots,-n\}\), \[P_{n}^{(\alpha,\beta)}(x)=\frac{\Gamma(n+\alpha+1)}{\Gamma(k+\alpha)}\frac{(k -1)!}{n!}P_{k-1}^{(\alpha,\beta)}(x);\] see [52, Eq. (4.22.3)].
These identities can be easily reformulated in terms of the hypergeometric polynomials in standard normalization using that
\[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a}{b};x\right):=n!\,P_ {n}^{(b-1,-n+a-b)}(1-2x)=(-1)^{n}n!\,P_{n}^{(-n+a-b,b-1)}(2x-1). \tag{26}\]
Furthermore, for \(k\in\mathbb{Z}_{n}\), we have the following consequences of applying formula (19) to (23)-(25):
* formula (23) gives \[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,b+k+1}{-n+k+1};x\right)= \left(b+k+1\right)^{\overline{n-k}}\,(-x)^{n-k}\,\,\,{}_{2}\mathcal{F}_{1}\! \left(\genfrac{.}{.}{0.0pt}{}{-k,b+n+1}{n-k+1};x\right)\] (27) (in order to enforce condition (12) in both sides, we assume that \[b+k\notin(-\mathbb{Z}_{n}),\] (28) and evaluate the polynomials using the equivalent expression (9));
* formula (24) produces \[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,b+k}{b};x\right)= \left(b+k\right)^{\overline{n-k}}\,(1-x)^{n-k}\,\,\,{}_{2}\mathcal{F}_{1}\! \left(\genfrac{.}{.}{0.0pt}{}{-k,b+n}{b};x\right)\] (29) (again, we require (28));
* finally, by (25), for \(0\leq k\leq j\leq n\), \[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,j-k+1}{1-k};x\right)= \left(j-k+1\right)^{\overline{n+k-j}}\,(-x)^{k}\,(1-x)^{n-j}\,\,\,{}_{2} \mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{k-j,n+1}{k+1};x\right)\!.\] (30)
### Finite free convolution of polynomials
In this section, we summarize some definitions and results on the finite free additive and multiplicative convolutions that will be used throughout the paper. These correspond to two classical polynomial convolutions studied a century ago by Szego [51] and Walsh [54] that were recently rediscovered in [40] as expected characteristic polynomials of the sum and product of randomly rotated matrices.
#### 2.4.1. Multiplicative finite free convolution
_Definition 2.2_ ([40]).: Given two polynomials, \(p\) and \(q\), of degree at most \(n\), the \(n\)**-th multiplicative finite free convolution** of \(p\) and \(q\), denoted as \(p\boxtimes_{n}q\), is a polynomial of degree at most \(n\), which can be defined in terms of the coefficients of polynomials written in the form (1): if
\[p(x)=\sum_{j=0}^{n}x^{n-j}(-1)^{j}e_{j}(p)\quad\text{ and }q(x)=\sum_{j=0}^{n}x^{n-j}(-1)^{j}e_{j}(q), \tag{31}\]
then
\[[p\boxtimes_{n}q](x)=\sum_{k=0}^{n}x^{n-k}(-1)^{k}e_{k}(p\boxtimes_{n}q),\]
with
\[e_{k}(p\boxtimes_{n}q):=\binom{n}{k}^{-1}e_{j}(p)e_{j}(q). \tag{32}\]
In particular, if \(p,q\in\mathbb{P}_{n}^{*}\), then also \(p\boxtimes_{n}q\in\mathbb{P}_{n}^{*}\).
_Remark 2.3_.: This operation, originally introduced in [51], is known in the study of geometry of polynomials and of the Laguerre-Polya class as the **Schur-Szego composition**, see, e.g. [34, 35]. It can also be regarded as the Hadamard product of \(p\) and \(q\), up to a sign and binomial factor; see [1]. We will use the term "multiplicative finite free convolution" for uniformity of terminology.
The multiplicative finite free convolution is a linear operator from \(\mathbb{P}_{n}\times\mathbb{P}_{n}\) to \(\mathbb{P}_{n}\): if \(p,q,r\in\mathbb{P}_{n}\), and \(\alpha\in\mathbb{R}\), then
\[(\alpha p+q)\boxtimes_{n}r=\alpha(p\boxtimes_{n}r)+q\boxtimes_{n}r.\]
Definition 2.2 allows us to establish easily that
\[p(x)\boxtimes_{n}(x-1)^{n}=p(x), \tag{33}\]
(that is, \((x-1)^{n}\) is an identity for the multiplicative convolution), as well as that
\[p(x)\boxtimes_{n}(x-\alpha)^{n}=\alpha^{n}p\left(\frac{x}{\alpha}\right),\quad \alpha\neq 0. \tag{34}\]
This motivates the following definition:
_Definition 2.4_.: Given \(p\) of degree \(n\), the polynomial \(q\in\mathbb{P}_{n}\) such that \(p(x)\boxtimes_{n}q(x)=(x-1)^{n}\) is called the **inverse of \(p\) under the multiplicative (finite free) convolution**.
Notice that such an inverse does not always exist, since by (32), a coefficient of \(p\boxtimes_{n}q\) vanishes if the corresponding coefficient of \(p\) or \(q\) is \(0\).
#### 2.4.2. Additive finite free convolution
_Definition 2.5_ ([40]).: Given two polynomials, \(p\) and \(q\), of degree at most \(n\), the \(n\)**-th additive finite free convolution** of \(p\) and \(q\), denoted as \(p\boxplus_{n}q\), is a polynomial of degree at most \(n\), defined in terms of the coefficients of polynomials written in the form (1): if
\[p(x)=\sum_{j=0}^{n}x^{n-j}(-1)^{j}e_{j}(p)\quad\text{ and }q(x)=\sum_{j=0}^{n}x^{n-j}( -1)^{j}e_{j}(q), \tag{35}\]
then
\[[p\boxplus_{n}q](x)=\sum_{k=0}^{n}x^{n-k}(-1)^{k}e_{k}(p\boxplus_{n}q),\]
with
\[e_{k}(p\boxplus_{n}q)\coloneqq\sum_{i+j=k}\frac{(n-i)!(n-j)!}{n!(n-k)!}\,e_{i} (p)e_{j}(q) \tag{36}\]
(and thus, \(e_{0}(p\boxplus_{n}q)=e_{0}(p)e_{0}(q)\)).
Equivalently, the additive free convolution can be defined as
\[[p\boxplus_{n}q](x)\coloneqq\frac{1}{n!}\sum_{i=0}^{n}p^{(i)}(x)q^{(n-i)}(0)= \frac{1}{n!}\sum_{i=0}^{n}q^{(i)}(x)p^{(n-i)}(0). \tag{37}\]
Especially useful for our purposes will be the third equivalent definition in terms of the associated differential operators. Namely, given the polynomial \(p\) in (1), define a differential operator \(D_{p}\) as
\[D_{p}\coloneqq\sum_{j=0}^{n}(-1)^{j}\frac{e_{j}(p)}{(n)^{j}}\left(\frac{ \partial}{\partial x}\right)^{j}. \tag{38}\]
Then
\[D_{p}[x^{n}]=\sum_{j=0}^{n}x^{n-j}(-1)^{j}e_{j}(p)=p(x). \tag{39}\]
Clearly, the correspondence \(p\leftrightarrow D_{p}\) is a bijection between \(\mathbb{P}_{n}\) and linear differential operators of degree \(\leq n\) with constant coefficients. For future reference, it is also convenient to observe that for a constant \(c\neq 0\),
\[\left(\sum_{j=0}^{n}(-1)^{j}\frac{e_{j}(p)}{(n)^{j}}\left(c\frac{\partial}{ \partial x}\right)^{j}\right)[x^{n}]=\sum_{j=0}^{n}x^{n-j}(-1)^{j}c^{j}e_{j}(p) =c^{n}p(x/c). \tag{40}\]
Now, back to the additive free convolution: if \(D_{p}\) and \(D_{q}\) are the differential operators, corresponding to polynomials \(p\) and \(q\) in (35), that is, if \(p(x)=D_{p}[x^{n}]\) and \(q(x)=D_{q}[x^{n}]\), then
\[[p\boxplus_{n}q](x)=D_{p}[D_{q}[x^{n}]]=D_{q}[D_{p}[x^{n}]]. \tag{41}\]
It follows from here (or directly from the definition) that
\[[p(-x)]\boxplus_{n}[q(-x)]=(-1)^{n}[p\boxplus_{n}q](-x). \tag{42}\]
When at least one of the polynomial is of degree strictly smaller than \(n\), then (see Lemma 1.16 of [40])
\[p\boxplus_{n}q=\frac{1}{n}p^{\prime}\boxplus_{n-1}q, \tag{43}\]
whenever \(p\in\mathbb{P}_{n}\) and \(q\in\mathbb{P}_{n-1}\). In particular, if \(\deg p=n\) and \(0\leq k\leq n\), then
\[p\boxplus_{n}x^{k}=\frac{(n-k)!}{n!}\,p^{(n-k)},\]
which also easily follows from (37).
The additive finite free convolution is a linear operator from \(\mathbb{P}_{n}\times\mathbb{P}_{n}\) to \(\mathbb{P}_{n}\): if \(p,q,r\in\mathbb{P}_{n}\), and \(\alpha\in\mathbb{R}\), then
\[(\alpha p+q)\boxplus_{n}r=\alpha(p\boxplus_{n}r)+q\boxplus_{n}r.\]
The three equivalent definitions of additive finite free convolution allow us to establish easily that
\[p(x)\boxplus_{n}(x-\alpha)^{n}=p(x-\alpha),\quad p\in\mathbb{P}_{n}, \tag{44}\]
so that, in particular, \(p\boxplus_{n}x^{n}=p\). In other words, \(x^{n}\) is an identity for the additive convolution. This motivates the following definition:
_Definition 2.6_.: Given \(p\) of degree \(n\), the polynomial \(q\in\mathbb{P}_{n}\) such that \(p(x)\boxplus_{n}q(x)=x^{n}\) is called the **inverse of \(p\) under the additive (finite free) convolution**.
Such an inverse always exists (see [38, Corollary 6.2]) and can be constructed recursively, using (36).
Moreover, \(p\boxplus_{n}q=0\) if and only if \(\deg(p)+\deg(q)<n\), or if \(\deg p=n\) then \(q\equiv 0\) (last observation follows from (37) and the fact that when \(\deg p=n\), polynomials \(p^{(i)}\), \(i=0,1,\ldots,n\), form a basis of \(\mathbb{P}_{n}\)). This also shows that the inverse of any \(p\in\mathbb{P}_{n}\) under the additive (finite free) convolution \(\boxplus_{n}\) is unique.
### Real roots, interlacing, and free finite convolution
A very important fact is that in many circumstances the finite free convolution of two polynomials with real roots also has all its roots real. Here, we use the notation introduced at the beginning of Section 2.1.
**Proposition 2.7** (Szego [51], Walsh [54]).: _Let \(p,q\in\mathbb{P}_{n}\). Then_
1. \(p,q\in\mathbb{P}_{n}(\mathbb{R})\ \Rightarrow\ p\boxplus_{n}q\in\mathbb{P}_{n}( \mathbb{R})\)_._
2. \(p\in\mathbb{P}_{n}(\mathbb{R}),\ q\in\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\ \Rightarrow\ p\boxtimes_{n}q\in \mathbb{P}(\mathbb{R})\)_._
3. \(p,q\in\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\ \Rightarrow\ p\boxtimes_{n}q\in \mathbb{P}(\mathbb{R}_{\geq 0})\)
Taking into account that \(p(-x)=p\boxtimes_{n}(x+1)^{n}\) (see (34) with \(\alpha=-1\)), a simple consequence of this proposition is that additionally the following "rule of signs" applies:
* \(p,q\in\mathbb{P}_{n}(\mathbb{R}_{\leq 0})\ \Rightarrow\ p\boxtimes_{n}q\in \mathbb{P}(\mathbb{R}_{\geq 0})\)
* \(p\in\mathbb{P}_{n}(\mathbb{R}_{\leq 0}),\ q\in\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\ \Rightarrow\ p \boxtimes_{n}q\in\mathbb{P}(\mathbb{R}_{\leq 0})\).
_Remark 2.8_.: Multiplicative finite free convolution can also be considered in the framework of finite multiplier sequences [12, 11, 34], where the zero preservation results are known.
_Definition 2.9_ (Interlacing).: Let
\[p(x)=e_{0}(p)\prod_{j=1}^{n}\big{(}x-\lambda_{j}(p)\big{)}\in\mathbb{P}_{n}( \mathbb{R}),\quad\lambda_{1}(p)\leq\cdots\leq\lambda_{n}(p),\]
and
\[q(x)=e_{0}(q)\prod_{j=1}^{m}\big{(}x-\lambda_{j}(q)\big{)}\in\mathbb{P}_{m}( \mathbb{R}),\quad\lambda_{1}(q)\leq\cdots\leq\lambda_{m}(q).\]
We say that \(q\)**interlaces**\(p\) (or, equivalently, that **zeros of \(q\) interlace zeros of \(p\)**, see, e.g., [16]), and denote it \(p\preccurlyeq q\), if
\[m=n\quad\text{and}\quad\lambda_{1}(p)\leq\lambda_{1}(q)\leq\lambda_{2}(p)\leq \lambda_{2}(q)\leq\cdots\leq\lambda_{n}(p)\leq\lambda_{n}(q), \tag{45}\]
or if
\[m=n-1\quad\text{and}\quad\lambda_{1}(p)\leq\lambda_{1}(q)\leq\lambda_{2}(p) \leq\lambda_{2}(q)\leq\cdots\leq\lambda_{n-1}(p)\leq\lambda_{n-1}(q)\leq \lambda_{n}(p). \tag{46}\]
Furthermore, we use the notation \(p\preccurlyeq q\) when all inequalities in (45) or (46) are strict.
Convex combinations of interlacing polynomials are real, see [13, 14] or [39, Lemma 4.5]:
**Proposition 2.10**.: _For \(p,q\in\mathbb{P}_{n}^{*}(\mathbb{R})\),_
\[p\preccurlyeq q\quad\Leftrightarrow\quad tp+(1-t)q\in\mathbb{P}_{n}^{*}( \mathbb{R})\quad\text{for every $t\in[0,1]$}.\]
From here and the linearity of the free finite convolution we easily obtain the following interlacing-preservation property:
**Proposition 2.11** (Preservation of interlacing).: _If \(p,\widetilde{p}\in\mathbb{P}_{n}^{*}(\mathbb{R})\) and \(q,\widetilde{q}\in\mathbb{R}_{\geq 0}\), then_
\[p\preccurlyeq\widetilde{p}\quad\Rightarrow\quad p\boxtimes_{n}q\preccurlyeq \widetilde{p}\boxtimes_{n}q,\]
_and_
\[q\preccurlyeq\widetilde{q}\quad\Rightarrow\quad p\boxtimes_{n}q\preccurlyeq p \boxtimes_{n}\widetilde{q}.\]
_Analogously, if \(p,\widetilde{p},q\in\mathbb{P}_{n}^{*}(\mathbb{R})\), then_
\[p\preccurlyeq\widetilde{p}\quad\Rightarrow\quad p\boxplus_{n}q\preccurlyeq \widetilde{p}\boxplus_{n}q.\]
For a proof of this result, see, for instance, [2, Lemma B.3].
_Definition 2.12_.: Given \(p\in\mathbb{P}_{n}\) with \(n\geq 2\), we define its (absolute) **root separation** or **mesh** as the minimal distance between its roots:
\[\text{mesh}(p)=\min\{|\lambda_{i}(p)-\lambda_{j}(p)|:1\leq i<j\leq n\}\]
(see e.g. [8]).
Since \(\text{mesh}(p)>r\) if and only if \(p(x)\preccurlyeq p(x-r)\), and \(p(x-r)=p(x)\boxplus_{n}(x-r)^{n}\), it follows that the preservation of interlacing implies the zero separation does not decrease under finite additive free convolution:
**Proposition 2.13** (Preservation of mesh).: _If \(p,q\in\mathbb{P}_{n}(\mathbb{R})\), both of degree \(n\), then \(\operatorname{mesh}(p\boxplus_{n}q)\geq\operatorname{mesh}(p)\)._
Proof.: If \(\operatorname{mesh}(p)>r\), then \(p\preccurlyeq[p\boxplus_{n}(x-r)^{n}]\), and by preservation of interlacing, \([p\boxplus q]\preccurlyeq[p\boxplus_{n}(x-r)^{n}\boxplus_{n}q]\). Since the second polynomial is simply \([p\boxplus q](x-r)\), we conclude that \(\operatorname{mesh}(p\boxplus q)>r\).
A useful corollary of this result is
**Corollary 2.14**.: _If all roots of \(p\in\mathbb{P}(\mathbb{R})\) are simple and if \(q\) is the (unique) inverse of \(p\) under additive convolution, then \(q\notin\mathbb{P}(\mathbb{R})\)._
Finite free multiplicative convolution has similar properties, now using the relative separation or logarithmic mesh:
_Definition 2.15_.: Given \(p\in\mathbb{P}(\mathbb{R}_{>0})\) with \(n\geq 2\), we define its **logarithmic mesh** as the minimal ratio (bigger than 1) between its roots:
\[\operatorname{lmesh}(p)=\min\{\lambda_{i}(p)/\lambda_{i+1}(p):1\leq i<n\},\]
assuming \(\lambda_{1}(p)\geq\lambda_{2}(p)\geq\cdots\geq\lambda_{n}(p)>0\).
Again, the observation that for \(r>0\), \(\operatorname{lmesh}(p)>r\) if and only if \(p(x)\preccurlyeq[p(x)\boxtimes_{n}(x-r)^{n}]\), yields
**Proposition 2.16** (Preservation of lmesh, [32]).: _If \(p,q\in\mathbb{P}(\mathbb{R}_{>0})\), then \(\operatorname{lmesh}(p\boxtimes q)\geq\operatorname{lmesh}(p)\)._
Proof.: If \(\operatorname{lmesh}(p)>r\) then \(p\preccurlyeq[p\boxtimes_{n}(x-r)^{n}]\), and by preservation of interlacing, \([p\boxtimes q]\preccurlyeq[p\boxtimes_{n}(x-r)^{n}\boxtimes_{n}q]=[p\boxtimes q ]\boxtimes_{n}(x-r)^{n}\), so \(\operatorname{lmesh}(p\boxtimes q)>r\).
Again, as in the case of Corollary 2.14, we have
**Corollary 2.17**.: _If all roots of \(p\in\mathbb{P}(\mathbb{R}_{>0})\) are simple and if \(q\) is an inverse of \(p\) under multiplicative convolution, then \(q\notin\mathbb{P}(\mathbb{R}_{>0})\)._
## 3. Convolutions of hypergeometric polynomials
### Finite free multiplicative convolution
We start with a simple result that will allow us to "assemble" more complicated hypergeometric polynomials from elementary "building blocks" using the free multiplicative convolution:
**Theorem 3.1**.: _If \(n\in\mathbb{Z}_{\geq 0}\), and_
\[p(x)=\ _{i_{1}+1}\mathcal{F}_{j_{1}}\binom{-n,\boldsymbol{a}_{1}}{\boldsymbol{b }_{1}};x\bigg{)},\qquad q(x)=\ _{i_{2}+1}\mathcal{F}_{j_{2}}\binom{-n,\boldsymbol{a}_{2}}{ \boldsymbol{b}_{2}};x\bigg{)},\]
_where the parameters \(\boldsymbol{a}_{1},\boldsymbol{a}_{2},\boldsymbol{b}_{1},\boldsymbol{b}_{2}\) are tuples (of sizes \(i_{1},i_{2},j_{i},j_{2}\), respectively), then their \(n\)-th free multiplicative convolution is given by_
\[[p\boxtimes_{n}q](x)=\ _{i_{1}+i_{2}+1}\mathcal{F}_{j_{1}+j_{2}}\binom{-n, \boldsymbol{a}_{1},\boldsymbol{a}_{2}}{\boldsymbol{b}_{1},\boldsymbol{b}_{2}} ;x\bigg{)}.\]
Proof.: For a hypergeometric polynomial
\[{}_{i+1}\mathcal{F}_{j}\binom{-n,\ \boldsymbol{a}}{\boldsymbol{b}};x\bigg{)}=( \boldsymbol{b})^{\overline{n}}\sum_{k=0}^{n}\frac{(n)^{\underline{k}}\left( \boldsymbol{a}\right)^{\overline{k}}}{\left(\boldsymbol{b}\right)^{\overline {k}}}\frac{x^{k}(-1)^{k}}{k!},\]
written in the form (1), the coefficient \(e_{k}\) was given in (10), that is,
\[e_{k}=(-1)^{n}\binom{n}{k}\left(\boldsymbol{a}\right)^{\overline{n-k}}\left( \boldsymbol{b}+n-k\right)^{\overline{k}}, \tag{47}\]
and the assertion is a straightforward application of the formula in (32).
A simple consequence is that for \(m\in\mathbb{N}\),
\[\Bigg{(}\begin{smallmatrix}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Equivalently, by (40),
\[{}_{j}F_{i}\!\left(\begin{matrix}\boldsymbol{a}\\ \boldsymbol{b}\end{matrix};\frac{\partial}{\partial x}\right)\![x^{n}]=\frac{(-1 )^{jn}}{(\boldsymbol{b})^{\overline{n}}}\ {}_{i+1}\mathcal{F}_{j}\!\left(\begin{matrix}-n,\ - \boldsymbol{b}-n+1\\ -\boldsymbol{a}-n+1\end{matrix};(-1)^{i+j+1}x\right)\!. \tag{49}\]
Using this result in (41) we get
**Theorem 3.4**.: _Let \(p\) and \(q\) be hypergeometric polynomials of the following form:_
\[p(x)=\ {}_{i_{1}+1}\mathcal{F}_{j_{1}}\!\left(\begin{matrix}-n,\ \boldsymbol{a} _{1}\\ \boldsymbol{b}_{1}\end{matrix};x\right)\!,\qquad q(x)=\ {}_{i_{2}+1}\mathcal{F}_{j_{2}}\! \left(\begin{matrix}-n,\ \boldsymbol{a}_{2}\\ \boldsymbol{b}_{2}\end{matrix};x\right)\!,\]
_where the parameters \(\boldsymbol{a}_{1},\boldsymbol{a}_{2},\boldsymbol{b}_{1},\boldsymbol{b}_{2}\) are tuples (of sizes \(i_{1},i_{2},j_{i},j_{2}\), respectively)._
_Then, with the notation (11), their additive convolution \(p\boxplus_{n}q(x)\) is given by_
\[\left(\boldsymbol{a}_{1}\right)^{\overline{n}}\left(\boldsymbol{a}_{2}\right) ^{\overline{n}}\ {}_{j_{1}}F_{i_{1}}\!\left(\begin{matrix}-\boldsymbol{b}_{1}-n+1\\ -\boldsymbol{a}_{1}-n+1\end{matrix};(-1)^{i_{1}+j_{1}+1}\frac{\partial}{ \partial x}\right)\,{}_{j_{2}}F_{i_{2}}\!\left(\begin{matrix}-\boldsymbol{b}_ {2}-n+1\\ -\boldsymbol{a}_{2}-n+1\end{matrix};(-1)^{i_{2}+j_{2}+1}\frac{\partial}{ \partial x}\right)\![x^{n}].\]
Theorem 3.4 shows that factorization identities (or summation formulas) for hypergeometric functions lead to a representation of the corresponding polynomials in terms of the additive convolution of simpler components:
**Corollary 3.5**.: _Assume that_
\[{}_{j_{1}}F_{i_{1}}\!\left(\begin{matrix}\boldsymbol{a}_{1}\\ \boldsymbol{b}_{1}\end{matrix};x\right)\,{}_{j_{2}}F_{i_{2}}\!\left(\begin{matrix} \boldsymbol{a}_{2}\\ \boldsymbol{b}_{2}\end{matrix};x\right)=\ {}_{j_{3}}F_{i_{3}}\!\left( \begin{matrix}\boldsymbol{a}_{3}\\ \boldsymbol{b}_{3}\end{matrix};x\right)\!, \tag{50}\]
_and let_
\[p(x)=\ {}_{i_{1}+1}\mathcal{F}_{j_{1}}\!\left(\begin{matrix}-n,\ -\boldsymbol{b}_{1}-n+1\\ -\boldsymbol{a}_{1}-n+1\end{matrix};x\right)\]
_and_
\[q(x)=\ {}_{i_{2}+1}\mathcal{F}_{j_{2}}\!\left(\begin{matrix}-n,\ -\boldsymbol{b}_{2}-n+1\\ -\boldsymbol{a}_{2}-n+1\end{matrix};x\right)\!.\]
_Then the additive convolution of the hypergeometric polynomials_
\[p\left((-1)^{i_{1}+j_{1}}x\right)\boxplus_{n}q\left((-1)^{i_{2}+j_{2}}x\right)\]
_is, up to a constant factor, equal to_
\[{}_{i_{3}+1}\mathcal{F}_{j_{3}}\!\left(\begin{matrix}-n,\ -\boldsymbol{b}_{3}-n+1\\ -\boldsymbol{a}_{3}-n+1\end{matrix};(-1)^{i_{3}+j_{3}}x\right)\!.\]
Proof.: By (48),
\[p(x)=(-1)^{n}\left(-\boldsymbol{b}_{1}-n+1\right)^{\overline{n}}\ {}_{j_{1}}F_{i_{1}}\! \left(\begin{matrix}\boldsymbol{a}_{1}\\ \boldsymbol{b}_{1}\end{matrix};(-1)^{i_{1}+j_{1}+1}\frac{\partial}{\partial x} \right)\![x^{n}],\]
so that
\[p\left((-1)^{i_{1}+j_{1}+1}x\right)=(-1)^{n(i_{1}+j_{1})}\left(-\boldsymbol{b} _{1}-n+1\right)^{\overline{n}}\ {}_{j_{1}}F_{i_{1}}\!\left(\begin{matrix} \boldsymbol{a}_{1}\\ \boldsymbol{b}_{1}\end{matrix};\frac{\partial}{\partial x}\right)\![x^{n}],\]
with an analogous formula valid for \(q\). Hence, by Theorem 3.4,
\[p\left((-1)^{i_{1}+j_{1}+1}x\right)\boxplus_{n}q\left((-1)^{i_{2}+j_{2}+1}x\right)\]
is equal to
\[{}_{j_{1}}F_{i_{1}}\!\left(\begin{matrix}\boldsymbol{a}_{1}\\ \boldsymbol{b}_{1}\end{matrix};\frac{\partial}{\partial x}\right)\,{}_{j_{2}}F_{ i_{2}}\!\left(\begin{matrix}\boldsymbol{a}_{2}\\ \boldsymbol{b}_{2}\end{matrix};\frac{\partial}{\partial x}\right)\![x^{n}]=\ {}_{j_{3}}F_{i_{3}}\! \left(\begin{matrix}\boldsymbol{a}_{3}\\ \boldsymbol{b}_{3}\end{matrix};\frac{\partial}{\partial x}\right)\![x^{n}],\]
up to a multiplicative factor. Now by (49) this is a constant multiple of
\[{}_{i_{3}+1}\mathcal{F}_{j_{3}}\binom{-n,\ -\boldsymbol{b}_{3}-n+1}{-\boldsymbol{a}_ {3}-n+1};(-1)^{i_{3}+j_{3}+1}x\biggr{)},\]
which yields the identity
Formula (42) allows to reduce it to
\[p\left((-1)^{i_{1}+j_{1}}x\right)\boxplus_{n}q\left((-1)^{i_{2}+j_{2}}x\right) =C\ _{i_{3}+1}\mathcal{F}_{j_{3}}\binom{-n,\ -\boldsymbol{b}_{3}-n+1}{- \boldsymbol{a}_{3}-n+1};(-1)^{i_{3}+j_{3}}x\biggr{)},\]
as claimed; the value of the constant \(C\) can be obtained by examining the leading coefficients in the identity above:
\[C=(-1)^{n(j_{1}+j_{2}+j_{3}+1)}\frac{\left(\boldsymbol{b}_{1}\right)^{\overline {n}}\left(\boldsymbol{b}_{2}\right)^{\overline{n}}}{\left(\boldsymbol{b}_{3} \right)^{\overline{n}}}.\]
Observe, however, that the summation formula (50) implies that the parameters \(\boldsymbol{b}_{j}\) satisfy certain algebraic relations, which in practice simplifies the expression for the constant \(C\) considerably.
_Remark 3.6_.:
1. In the case when \(i_{1}+j_{1}\) and \(i_{2}+j_{2}\) have equal parity, (42) and Corollary 3.5 yield an expression for the free convolution \(p\boxplus_{n}q\), see Examples 3.7-3.11 below.
2. The direct computation of additive convolution via (36) for hypergeometric polynomials produces formulas where the coefficients can be expressed in terms of the hypergeometric functions evaluated at \(\pm 1\). The approach of Corollary 3.5 presents fewer but more elegant formulas, which is the reason for our choice.
3. Additional examples can be obtained from known summation formulas that involve evaluation in constant multiples or powers of the variable \(x\). Although these cases are not covered by Corollary 3.5, we can still use Theorem 3.4 and similar arguments for further examples of additive convolution of hypergeometric polynomials (now evaluated in \(cx\), \(x^{2}\), etc.). In particular, this allows us to study the symmetrization \(p(x)\boxplus_{n}p(-x)\) of several hypergeometric polynomials \(p\). Observe that symmetrization is an instance of non-linear transformation of the original polynomial \(p\); first non-trivial examples of such transformations that preserve real zeros appeared in the work of Branden [7]. We are planning to address these issues in detail in future work.
As an illustration, we will analyze several factorization identities for hypergeometric functions (a good source is Chapter 2 of [27]) and their consequences for finite additive convolution. Our intention is not to be exhaustive; instead, we concentrate on the most revealing or less trivial formulas.
_Example 3.7_ (Additive convolution of two Laguerre polynomials).: By binomial identity, we know that
\[{}_{1}F_{0}\binom{c_{1}}{\cdot};x\ _{1}F_{0}\binom{c_{2}}{\cdot};x\biggr{)}=\ _{1}F_{0}\binom{c_{1}+c_{2}}{\cdot};x\biggr{)}.\]
Using Corollary 3.5 we obtain,
\[{}_{1}\mathcal{F}_{1}\biggl{(}\genfrac{.}{.}{0.0pt}{}{-n}{b_{1}};x\biggr{)} \boxplus_{n}\ _{1}\mathcal{F}_{1}\biggl{(}\genfrac{.}{.}{0.0pt}{}{-n}{b_{2}};x \biggr{)}=(-1)^{n}\ _{1}\mathcal{F}_{1}\biggl{(}\genfrac{.}{.}{0.0pt}{}{-n}{b_{1}+b_{2}+n-1};x \biggr{)}.\]
By Equation (17), this can be rewritten in terms of Laguerre polynomials as
\[L_{n}^{(\alpha)}(x)\boxplus_{n}L_{n}^{(\beta)}(x)=\frac{(-1)^{n}}{n!}L_{n}^{( \alpha+\beta+n+1)}(x).\]
_Example 3.8_.: Euler's transformation (see identity (10) in [27]) is
\[{}_{1}F_{0}\!\left(\!\!\!\begin{array}{c}c_{1}+c_{2}-d\\ \cdot\end{array}\!;x\right)\,{}_{2}F_{1}\!\left(\!\!\!\begin{array}{c}d-c_{1},d-c_{2}\\ d\end{array}\!;x\right)=\,{}_{2}F_{1}\!\left(\!\!\!\begin{array}{c}c_{1},\,c_{ 2}\\ d\end{array}\!;x\right)\!.\]
Using Corollary 3.5 and some straightforward simplifications, we obtain
\[{}_{1}{\mathcal{F}}_{1}\!\left(\!\!\!\begin{array}{c}-n\\ b_{1}+b_{2}-a\end{array}\!;x\right)\boxplus_{n}\,{}_{2}{\mathcal{F}}_{2}\!\left( \!\!\!\begin{array}{c}-n,\,a\\ a-n+1-b_{1},\,\,a-n+1-b_{2}\end{array}\!;x\right)=(-1)^{n}\,{}_{2}{\mathcal{F}}_ {2}\!\left(\!\!\begin{array}{c}-n,\,a\\ b_{1},\,\,b_{2}\end{array}\!;x\right)\!.\]
In terms of Laguerre polynomials we can write it as
\[L_{n}^{(b_{1}+b_{2}-a-1)}(x)\boxplus_{n}\,{}_{2}{\mathcal{F}}_{2}\!\left(\! \!\!\begin{array}{c}-n,\,a\\ a-n+1-b_{1},\,\,a-n+1-b_{2}\end{array}\!;x\right)=\frac{(-1)^{n}}{n!}\,{}_{2}{ \mathcal{F}}_{2}\!\left(\!\!\begin{array}{c}-n,\,a\\ b_{1},\,\,b_{2}\end{array}\!;x\right)\!.\]
_Example 3.9_.: Clausen's formula (see identity (11) in [27]) asserts that
\[\left[\,{}_{2}F_{1}\!\left(\!\!\begin{array}{c}c,\,d\\ c+d+1/2\end{array}\!;x\right)\right]^{2}=\,{}_{3}F_{2}\!\left(\!\!\!\begin{array} []{c}2c,\,\,2d,\,\,c+d\\ c+d+1/2,\,\,2c+2d\end{array}\!;x\right)\!.\]
With an appropriate change of parameters, Corollary 3.5 yields that with
\[p(x)=\,{}_{2}{\mathcal{F}}_{2}\!\left(\!\!\!\begin{array}{c}-n,a+b-1/2\\ a-\frac{n-1}{2},\,\,b-\frac{n-1}{2}\,;x\right)\!, \tag{51}\]
we have that for
\[p(x)\boxplus_{n}p(x)=(-1)^{n}\frac{\big{(}a+b-1/2\big{)}^{n}}{(2a+2b+n-1)^{n} }\,\,{}_{3}{\mathcal{F}}_{3}\!\left(\!\!\begin{array}{c}-n,\,a+b-1/2,\,\,2a +2b+n-1\\ 2a,\,\,2b,\,\,a+b\end{array}\!;x\right)\!. \tag{52}\]
_Example 3.10_.: Identity (18) in [27], related to the product of Bessel functions, is
\[{}_{0}F_{1}\!\left(\!\!\begin{array}{c}\cdot\\ c\end{array}\!;x\right)\,{}_{0}F_{1}\!\left(\!\!\begin{array}{c}\cdot\\ d\end{array}\!;x\right)=\,{}_{2}F_{3}\!\left(\!\!\!\begin{array}{c}\frac{c+d }{2},\,\frac{c+d-1}{2}\\ c,\,\,d,\,\,c+d-1\end{array}\!;4x\!\right)\!.\]
By Corollary 3.5 and formula (40), the additive convolution \(p\boxplus_{n}q\) of the hypergeometric polynomials (closely related to Bessel polynomials, defined in (18))
\[p(x)=\,{}_{2}{\mathcal{F}}_{0}\!\left(\!\!\!\begin{array}{c}-n,\,-c-n+1\\ \cdot\end{array}\!;x\right)\!,\qquad q(x)=\,{}_{2}{\mathcal{F}}_{0}\!\left(\! \!\!\begin{array}{c}-n,\,\,-d-n+1\\ \cdot\end{array}\!;x\right)\!, \tag{53}\]
is given by
\[\frac{(-4)^{n}}{(c+d-1)^{n}}\,\,\,{}_{4}{\mathcal{F}}_{2}\!\left(\!\!\!\begin{array} []{c}-n,\,\,-c-n+1,\,\,-d-n+1,\,\,-c-d-n+2\\ -\frac{c+d}{2}-n+1,\,\,-\frac{c+d-1}{2}-n+1\end{array}\!;x/4\right)\!.\]
We can simplify this expression by setting \(a=-c-n+1\) and \(b=-d-n+1\), so that
\[{}_{2}{\mathcal{F}}_{0}\!\left(\!\!\!\begin{array}{c}-n,\,\,a\\ \cdot\end{array}\!;x\right)\boxplus_{n}\,\,\,{}_{2}{\mathcal{F}}_{0}\!\left(\! \!\!\begin{array}{c}-n,\,\,b\\ \cdot\end{array}\!;x\right)=\frac{(-4)^{n}}{(a+b+n)^{n}}\,\,{}_{4}{\mathcal{F}}_ {2}\!\left(\!\!\!\begin{array}{c}-n,\,\,a,\,\,b,\,a+b+n\\ \frac{a+b}{2},\,\,\frac{a+b+1}{2}\end{array}\!;x/4\!\right)\!.\]
In particular, with \(a=b\), we get
\[{}_{2}{\mathcal{F}}_{0}\!\left(\!\!\!\begin{array}{c}-n,\,\,a\\ \cdot\end{array}\!;x\right)\boxplus_{n}\,\,\,{}_{2}{\mathcal{F}}_{0}\!\left(\! \!\!\begin{array}{c}-n,\,\,a\\ \cdot\end{array}\!;x\right)=\frac{(-4)^{n}}{(2a+n)^{n}}\,\,{}_{3}{\mathcal{F}}_{1} \!\left(\!\!\begin{array}{c}-n,\,\,a,\,\,2a+n\\ a+\frac{1}{2}\end{array}\!;x/4\right)\!.\]
_Example 3.11_.: By identity (17) in [27],
\[{}_{1}F_{0}\binom{2c-2d}{\ \ \ };x\bigg{)}\ _{3}F_{2}\binom{2d-1,\ d+1/2,\ d-c-1/2}{c+d+1/2,\ d- 1/2};x\bigg{)}=\ _{3}F_{2}\binom{2c-1,\ c+1/2,\ c-d-1/2}{c+d+1/2,\ c-1/2};x\bigg{)}.\]
This implies that the additive convolution \(p\boxplus_{n}q\) of the hypergeometric polynomials
\[p(x)=\ _{1}\mathcal{F}_{1}\binom{-n}{2d-2c-n+1};x\bigg{)} \tag{54}\]
and
\[q(x)=\ _{3}\mathcal{F}_{3}\binom{-n,\ -c-d-n+1/2,\ -d-n+3/2}{-2d-n+2,\ -d-n+1/2,\ c -d-n+3/2};x\bigg{)}\]
is
\[\frac{(-1)^{n}\left(d-1/2\right)^{\overline{n}}}{\left(c-1/2\right)^{ \overline{n}}}\ _{3}\mathcal{F}_{3}\binom{-n,\ -c-d-n+1/2,\ -c-n+3/2}{-2c-n+2,\ -c-n+1/2,\ d-c-n+3/2};x\bigg{)}.\]
In other words, additive convolution of \(q\) with the polynomial \(p\) in (54) swaps the parameters \(c\) and \(d\) in \(q\), up to a multiplicative constant.
## 4. Real zeros of hypergeometric polynomials
A representation of a hypergeometric polynomial as a finite free convolution of more elementary blocks combined with the properties of preservation of the real zeros and interlacing of the free convolutions (see Section 2.5) is an effective tool that allows us to analyze when all roots of a specific hypergeometric polynomial are real3.
Footnote 3: As it was mentioned, some of the results below established by the multiplicative free convolution can have alternative proofs in the contexts of finite multiplier sequences.
In order to use this tool, we need to create an inventory of the simplest hypergeometric polynomials with real (or positive) roots that will serve as basic building blocks for more complicated functions.
### Simplest real hypergeometric polynomials
For small values of \(i\) and \(j\), the cases when
\[{}_{i+1}\mathcal{F}_{j}\binom{-n,\boldsymbol{a}}{\boldsymbol{b}};x\bigg{)}\]
has only real roots are well studied and follow from the explicit expressions appearing in Section 2.3. For example, for \({}_{1}F_{0}\) this is a consequence of formula (14), while for the \({}_{1}F_{1}\) case we can use the connection with the Laguerre polynomials (17), whose zeros are well understood. In particular, it follows that all roots of
\[p(x)=\ _{1}\mathcal{F}_{1}\binom{-n}{b};x\bigg{)},\]
which up to a constant coincides with the Laguerre polynomial \(L_{n}^{(b-1)}\), are positive only when \(b>0\); they are non-negative if we also admit the values \(b\in(-\mathbb{Z}_{n})\) (in this case, the polynomial has zeros at the origin with multiplicity \(-b+1\), see (16)), and \(p\in\mathbb{P}_{n}(\mathbb{R})\) also if \(b\in(-1,0)\), see e.g. [52, SS6.73].
Several results on the zero interlacing of Laguerre polynomials can be found in [4, 22, 23]. For instance, for \(\alpha>-1\),
\[L_{n}^{\alpha}\preccurlyeq L_{n}^{\alpha+t},\quad 0\leq t\leq 2;\]
also, if \(\alpha+1\geq n\), then
\[L_{n}^{\alpha}\preccurlyeq L_{n}^{\alpha+3}.\]
These facts have the following translation in terms of the \({}_{1}F_{1}\) hypergeometric polynomials: for \(b>0\),
\[{}_{1}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n}{b};x\bigg{)}\preccurlyeq\ _{1} \mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n}{b+t};x\bigg{)},\quad 0\leq t\leq 2; \tag{55}\]
also, if \(b\geq n\), then
\[{}_{1}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n}{b};x\bigg{)}\preccurlyeq _{1}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n}{b+3};x\bigg{)}. \tag{56}\]
In the case of Bessel or reciprocal Laguerre polynomials, by (18),
\[{}_{2}\mathcal{F}_{0}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{.};x\bigg{)}=x^{n }\ {}_{1}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n}{-n-a+1};-\frac{1}{x} \bigg{)}=n!\,x^{n}L_{n}^{(-n-a)}(-1/x), \tag{57}\]
which shows that the zeros of
\[{}_{2}\mathcal{F}_{0}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{.};x\bigg{)}\]
are negative when \(a<-n+1\), and real when \(-n+1<a<-n+2\). Furthermore, since for \(-n-a-1\in(-\infty,-1)\setminus(-\mathbb{Z}_{n})\) we know that \(L_{n}^{(-n-a)}\) has all its roots different from \(0\), and has at least one complex root, we conclude that for \(a>-n+2\) (and \(a\notin(-\mathbb{Z}_{n})\)), not all zeros of
\[{}_{2}\mathcal{F}_{0}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{.};x\bigg{)}\]
are real. Finally, recall that in the remaining cases, when \(a\in(-\mathbb{Z}_{n})\), we obtain a real rooted polynomial of degree \(-a\) (smaller degree than \(n\)).
From the interlacing property (55) we readily deduce that for \(0\leq t\leq 2\) and \(a<-n+1\) one has
\[{}_{2}\mathcal{F}_{0}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{.};x\bigg{)} \preccurlyeq\ _{2}\mathcal{F}_{0}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a-t}{.};x\bigg{)}. \tag{58}\]
We summarize in Table 1 the real-rooted cases discussed so far.
Finally, by (19)-(21), the case of \({}_{2}F_{1}\) is equivalent to studying Jacobi polynomials \(P_{n}^{(\alpha,\beta)}\), see, e.g., [45, 52]. In particular, all pairs of parameters \((\alpha,\beta)\) for which their zeros are real and simple have been described in the literature (see [17]). For a small degree, \(n=1,2,3\), a complete description appears in [17, Prop. 4]; see [17, Thm. 5] for higher degrees. Formulas (23)-(24) also give us the cases where polynomials have multiple roots (which can occur only at \(\pm 1\)).
All these facts can be transferred to hypergeometric polynomials \({}_{2}F_{1}\) using identities (26)-(30). In Table 2 we summarize the known information, identifying when the roots are all non-positive, all non-negative, or when we have at least one positive and one negative root.
_Remark 4.1_.: A particular interesting case follows from (29) by taking \(k=1\): for every \(b\in\mathbb{R}\setminus\{-n\}\) one has
\[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ b+1}{b};x\right)=(1- x)^{n-1}\left(b-x(b+n)\right)=(-1)^{n}(b+n)(x-1)^{n-1}(x-\tfrac{b}{b+n}). \tag{59}\]
Notice that the only root of the polynomial in (59) that is not at \(x=1\) is negative if \(-n<b<0\), and positive if \(b<-n\) or \(b>0\). Using this expression, a direct computation yields that given a real rooted polynomial \(p(x)\),
\[p(x)\boxtimes_{n}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ b+1}{b };x\right)=(-1)^{n}\left(b+1\right)^{\overline{n-1}}\left(b\,p(x)+xp^{\prime} (x)\right),\]
see for instance [2, Lemma 3.5] for similar type of computation.
Moreover, since \({}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ b+1}{b};x\right)\) interlaces the multiplicative identity \({}_{1}\mathcal{F}_{0}\!\left(\genfrac{.}{.}{0.0pt}{}{-n}{\cdot};x\right)\) (either to the left or to the right, depending on the sign of \(b/(b+n)\)), we get in the same fashion that \(p(x)\) interlaces \(p(x)\boxtimes_{n}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ b+1}{b};x\right)\).
More generally, by (19)-(22), (29) and Table 2, we have that for \(k\in\mathbb{Z}_{n}\), polynomial
\[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ b+k}{b};x\right) \tag{60}\]
has a root at \(x=1\) of multiplicity \(n-k\); the remaining roots are all in \([0,1]\) if \(b>0\), and bigger than \(1\) if \(b<-n-k+1\).
Several results on the zero interlacing of Jacobi polynomials can be found in [21]. For instance, for \(\alpha,\beta>-1\),
\[P_{n}^{(\alpha+t,\beta)}\preccurlyeq P_{n}^{(\alpha,\beta+s)},\quad 0\leq t,s \leq 2;\]
actually, in both cases the interlacing is strict (\(\prec\)), unless \(t=s=0\). This fact has the following translation in terms of the \({}_{2}F_{1}\) hypergeometric polynomials:
* for \(b>0\) and \(a>n+b\), \[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a+s}{b};x\right) \preccurlyeq\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a+t}{b+t};x \right)\!,\quad 0\leq s,t\leq 2;\] (61)
* for \(b>0\) and \(a<-n+1\), \[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a}{b+t};x\right) \preccurlyeq\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a-s}{b};x\right)\!, \quad 0\leq s,t\leq 2;\] (62)
* For \(b<a-n+1\) and \(a<-n+1\), \[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a-t}{b-t};x\right) \preccurlyeq\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a}{b-s};x\right)\!, \quad 0\leq s,t\leq 2.\] (63)
### General hypergeometric polynomials
In this section, we will use the tools of Theorems 3.1 and 3.4 and Corollary 3.5.
For instance, a finite multiplicative convolution with a Laguerre polynomial yields the following result, which shows that we can add a positive parameter downstairs without affecting the real-rootedness of a hypergeometric polynomial. Moreover, if the parameters we add differ in less than \(2\), then we get interlacing and monotonicity:
**Theorem 4.2**.: _Let \(\boldsymbol{a}\in\mathbb{R}^{i}\), \(\boldsymbol{b}\in\mathbb{R}^{j}\), and \(\gamma>0\). Then_
\[{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}}{ \boldsymbol{b}};x\right)\in\mathbb{P}_{n}(\mathbb{R})\quad\Longrightarrow \quad{}_{i+1}\mathcal{F}_{j+1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}}{ \boldsymbol{b},\gamma};x\right)\in\mathbb{P}_{n}(\mathbb{R}). \tag{64}\]
_Moreover, if additionally \(0\leq t\leq 2\), then_
\[{}_{i+1}\mathcal{F}_{j+1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}}{ \boldsymbol{b},\gamma};x\right)\preccurlyeq\ i+1}\mathcal{F}_{j+1}\!\left( \genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}}{\boldsymbol{b},\gamma+t};x\right)\!.\]
Proof.: This is a straightforward consequence of the formulas in Theorem 3.1 with the explicit expression (17), the interlacing (55), and the properties of the multiplicative convolution stated in Section 2.5.
_Remark 4.3_.: In the previous proposition, we just illustrate the case that might be more useful, but there are several other interesting side results that either follow from the proposition or are its slight modifications. We collect some of these interesting facts next:
1. First, the addition of a positive parameter \(\gamma\) below not only preserves the class \(\mathbb{P}_{n}(\mathbb{R})\), but also \(\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\) and \(\mathbb{P}_{n}(\mathbb{R}_{\leq 0})\).
2. Also, notice that instead of "adding" a positive parameter downstairs, we can also "remove" a positive parameter from upstairs (or even "move" it from upstairs to downstairs). For instance, with the same hypothesis of Theorem 4.2, we obtain that \[{}_{i+2}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a},\gamma}{ \boldsymbol{b}};x\right)\in\mathbb{P}_{n}(\mathbb{R})\quad\Longrightarrow \quad{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}}{ \boldsymbol{b}};x\right)\in\mathbb{P}_{n}(\mathbb{R}).\] (65) This follows from the fact that if we already have a parameter \(\gamma\) upstairs, then by Theorem 4.2 we can add a parameter \(\gamma\) downstairs and then cancel both.
3. The interlacing preservation by the multiplicative convolution with a Laguerre polynomial can be stated in a more general form. Namely, with the same hypothesis of Theorem 4.2, if two real-rooted hypergeometric polynomials interlace, \[{}_{i_{1}+1}\mathcal{F}_{j_{1}}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}_{1}}{ \boldsymbol{b}_{1}};x\right)\preccurlyeq\ i_{2}+1}\mathcal{F}_{j_{2}}\!\left( \genfrac{.}{.}{0.0pt}{}{-n,\ \boldsymbol{a}_{2}}{\boldsymbol{b}_{2}};x\right)\!,\]
then \[{}_{i_{1}+1}\mathcal{F}_{j_{1}+1}\binom{-n,\ \boldsymbol{a}_{1}}{\boldsymbol{b}_{1}, \gamma};x\right)\preccurlyeq\,_{i_{2}+1}\mathcal{F}_{j_{2}+1}\binom{-n,\ \boldsymbol{a}_{2}}{\boldsymbol{b}_{2}, \gamma};x\bigg{)}.\]
4. Finally, notice that so far we have only been concerned with the case where we multiply by a standard Laguerre polynomial, corresponding to Row 2 in Table 1. However, we can adapt Theorem 4.2 (and its modifications that we just mentioned) to include Rows 3 and 4. The reader should be aware that in some cases we need stronger assumptions on the hypergeometric polynomial. For example, if we want to use the non-standard Laguerre polynomials in row 3 in Table 1, which are not in \(\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\), then the other polynomial must belong to \(\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\) rather than \(\mathbb{P}_{n}(\mathbb{R})\).
Although Theorem 4.2 is rather limited, it already covers useful results from the literature. Notice, for instance, that the well-know fact that
\[\sum_{k=0}^{n}a_{k}x^{k}\in\mathbb{P}_{n}(\mathbb{R})\quad\Rightarrow\quad \sum_{k=0}^{n}\frac{a_{k}}{k!}x^{k}\in\mathbb{P}_{n}(\mathbb{R})\]
(see [46, Theorem 2.4.1] or [47, Problem V.1.65]) is just a particular case of (64) with \(\gamma=1\).
In the same fashion, we can use convolution with Bessel polynomials (see Table 1 and interlacing (56)) to derive the following result:
**Theorem 4.4**.: _Let \(\boldsymbol{a}\in\mathbb{R}^{i}\), \(\boldsymbol{b}\in\mathbb{R}^{j}\), and \(\gamma<-n+1\). Then_
\[{}_{i+1}\mathcal{F}_{j}\binom{-n,\ \boldsymbol{a}}{\boldsymbol{b}};x\bigg{)} \in\mathbb{P}_{n}(\mathbb{R})\quad\Longrightarrow\quad{}_{i+2} \mathcal{F}_{j}\binom{-n,\ \boldsymbol{a},\gamma}{\boldsymbol{b}};x\bigg{)}\in \mathbb{P}_{n}(\mathbb{R}). \tag{66}\]
_Moreover, if additionally \(0\leq t\leq 2\), then_
\[{}_{i+2}\mathcal{F}_{j}\binom{-n,\ \boldsymbol{a},\gamma}{\boldsymbol{b}};x \bigg{)}\preccurlyeq\,_{i+2}\mathcal{F}_{j}\binom{-n,\ \boldsymbol{a},\gamma-t}{ \boldsymbol{b}}.\]
Following the same reasoning of Remark 4.3 used to extend Theorem 4.2, we can adapt Theorem 4.4 to produce related results. For example, we can remove a parameter \(\gamma<-n+1\) from downstairs, or we can preserve the interlacing of the given polynomial. We can also obtain a similar result using Rows 6 and 7 of Table 1 instead of just Row 5. We avoid the details for the sake of brevity.
Another example of how the free multiplicative convolution with Bessel polynomials allows us to give a straightforward proof of a known result is as follows: by (15),
\[r(x)=x^{n}L_{n}^{(0)}(1/x)=\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\frac{1}{k!}\,x^{ n-k}.\]
Hence, if
\[p(x)=\sum_{k=0}^{n}a_{k}x^{k}=\sum_{k=0}^{n}(-1)^{k}\left((-1)^{k}\,a_{n-k} \right)x^{n-k}\]
and
\[q(x)=\sum_{k=0}^{n}b_{k}x^{k}=\sum_{k=0}^{n}(-1)^{k}\left((-1)^{k}\,b_{n-k} \right)x^{n-k},\]
then
\[p(x)\boxtimes_{n}q(x)\boxtimes_{n}r(x)=\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}^{-1 }\frac{a_{n-k}\,b_{n-k}}{k!}\,x^{n-k}=\frac{(-1)^{n}}{n!}\,\sum_{k=0}^{n}k!\,a _{k}\,b_{k}\,(-x)^{k}.\]
We have that \(r\in\mathbb{P}_{n}(\mathbb{R}_{>0})\); if additionally, \(p,q\in\mathbb{P}_{n}(\mathbb{R}_{\leq 0})\), then by Proposition 2.7 (and the "rule of sign" formulated there), we have that \(p(x)\boxtimes_{n}q(x)\boxtimes_{n}r(x)\in\mathbb{P}_{n}(\mathbb{R}_{\geq 0})\). Thus,
\[p,q\in\mathbb{R}_{\leq 0}\quad\Rightarrow\quad\sum_{k=0}^{n}k!\,a_{k}b_{k}x^{k} \text{ has only real zeros},\]
which is a weaker version of Schur's theorem (where the assumptions are that \(p,q\in\mathbb{P}_{n}(\mathbb{R})\) and \(a_{k},b_{k}\geq 0\) for all \(k\)); it was first proved in [49], and appears also in [47, Problems V.2.155-156].
If we now consider the free multiplicative convolution with Jacobi polynomials, we get
**Theorem 4.5**.: _Let \(\mathbf{a}\in\mathbb{R}^{i}\), \(\mathbf{b}\in\mathbb{R}^{j}\), \(\beta>0\), and \(\alpha>\beta+n-1\). Then_
\[{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a}}{\mathbf{b}};x \right)\in\mathbb{P}_{n}(\mathbb{R})\quad\Longrightarrow\quad{}_{i+1} \mathcal{F}_{j+1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a},\ \alpha}{\mathbf{b},\ \beta};x \right)\in\mathbb{P}_{n}(\mathbb{R}),\]
_Moreover, additionally for \(0\leq t,s\leq 2\):_
\[{}_{i+1}\mathcal{F}_{j+1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a},\ \alpha+s}{\mathbf{b},\ \beta};x \right)\preccurlyeq\,{}_{i+1}\mathcal{F}_{j+1}\!\left(\genfrac{.}{.}{0.0pt}{ }{-n,\ \mathbf{a},\ \alpha+t}{\mathbf{b},\ \beta+t};x\right).\]
For the last assertion, we used interlacing properties (61)-(63). Following the same reasoning of Remark 4.3, used to extend Theorem 4.2, we can adapt Theorem 4.5 to yield related results. For example, the previous result corresponds to multiplication with a specific type of Jacobi polynomial, corresponding to row 2 of Table 2, but we can also obtain a similar result using the other rows of Table 2.
If we consider Laguerre, Bessel, and Jacobi polynomials as building blocks, and iteratively apply Theorems 4.2, 4.4, or 4.5, we can directly prove that a large class of hypergeometric polynomials (with several parameters) is real-rooted, or even more, their roots are all positive or all negative. Here are some illustrations.
**Theorem 4.6**.: _For any \(i,j\geq 0\), if \(b_{1},\ldots,b_{j}>0\) and \(a_{1},\ldots,a_{i}<-n+1\) then the roots of the hypergeometric polynomial_
\[p(x):={}_{i+1}\mathcal{F}_{j}\left(\genfrac{.}{.}{0.0pt}{}{-n,\mathbf{a}}{\mathbf{b}}; x\right),\]
_with \(\mathbf{a}=(a_{1},\ldots,a_{i})\), \(\mathbf{b}=\big{(}b_{1},\ldots,b_{j}\big{)}\), are all real and have the same sign. Specifically, if \(i\) is even, the roots are all positive, and if \(i\) is odd, the roots are all negative._
_Furthermore, if additionally \(0\leq t\leq 2\), then_
\[{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a}}{b_{1},\ \ldots,\ b_{j}};x \right)\preccurlyeq\,{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{ -n,\ \mathbf{a}}{b_{1}+t,\ b_{2},\ \ldots,\ b_{j}};x\right)\!,\]
_and_
\[{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1},\ \ldots,\ a_{i}}{\mathbf{b}};x \right)\preccurlyeq\,{}_{i+1}\mathcal{F}_{j}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1}-t,\ a_{2},\ \ldots,\ a_{i}}{\mathbf{b}};x\right)\!.\]
Proof.: By Theorem 3.1,
\[p(x)=\ {}_{2}\mathcal{F}_{0}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1}}{\mathbf{ \cdot}};x\right)\boxtimes_{n}\cdots\boxtimes_{n}\ {}_{2}\mathcal{F}_{0}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{i}}{\mathbf{ \cdot}};x\right)\boxtimes_{n}\ {}_{1}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{b}_{1}};x\right)\boxtimes_{n}\cdots\boxtimes_{n}\ {}_{1} \mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{b}_{j}};x\right)\!.\]
By Table 1, rows 2 and 5, the first \(i\) polynomials in the product are Bessel polynomials with all negative roots, while the the last \(j\) polynomials in the product are Laguerre polynomials with all positive roots. By the rule of signs from Section 2.5 applied several times, we find that all roots of
\(p\) have the same sign, and the sign depends on the parity of \(i\), the number of Bessel polynomials that we multiply.
The first interlacing result follows from the same factorization, but grouping all terms except the one with \(b_{1}\):
\[p(x)=\ _{1}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n}{b_{1}};x\bigg{)} \boxtimes_{n}\ _{i+1}\mathcal{F}_{j-1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a}}{b_{2},\ \ldots,\ b_{j}};x\bigg{)}.\]
Then, Theorem 4.2 with \(\gamma=b_{1}\) yields the desired result. Similarly, the second interlacing result follows from Theorem 4.4 with \(\gamma=a_{1}\). Of course, by symmetry, the interlacing results hold when we vary any given parameter in the polynomial, and not only \(a_{1}\) or \(b_{1}\).
The following proposition is established in a similar way, but now we require the use of Jacobi polynomials.
**Theorem 4.7**.: _If \(j\geq i\), \(b_{1},\ldots,b_{j}>0\) and \(a_{1},\ldots,a_{i}\in\mathbb{R}\) such that \(a_{s}\geq n-1+b_{s}\) for \(s=1,\ldots,i\), then the hypergeometric polynomial_
\[p(x)=\ _{i+1}\mathcal{F}_{j}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a}}{b};x \bigg{)},\]
_with \(\mathbf{a}=(a_{1},\ldots,a_{i})\), \(\mathbf{b}=\big{(}b_{1},\ldots,b_{j}\big{)}\), has all positive roots._
Proof.: First, we consider the case of \(i=j\). Then by Theorem 3.1,
\[p(x):=\ _{2}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1}}{b_{1}};x \bigg{)}\boxtimes_{n}\cdots\boxtimes_{n}\ _{2}\mathcal{F}_{1}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a_{i}}{b_{i}};x \bigg{)}\]
with \(b_{1},\ldots,b_{j},\in(0,\infty)\), and \(a_{s}\geq n-1+b_{s}\) for \(s=1,\ldots,i\). By (26) (see also Table 2, second row), this is a multiplicative convolution of polynomials with zeros in \((0,1)\), so that all roots of \(p\) are positive.
Now, if \(j>i\), the assertion follows from what we just established by invoking (64).
_Remark 4.8_.: Similarly to Theorem 4.6, we can obtain interlacing results by varying the parameters of the polynomial. Since this factorization involves Jacobi polynomials, we need to use Theorem 4.5 and vary the parameters accordingly.
Our final observation in this section is that, by taking the reciprocal polynomial, the study of \({}_{i+1}F_{j}\) polynomials is in a certain sense dual to the study of \({}_{j+1}F_{i}\) polynomials.
_Remark 4.9_.: Given a polynomial \(p\in\mathbb{P}_{n}(\mathbb{C})\), its **reciprocal polynomial** is defined formally by
\[\hat{p}(x):=x^{n}p(1/x). \tag{67}\]
Alternatively, its coefficients are given by \(e_{j}(\hat{p})=(-1)^{n}e_{n-j}(p)\), for \(j=0,1,\ldots,n\).
It is not difficult to check that when restricted to \(\mathbb{P}_{n}(\mathbb{C}\setminus\{0\})\), the reciprocation operation is actually an involution that maps a polynomial with roots \(\lambda_{1},\ldots,\lambda_{n}\), to a polynomial with roots \(1/\lambda_{1},\ldots,1/\lambda_{n}\). In particular, the sets \(\mathbb{P}_{n}(\mathbb{R}\setminus\{0\})\), \(\mathbb{P}_{n}(\mathbb{R}_{>0})\) and \(\mathbb{P}_{n}(\mathbb{R}_{<0})\) are invariant under this operation. Moreover, Lemma 2.1 has the simple consequence that
\[{}_{i+1}\mathcal{F}_{j}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a}}{b};x\bigg{)}\qquad\text{ and } \qquad{}_{j+1}\mathcal{F}_{i}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ -n-\mathbf{b}+1}{-n-\mathbf{a}+1};(-1)^{i+j}x\bigg{)}\]
are reciprocal polynomials. This implies that the reciprocation operation defines a bijection from the \({}_{i+1}F_{j}\) polynomials in \(\mathbb{P}_{n}(\mathbb{C}\setminus\{0\})\) to the \({}_{j+1}F_{i}\) polynomials in \(\mathbb{P}_{n}(\mathbb{C}\setminus\{0\})\). More precisely, if we let
\[p(x)=\ _{i+1}\mathcal{F}_{j}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ \mathbf{a}}{b};x \bigg{)},\quad q(x)=\ _{j+1}\mathcal{F}_{i}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ -n-\mathbf{b}+1}{-n-\mathbf{a}+1};x\bigg{)},\]
then
\[p\in\mathbb{P}_{n}(\mathbb{R}\setminus\{0\})\quad\Longleftrightarrow\quad q\in \mathbb{P}_{n}(\mathbb{R}\setminus\{0\}),\]
and
\[p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\quad\Longleftrightarrow\quad q\in\begin{cases} \mathbb{P}_{n}(\mathbb{R}_{>0})&\text{if $i+j$ is even},\\ \mathbb{P}_{n}(\mathbb{R}_{<0})&\text{if $i+j$ is odd}.\end{cases}\]
After this discussion, it should be clear that we can focus our study on polynomials of the type \(i\leq j\) and then extrapolate the results to the case \(i>j\). Notice that the fact that the properties of the Bessel polynomials follow from those of the Laguerre polynomials is the particular case of this bijection when \(i=0\), \(j=1\).
In the following sections, we will discuss in more detail what can be said about \({}_{2}F_{2}\), \({}_{3}F_{1}\) and \({}_{3}F_{2}\) hypergeometric polynomials, since these appear frequently in practical applications, but are much less studied than the \({}_{2}F_{1}\) functions. Once again, we seek to illustrate our approach without aspiring to provide comprehensive results on real-rootedness or interlacing of these polynomials.
### \({}_{2}f_{2}\) and \({}_{3}f_{1}\) Hypergeometric polynomials
Several conclusions about \({}_{2}F_{2}\) and \({}_{3}F_{1}\) polynomials can be reached using the results in Section 4.2. Some of them are known (and we provide a bibliographic reference whenever available or known to us), and others are apparently new.
By Remark 4.9, the \({}_{3}F_{1}\) polynomials are just reciprocal \({}_{2}F_{2}\) polynomials. Therefore, we will focus on the \({}_{2}F_{2}\) family and discuss some consequences for the \({}_{3}F_{1}\) polynomials at the end of the section.
Table 3 summarizes some results that we can obtain directly using additive or multiplicative convolution of our building blocks (e.g. Laguerre, Bessel, and Jacobi). In the following, we provide a brief justification when the result is not trivial.
A particular case of the identity from Theorem 3.1 is
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b_{1},\ b_{2}};x \right)=\ _{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b_{1}};x \right)\boxtimes_{n}\ _{1}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n}{b_{2}};x \right)\!. \tag{68}\]
Recall that whenever both polynomials on the right-hand side of (68) are in \(\mathbb{P}_{n}(\mathbb{R})\) and, additionally, one of them has all its roots of the same sign, we can conclude that the \({}_{2}F_{2}\) polynomial on the left in (68) is also in \(\mathbb{P}_{n}(\mathbb{R})\); this is the essence of Theorem 4.2 in the case \(i=j=1\). Furthermore, we can narrow down the location of its roots to a smaller subset of \(\mathbb{R}\) provided that we have additional information on the roots of the two factors on the right. Therefore, the first six rows of Table 3 is a result of combining rows 1-6 of Table 2 with row 2 of Table 1 via the identity (68). The other rows are a consequence of the following proposition:
**Proposition 4.10**.: _Let \(n\geq 4\), \(k\in\mathbb{Z}_{n}\), and \(t\in\mathbb{Z}_{n}\cup\mathbb{R}_{>n-2}\)._
_The polynomial_
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b_{1},\ b_{2}};x \right)\in\mathbb{P}_{n}(\mathbb{R}_{>0})\]
_if \(b_{2}>0\), and additionally, one of the following conditions holds:_
1. \(a=k+1/2\) _and either_ \(b_{1}=2-b_{2}>0\) _or_ \(b_{1}=1-b_{2}>0\)_;_
2. \(a=b_{1}+k-1/2\)_, and_ \(b_{1}=(b_{2}+t+1)/2\)_;_
3. \(a=(b_{1}+1)/2+k\)_, and_ \(b_{1}=2(b_{2}-1+t)\)_;_
4. \(a=b_{1}/2+k\) _and_ \(b_{1}=2(b_{2}+t)-1\)_._
_The polynomial_
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b_{1},\ b_{2}};x \right)\in\mathbb{P}_{n}(\mathbb{R})\]
_if one of the following conditions holds:_
1. \(a=b_{2}-1/2\)_,_ \(b_{1}=2b_{2}-2\)_, and_ \(b_{2}\in(0,1)\)_;_
2. \(a=b_{2}-1/2\)_,_ \(b_{1}=2b_{2}-1\)_, and_ \(b_{2}\in(-1,0)\)_;_
3. \(a=b_{2}+k-1/2\)_,_ \(b_{1}=2b_{2}-2\)_, and_ \(b_{2}\in(1/2,1)\)_;_
4. \(a=k+1/2\)_, and_ \(b_{1}+b_{2}\in\{1,2\}\)_, if_ \(b_{2}\in(-1,0)\)_._
Recall that in practice, \(b_{1}\) and \(b_{2}\) are indistinguishable, and their the roles can be interchanged.
Proof.: In Example 3.10 we wrote the polynomial (up to a constant term, and a change of variable \(x/4\mapsto x\))
\[p(x)=\ _{4}\mathcal{F}_{2}\!\left(\!\!\begin{array}{cc}-n,&-c-n+1,\ -d-n+1,\ -c-d-n+2\\ &-\frac{c+d}{2}-n+1,\ -\frac{c+d-1}{2}-n+1\end{array}\!;x\right)\]
as an additive convolution of two Bessel polynomials. Recall that the Bessel polynomials (53) in Example 3.10 have real roots for \(c,d>-1\) and only negative roots for \(c,d>0\). Since by (11), for this polynomial \(p\),
\[\frac{e_{n}(p)}{e_{0}(p)}=\frac{(-1)^{n}}{4^{n}}\frac{\left(c+d+n-1\right)^{ \overline{n}}}{\left(c\right)^{\overline{n}}\left(d\right)^{\overline{n}}},\]
we conclude that for \(c,d>-1\), \(c,d\neq 0\), an appropriately normalized \(p\) has no roots at the origin and \(p\in\mathbb{P}_{n}(\mathbb{R})\). Moreover, \(p\in\mathbb{P}_{n}(\mathbb{R}_{<0})\) if \(c,d>0\).
As a next step, we can use the reciprocal polynomial as in Remark 4.9 to transfer these results to a \({}_{3}F_{3}\) polynomial: namely, we get that
\[{}_{3}\mathcal{F}_{3}\!\left(\begin{array}{cc}-n,&\frac{c+d}{2},&\frac{c+d-1} {2}\\ c,&d,&c+d-1\end{array};x\right) \tag{69}\]
is real rooted for \(c,d>-1\), with all roots positive for \(c,d>0\).
Next, we can get a \({}_{2}F_{2}\) function using the cancellation of some parameters in the hypergeometric expression above; obviously, all conclusions about the location of their zeros remain. Referring to the cases enumerated in the statement of the theorem:
1. Setting in (69) \(c=d-1\) we conclude that \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&d-1/2\\ d,&2d-2\end{smallmatrix};x\right)\) is real-rooted for \(d>0\), \(d\neq 1\), with all roots positive for \(d>1\).
2. With \(c=d\) in (69), we get that \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&d-1/2\\ d,&2d-1\end{smallmatrix};x\right)\) is real-rooted for \(d>-1\), \(d\neq 0\), with all roots positive for \(d>0\).
3. With \(c=2-d\) in (69), we see that \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&1/2\\ d,&2-d\end{smallmatrix};x\right)\) is real-rooted for \(-1<d<3\), \(d\neq 0,2\), with all roots positive for \(0<d<2\).
4. Finally, \(c=1-d\) yield that \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&1/2\\ d,&1-d\end{smallmatrix};x\right)\) is real-rooted for \(-1<d<2\), \(c\neq 0,1\), with all roots positive for \(0<d<1\).
the assertions in (a) and (b) for the real-rooted case yield (v) and (vi) of the proposition, respectively. All other assertions can be extended to larger families by "replacing" a parameter upstairs by using the finite free multiplicative convolution of the polynomials from above with certain \({}_{2}F_{1}\) polynomials from row 1 of Table 2,4
Footnote 4: Obviously, there are some alternative (sometimes more direct) ways to obtain these results.
\[{}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&a_{1}\\ b_{1},&b_{2}\end{smallmatrix};x\right)=\ _{2}\mathcal{F}_{2}\!\left( \begin{smallmatrix}-n,&a_{2}\\ b_{1},&b_{2}\end{smallmatrix};x\right)\boxtimes_{n}\ _{2}\mathcal{F}_{1}\!\left( \begin{smallmatrix}-n,&a_{1}\\ a_{2}\end{smallmatrix};x\right)\!. \tag{70}\]
Namely, using (70) to combine polynomials from the assertions (a) and (b) above with the hypergeometric polynomial \({}_{2}\mathcal{F}_{1}\!\left(\begin{smallmatrix}-n,&d+k-1/2\\ d-1/2\end{smallmatrix};x\right)\), yields that for \(k=1,2,\ldots,n-1\),
1. \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&d+k-1/2\\ d,&2d-2\end{smallmatrix};x\right)\) is real-rooted for \(d>1/2\), \(d\neq 1\), with all roots positive for \(d>1\).
2. \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&d+k-1/2\\ d,&2d-1\end{smallmatrix};x\right)\) has all roots positive for \(d>1/2\).
the real-rooted case in (a') yields (vii) from the proposition.
Analogously, applying (70) to combine polynomials from (c) and (d) above with the hypergeometric polynomial \({}_{2}\mathcal{F}_{1}\!\left(\begin{smallmatrix}-n,&k+1/2\\ 1/2\end{smallmatrix};x\right)\), we conclude that
1. \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&k+1/2\\ d,&2-d\end{smallmatrix};x\right)\) is real-rooted for \(-1<d<3\), \(d\neq 0,2\), with all roots positive for \(0<d<2\).
2. \({}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&k+1/2\\ d,&1-d\end{smallmatrix};x\right)\) is real-rooted for \(-1<d<2\), \(d\neq 0,1\), with all roots positive for \(0<d<1\).
These yield (i) of the proposition, in the case of all positive roots, and (viii) otherwise.
In a similar fashion, we can replace some parameters downstairs. For instance, using
\[{}_{2}\mathcal{F}_{2}\!\left(\begin{smallmatrix}-n,&a_{1}\\ b_{1},&b_{2}\end{smallmatrix};x\right)=\ _{2}\mathcal{F}_{2}\!\left( \begin{smallmatrix}-n,&a_{1}\\ b_{1},&b_{3}\end{smallmatrix};x\right)\boxtimes_{n}\ _{2}\mathcal{F}_{1}\!\left( \begin{smallmatrix}-n,&b_{3}\\ b_{2}\end{smallmatrix};x\right) \tag{71}\]
to combine polynomials from (a') and (b') above with the polynomials \(\ {}_{2}\mathcal{F}_{1}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,\ {}_{2}d-2}{b};x\Big{)}\) and \(\ {}_{2}\mathcal{F}_{1}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,\ {}_{2}d-1}{b};x\Big{)}\), all of them satisfying the conditions of row 2 in table 2, yields that for \(n\geq 4\), \(b>0\) and \(k\in\mathbb{Z}_{n}\), the following families of \({}_{2}\mathcal{F}_{2}\) polynomials are real-rooted and their zeros all positive:
(a") \(\ {}_{2}\mathcal{F}_{2}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,\ {}_{d}+k-1/2}{d,\ b};x\Big{)}\) for \(2d-1=b+t\), where either \(t\in\mathbb{Z}_{n}\) or \(t>n-2\).
This gives us (ii) of the proposition. If we use (71) to combine polynomials from (a') and (b') above with \(\ {}_{2}\mathcal{F}_{1}\Big{(}\genfrac{.}{.}{0.0pt}{}{a,\ {}_{b}};x\Big{)}\) satisfying the conditions of row 2 in table 2, then we find that for \(n\geq 4\), \(b>0\) and \(k\in\mathbb{Z}_{n}\), the following families of \({}_{2}\mathcal{F}_{2}\) polynomials are real-rooted and all their zeros positive:
(b") \(\ {}_{2}\mathcal{F}_{2}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,\ {}_{2}d+k-1/2}{2d-2,\ b};x\Big{)}\) for \(d=b+t\), where either \(t\in\mathbb{Z}_{n}\) or \(t>n-2\).
(c") \(\ {}_{2}\mathcal{F}_{2}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,\ {}_{2}d+k-1/2}{2d-1,\ b};x\Big{)}\) for \(d=b+t\), where either \(t\in\mathbb{Z}_{n}\) or \(t>n-2\).
These imply (iii) and (iv), respectively.
_Remark 4.11_.: As we mentioned earlier, our approach is not exhaustive and Table 3 does not contain all combinations of parameters that produce real-rooted polynomials. However, we can use our method to exclude some combinations of parameters. For example, the contrapositive of (ii) in Proposition 2.7 states that \(p\boxtimes_{n}q\notin\mathbb{P}_{n}(\mathbb{R}),\ q\in\mathbb{P}_{n}(\mathbb{R }_{\geq 0})\ \Rightarrow\ p\notin\mathbb{P}(\mathbb{R})\). We can use it with factorization
(72)
Notice that for \(b_{2}<-n+1\), the second term on the left-hand side is a Bessel polynomial with all negative roots. Thus, if the pair \((a,b_{1})\) does not satisfy some of the conditions of Table 2 (so that the right-hand side is a Jacobi polynomial with at least one complex root), then \(\ {}_{2}\mathcal{F}_{2}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b_{1},\ b_{2}};x \Big{)}\) also has at least one complex root. This kind of argument allows one to find several major regions of parameters where we can assure that the corresponding polynomials are not real-rooted polynomials. For instance, we have that
\[\ {}_{2}\mathcal{F}_{2}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b_{1},\ b_{2}};x \Big{)}\notin\mathbb{P}(\mathbb{R}),\]
whenever one of the following assumptions holds:
* \(b_{1},b_{2}<-n+1\) and \(a>0\) or \(a<\max\{b_{1}-n+2,b_{2}-n+2\}\); or
* \(b_{1}<-n+1\), \(a>0\) and \(b_{2}>a+n-2\).
We can also draw conclusions about the interlacing of the \({}_{2}\mathcal{F}_{2}\) polynomials using (55)-(56) and (61)-(63). For instance, applying Theorem 4.2 to the polynomials in Table 2 we can obtain the following result that covers several important cases:
**Corollary 4.12**.: _Let \(c\in(0,\infty)\) and let \(a,b\in\mathbb{R}\) be two parameters such that \(\ {}_{2}\mathcal{F}_{2}\Big{(}\genfrac{.}{.}{0.0pt}{}{-n,a}{b};x\Big{)}\in \mathbb{P}_{n}(\mathbb{R})\) (for instance, \((a,b)\) belong to a case covered in Table 2). Then_
\[\ {}_{2}\mathcal{F}_{2}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b,c};x\Big{)} \ \preccurlyeq\ {}_{2}\mathcal{F}_{2}\bigg{(}\genfrac{.}{.}{0.0pt}{}{-n,\ a}{b,c+t};x\bigg{)},\quad 0 \leq t\leq 2.\]
Several interesting results on real-rootedness and zero interlacing of \({}_{2}F_{2}\) polynomials have been obtained in [31, Section 2]. We revisit them next, using our approach, which allows us to obtain some generalizations.
For example, the fact that for \(c>0\), \(k\in\mathbb{Z}_{n}\), and \(b+k\notin(-\mathbb{Z}_{n})\), with either \(b>0\), \(b<-n-k+1\), or \(b\in(-\mathbb{Z}_{n})\), then
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+k}{b,c};x\right)\in \mathbb{P}_{n}(\mathbb{R}_{\geq 0}),\]
which follows from rows \(2\) and \(3\) of Table 3, partially generalizes the statement of [31, Theorem 2.2] (where it is claimed that at least \(n-k\) roots are distinct and positive). Moreover, in the particular case of \(k=1\), identity (68) reads as
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+1}{b,c};x\right)=\ {}_{1} \mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n}{c};x\right)\boxtimes_{n}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+1}{b};x\right)\in \mathbb{P}_{n}(\mathbb{R}),\]
and we know that all roots are positive if \(b\notin[-n,0]\). Moreover, by Remark 4.1 we know that the resulting polynomial interlaces \({}_{1}\mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n}{c};x\right)\). These claims are precisely the content of [31, Theorem 2.3]. Actually, from the interlacing property of the Laguerre polynomials we can also derive that for \(0\leq t\leq 2\) the following interlacing holds:
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+1}{b,c};x\right)\prec \ _{2}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+1}{b,c+t};x\right)\!.\]
With a similar procedure, for the case \(k=2\) we can factorize
\[{}_{2}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+2}{b,c};x\right):=\ {}_{1} \mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n}{c};x\right)\boxtimes_{n}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n,b+2}{b};x\right)\!.\]
The conclusions of [31, Theorem 2.4] then follow from the properties of the \({}_{2}F_{1}\) polynomial on the right and the multiplicative convolution.
_Remark 4.13_.: The results in [31] are based on the quasi-orthogonality property of the corresponding hypergeometric polynomials. Several examples show that a finite free convolution of (quasi-)orthogonal polynomials can generate families of orthogonal or multiple orthogonal polynomials. Understanding this property further is an interesting open problem in this field.
To finish this section, we use Remark 4.9 to claim that \({}_{3}F_{1}\) polynomials are just reciprocal \({}_{2}F_{2}\) polynomials. To be more precise, a combination of parameters \((a,b_{1},b_{2})\) satisfying a condition of any row of Table 3 implies that
\[{}_{3}\mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n,-b_{1}-n+1,-b_{2}-n+1}{- a-n+1};x\right) \tag{73}\]
is real-rooted. We illustrate this assertion in the following Corollary that is a result of taking the reciprocal polynomials from Proposition 4.10:
**Corollary 4.14**.: _Let \(n\geq 4\), \(k\in\mathbb{Z}_{n}\), and \(t\in\mathbb{Z}_{n}\cup\mathbb{R}_{>n-2}\)._
_The polynomial_
\[{}_{3}\mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n,\ a_{1},\ a_{2}}{b};x \right)\in\mathbb{P}_{n}(\mathbb{R}_{<0})\]
_if \(a_{2}<-n+1\), and additionally, one of the following conditions holds:_
1. \(b=-n-k+1/2\) _and either_ \(a_{1}=-a_{2}-2n<-n+1\) _or_ \(a_{1}=-a_{2}-2n+1<-n+1\)_;_
2. \(b=a_{1}-k+1/2\)_, and_ \(a_{1}=(a_{2}-n-t)/2\)_;_
3. \(b=(a_{1}-n)/2-k\)_, and_ \(a_{1}=2a_{2}+n+1-2t\)
_._
* \(b=(a_{1}-n+1)/2-k\) _and_ \(a_{1}=2a_{2}+n-2t\)_._
_Moreover, the polynomial_
\[{}_{3}\mathcal{F}_{1}\!\left(\genfrac{}{}{0.0pt}{}{-n,\ a_{1},\ a_{2}}{b};x \right)\in\mathbb{P}_{n}(\mathbb{R})\]
_if one of the following conditions holds:_
* \(b=a_{2}+1/2\)_,_ \(a_{1}=2a_{2}+n+1\)_, and_ \(a_{2}\in(-n,-n+1)\)_;_
* \(b=a_{2}+1/2\)_,_ \(a_{1}=2a_{2}+n\)_, and_ \(a_{2}\in(-n+1,-n+2)\)_;_
* \(b=a_{2}-k+1/2\)_,_ \(a_{1}=2a_{2}+n+1\)_, and_ \(a_{2}\in(-n,-n+1/2)\)_;_
* \(b=-n-k+1/2\)_, and_ \(a_{1}+a_{2}+2n\in\{0,1\}\)_, if_ \(a_{2}\in(-n+1,-n+2)\)_._
We summarize some of our results on the real zeros of \({}_{3}F_{1}\) polynomials in Table 4.
### \({}_{3}f_{2}\) Generalized hypergeometric polynomials
Several results on real-rootedness of \({}_{3}F_{2}\) were obtained in the literature, in particular in [18, 20, 30, 31]. In this section, we focus on how to establish some generalizations of these results using finite free multiplicative convolution (Theorem 3.1).
As in the previous section, we factorize the \({}_{3}\mathcal{F}_{2}\) polynomial into the multiplicative convolution of two or more real-rooted polynomials, all but one of them with all roots of the same sign. Obviously, these representations are not unique and depend on how we partition the parameter space \(\boldsymbol{a}\times\boldsymbol{b}\), with \(\boldsymbol{a}=(a_{1},a_{2})\) and \(\boldsymbol{b}=(b_{1},b_{2})\), into subsets.
The most basic option is to represent \(\boldsymbol{a}\times\boldsymbol{b}\) as the union of \((\{a_{j}\},\{\cdot\})\) and \((\{\cdot\},\{b_{j}\})\), with \(j=1,2\), which produces a representation of the \({}_{3}\mathcal{F}_{2}\) polynomial as \({}_{2}\mathcal{F}_{0}\boxtimes{}_{2}\mathcal{F}_{0}\boxtimes{}_{1}\mathcal{F }_{1}\boxtimes{}_{1}\mathcal{F}_{1}\). To obtain real rooted polynomials, each parameter must satisfy rather restrictive conditions. In general, we can do better by using other partitions, namely:
* \((\{a_{1}\},\{b_{1}\})\cup(\{a_{2}\},\{b_{2}\})\), which produces a representation \({}_{2}\mathcal{F}_{1}\boxtimes{}_{2}\mathcal{F}_{1}\);
* \((\{a_{1}\},\{b_{1},b_{2}\})\cup(\{a_{2}\},\{\cdot\})\), which produces a representation \({}_{2}\mathcal{F}_{2}\boxtimes{}_{2}\mathcal{F}_{0}\);
* \((\{a_{1},a_{2}\},\{b_{1}\})\cup(\{\cdot\},\{b_{2}\})\), which produces a representation \({}_{3}\mathcal{F}_{1}\boxtimes{}_{1}\mathcal{F}_{1}\).
Let us discuss these three partitions more systematically.
**Case 1:** we can generate real-rooted polynomials by using the representation
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a_{1},a_{2}}{b_{1},b_{2 }};x\right)=\ _{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a_{1}}{b_{1}};x \right)\boxtimes_{n}\ _{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a_{2}}{b_{2}};x\right) \tag{74}\]
and the entries of Table 2. Namely, we can combine the parameters of any row of Table 2 with those in the rows corresponding to nonnegative (or nonpositive) zeros (rows 1-3). More precise information on the location of the zeros is obtained if we restrict ourselves to rows 1-3 only.
Since the methodology is straightforward, we are not going to provide a comprehensive list of the outcomes of such representations. Instead, we highlight some of the most interesting combinations of parameters \(a_{i}\) and \(b_{i}\) for which the polynomial on the left-hand side of (74) is real rooted, and point out how they generalize the already known results. Obviously, the roles of \(a_{1}\) and \(a_{2}\) (as well as \(b_{1}\) and \(b_{2}\)) can be freely interchanged.
For instance, by combining the major intervals of parameters in rows 1-3 from Table 2 we obtain six different domains of parameters for which the zeros of the polynomials are all real and have the same sign:
**Proposition 4.15**.: _Consider_
\[p(x)=\ _{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a_{1},a_{2}}{b_{1},b _{2}};x\right)\!.\]
1. _if_ \(b_{1},b_{2}>0\)_,_ \(a_{1}<-n+1\) _and_ \(a_{2}>\min\{b_{1},b_{2}\}+n-2\) _then_ \(p\in\mathbb{P}_{n}(\mathbb{R}_{<0})\)_._
2. _if_ \(b_{1},b_{2}>0\)_,_ \(a_{1}>b_{1}+n-2\) _and_ \(a_{2}>b_{2}+n-2\)_, then_ \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\)_._
3. _if_ \(b_{1},b_{2}>0\)_,_ \(a_{1},a_{2}<-n+1\) _then_ \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\)_._
4. _if_ \(a_{1},a_{2}<-n+1\)_,_ \(b_{1}>0\) _and_ \(b_{2}<\min\{a_{1},a_{2}\}-n+2\) _then_ \(p\in\mathbb{P}_{n}(\mathbb{R}_{\leq 0})\)_._
5. _if_ \(a_{1},a_{2}<-n+1\)_,_ \(b_{1}<a_{1}-n+2\) _and_ \(b_{2}<a_{2}-n+2\) _then_ \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\)_._
6. _if_ \(a_{1}<-n+1\)_,_ \(b_{1}<a_{1}-n+2\)_,_ \(b_{2}>0\) _and_ \(a_{2}>b_{2}+n-2\) _then_ \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\)_._
The result in (iii) generalizes [20, Theorem 9]. We summarize these six main regions in Table 5.
In addition to these six domains of parameters, we can obtain more admissible combinations using the remaining values from rows 1-3 in Table 2. Since the detailed explanation is tedious, our exposition here will be more schematic.
Namely, any assertion from Proposition 4.15 (equivalently, any row from Table 5) can be extended as follows:
1. by replacing condition \(b_{i}>0\) by \(b\in(-\mathbb{Z}_{n})\) we arrive at the same conclusion but with possible roots at the origin.
2. by replacing condition \(b_{i}<a_{i}-n+2\) (or equivalently, \(a_{i}>b_{i}+n-2\)) by \(b_{i}\in\{a_{i}-1,a_{i}-2,\ldots,a_{i}-n+1\}\) (or equivalently, \(a_{i}\in\{b_{i}+1,b_{i}+2,\ldots,b_{i}+n-1\}\)) we get the same conclusion as before.
This procedure can be iterated with the other set of parameters as long as the conditions are met.
Finally, we can apply (74) combining the parameters from rows 1-3 in Table 2 with those in rows 4-6. In this case, we get real-rooted polynomials, although we cannot guarantee that all the roots will be of the same sign. Once again, any assertion of Proposition 4.15 (or the one obtained after applying procedures (R1)-(R2)) can be extended as follows:
1. by replacing condition \(b_{i}>0\) by \(b_{i}>-1\), or
2. by replacing condition \(a_{i}<-n+1\) by \(a_{i}<-n+2\),
which yields real-rooted polynomials. Notice however that procedures (R3)-(R4) cannot be iterated: a second replacement amounts to finding the multiplicative convolution of two real-rooted polynomials (and not all roots are necessarily of the same sign), whose outcome is not determined a priori.
Let us illustrate the considerations above by retrieving some previously known results.
By applying replacement (R2) to parameters from row 1 in Table 5 we conclude that
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,a_{1},b_{2}+k}{b_{1},b_ {2}};x\right)\in\mathbb{P}_{n}(\mathbb{R}_{<0})\qquad\text{whenever }b_{1},b_{2}>0,\text{ and }a_{1}<-n+1.\]
This result (together with row 1 in Table 5 itself) generalizes [20, Theorem 7].
Analogously, applying replacement (R2) to parameters from rows 2, 4-6 in Table 5 (and with an appropriate reparametrization) we obtain that for
\[p(x)=\ _{3}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,a_{1},c+k}{b_{1},c};x \right)\!,\quad k=1,\ldots,n-1,\]
* \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\) whenever \(b_{1},c>0\), and \(a_{1}>b_{1}+n-2\).
* \(p\in\mathbb{P}_{n}(\mathbb{R}_{<0})\) whenever \(b_{1}>0\), \(c<-n-k+1\), and \(a_{1}<-n+1\).
* \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\) whenever \(a_{1}<-n+1\), \(c<-n-k+1\), and \(b_{1}<a_{1}-n+2\).
* \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\) whenever \(c>0\), \(a_{1}<-n+1\), and \(b_{1}<a_{1}-n+2\).
* \(p\in\mathbb{P}_{n}(\mathbb{R}_{>0})\) whenever \(b_{1}>0\), \(c<-n-k+1\), and \(a_{1}>b_{1}+n-2\).
These results partially extend [31, Corollary 3.2].
**Case 2:** we consider the representation
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,\ a_{1},\ a_{2}}{b_{1},\ b_{2}};x\right)=\ _{2}\mathcal{F}_{2}\!\left(\genfrac{}{}{0.0pt}{}{-n,\ a_{1}}{b_{1},\ b_{2}};x \right)\boxtimes_{n}\ _{2}\mathcal{F}_{0}\!\left(\genfrac{}{}{0.0pt}{}{-n,\ a_{2}}{ \cdot};x\right)\!, \tag{75}\]
combined with the results from Table 3 and the Bessel polynomials, rows 5-6 from Table 1. Again, we need one of the polynomials to be real-rooted and the other one to have only nonnegative or only nonpositive roots.
Notice that the first 6 rows of Table 3 were obtained using the representation of the \({}_{2}F_{2}\) polynomials as \({}_{2}\mathcal{F}_{1}\boxtimes{}_{1}\mathcal{F}_{1}\). Therefore, a combination of these rows with rows 5-6 of Table 1 is equivalent to the factorization \({}_{2}\mathcal{F}_{1}\boxtimes{}_{1}\mathcal{F}_{1}\boxtimes{}_{2}\mathcal{F}_ {0}\), which is already included in case 1. Hence, here we focus only on rows 7-15 of Table 3 (Proposition 4.10), which were obtained using additive convolution. For instance, a multiplicative convolution with row 5 of Table 1 yields:
**Proposition 4.16**.: _Let \(n\geq 4\), \(k\in\mathbb{Z}_{n}\), and \(t\in\mathbb{Z}_{n}\cup\mathbb{R}_{>n-2}\)._
_The polynomial_
\[{}_{3}\mathcal{F}_{2}\!\left(\begin{matrix}-n,\ a_{1},\ a_{2}\\ b_{1},\ b_{2}\end{matrix};x\right)\in\mathbb{P}_{n}(\mathbb{R}_{<0})\]
_if \(a_{2}<-n+1\), \(b_{2}>0\), and additionally, one of the following conditions holds:_
1. \(a_{1}=k+1/2\) _and either_ \(b_{1}=2-b_{2}>0\) _or_ \(b_{1}=1-b_{2}>0\)_;_
2. \(a_{1}=b_{1}+k-1/2\)_, and_ \(b_{1}=(b_{2}+t+1)/2\)_;_
3. \(a_{1}=(b_{1}+1)/2+k\)_, and_ \(b_{1}=2(b_{2}-1+t)\)_;_
4. \(a_{1}=b_{1}/2+k\) _and_ \(b_{1}=2(b_{2}+t)-1\)_._
_The polynomial_
\[{}_{3}\mathcal{F}_{2}\!\left(\begin{matrix}-n,\ a_{1},\ a_{2}\\ b_{1},\ b_{2}\end{matrix};x\right)\in\mathbb{P}_{n}(\mathbb{R})\]
_if \(a_{2}<-n+1\) and one of the following conditions holds:_
1. \(a_{1}=b_{2}-1/2\)_,_ \(b_{1}=2b_{2}-2\)_, and_ \(b_{2}\in(0,1)\)_;_
2. \(a_{1}=b_{2}-1/2\)_,_ \(b_{1}=2b_{2}-1\)_, and_ \(b_{2}\in(-1,0)\)_;_
3. \(a_{1}=b_{2}+k-1/2\)_,_ \(b_{1}=2b_{2}-2\)_, and_ \(b_{2}\in(1/2,1)\)_;_
4. \(a_{1}=k+1/2\)_, and_ \(b_{1}+b_{2}\in\{1,2\}\)_, if_ \(b_{2}\in(-1,0)\)_._
Once again, we gather these domains of parameters in Table 6.
A particular case of the assertions (vi) and (ii) (with \(k=t=0\)) of Proposition 4.16 is a generalization of [20, Theorem 8]. Furthermore, taking \(b_{1}=b_{2}=1\) and \(k=t=0\) in assertion (ii) of Proposition 4.16, we conclude that
\[p(x)=\ {}_{3}\mathcal{F}_{2}\!\left(\begin{matrix}-n,\ 1/2,\ -n\pm 1/2\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1,\ \ 1\end{matrix};x\right)\in\mathbb{P}_{n}(\mathbb{R}_{<0});\]
the real-rootedness of this polynomial was conjectured by B. Ringeling and W. Zudilin5.
Footnote 5: Personal communication.
**Case 3:** we consider the representation
\[{}_{3}\mathcal{F}_{2}\!\left(\begin{matrix}-n,\ a_{1},\ a_{2}\\ b_{1},\ b_{2}\end{matrix};x\right)=\ {}_{3}\mathcal{F}_{1}\!\left( \begin{matrix}-n,\ a_{1},\ a_{2}\\ b_{1}\end{matrix};x\right)\boxtimes_{n}\ {}_{1}\mathcal{F}_{1}\!\left( \begin{matrix}-n\\ b_{2}\end{matrix};x\right)\!, \tag{76}\]
combined with the results from Table 4 and the Laguerre polynomials, rows 2-3 from Table 1. Notice that if \(b_{2}>0\), it is sufficient for the \({}_{1}F_{1}\) polynomial to be real rooted. On the other hand, using Remark 4.9 and taking the reciprocal polynomials for both terms in (75) we can show that Case 3 is in a certain sense dual to Case 2. Thus, many of the results below can be derived alternatively using this duality.
For instance, combining the results of Theorem 4.2 and Corollary 4.14 using (76) we get
**Proposition 4.17**.: _Let \(n\geq 4\), \(k\in\mathbb{Z}_{n}\), and \(t\in\mathbb{Z}_{n}\cup\mathbb{R}_{>n-2}\)._
_The polynomial_
\[{}_{3}\mathcal{F}_{2}\!\left(\begin{matrix}-n,\ a_{1},\ a_{2}\\ b_{1},\ b_{2}\end{matrix};x\right)\in\mathbb{P}_{n}(\mathbb{R}_{<0})\]
_if \(a_{2}<-n+1\), \(b_{2}>0\), and additionally, one of the following conditions holds:_
1. \(b_{1}=-n-k+1/2\) _and either_ \(a_{1}=-a_{2}-2n<-n+1\) _or_ \(a_{1}=-a_{2}-2n+1<-n+1\)_;_
2. \(b_{1}=a_{1}-k+1/2\)_, and_ \(a_{1}=(a_{2}-n-t)/2\)_;_
3. \(b_{1}=(a_{1}-n)/2-k\)_, and_ \(a_{1}=2a_{2}+n+1-2t\)_;_
4. \(b_{1}=(a_{1}-n+1)/2-k\) _and_ \(a_{1}=2a_{2}+n-2t\)_._
_Moreover, the polynomial_
\[{}_{3}\mathcal{F}_{1}\!\left(\begin{matrix}-n,\ a_{1},\ a_{2}\\ b\end{matrix};x\right)\in\mathbb{P}_{n}(\mathbb{R})\]
_if_ \(b_{2}>0\) _and one of the following conditions holds:_
1. \(b_{1}=a_{2}+1/2\)_,_ \(a_{1}=2a_{2}+n+1\)_, and_ \(a_{2}\in(-n,-n+1)\)_;_
2. \(b_{1}=a_{2}+1/2\)_,_ \(a_{1}=2a_{2}+n\)_, and_ \(a_{2}\in(-n+1,-n+2)\)_;_
3. \(b_{1}=a_{2}-k+1/2\)_,_ \(a_{1}=2a_{2}+n+1\)_, and_ \(a_{2}\in(-n,-n+1/2)\)_;_
4. \(b_{1}=-n-k+1/2\)_, and_ \(a_{1}+a_{2}+2n\in\{0,1\}\)_, if_ \(a_{2}\in(-n+1,-n+2)\)_._
We summarize these results in Table 7.
As we have mentioned a few times, we do not claim that our approach is universal, and there are several results in the literature that we do not yet know how to prove using our method. To bridge this gap, we could try to use a combination of additive and multiplicative convolutions, similar to what was done in Proposition 4.10. However, we have not been able to find new combinations of parameters that lead to real-rooted families using this idea.
Also, it is tempting to use the identity
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1},\ a_{2}}{b_{1},\ b_{2}};x \right)=\ {}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1},\ a_{3}}{b_{1},\ b_{2}};x\right)\boxtimes_{n}\ {}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1}}{a_{3}};x \right)\!, \tag{77}\]
to extend previous assertions to some other combination of parameters. Nevertheless, at this stage, it does not lead to new results. The reason is that all real-rooted \({}_{3}\mathcal{F}_{2}\) polynomials obtained so far are a result of the multiplicative convolution of more elementary "blocks", so that by replacing a parameter in a factor we get another factorization that was already considered.
However, we can use (77) to extend some results in the literature to wider regions of parameters. We finish this section by illustrating it with some examples from [18].
For example, in [18, Theorem 3.6] it was proved that
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ n+1,\ 1/2}{b_{1},\ 2-b_{1}};x \right)\in\mathbb{P}_{n}(0,1)\]
if \(b_{1}\in(0,2)\). A multiplicative convolution with the polynomial
\[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a}{n+1};x\right)\in \mathbb{P}_{n}(\mathbb{R}_{>0})\quad\text{for }a>2n-1,\]
satisfying conditions from Row 2 of Table 2, shows that for \(a>2n-1\),
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a,\ 1/2}{b_{1},\ 2-b_{1}};x \right)\in\mathbb{P}_{n}(\mathbb{R}_{>0}).\]
Analogously, by [18, Theorem 3.3],
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,2b_{1}+n,\ b_{1}-1/2}{b _{1},\ 2b_{1}-1};x\right)\in\mathbb{P}_{n}(\mathbb{R}_{>0})\]
if \(b_{1}>0\). Thus, a multiplicative convolution with the polynomial
\[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1}}{2b_{1}+n};x \right)\in\begin{cases}\mathbb{P}_{n}(\mathbb{R}_{<0})&\text{for }a_{1}<-n+1,\\ \mathbb{P}_{n}(\mathbb{R}_{>0})&\text{for }a_{1}>2(b_{1}+n-1),\end{cases}\]
satisfying conditions from Rows 1 and 2 of Table 2, shows that for \(a>2(b_{1}+n-1)\),
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,a_{1},\ b_{1}-1/2}{b _{1},\ 2b_{1}-1};x\right)\in\begin{cases}\mathbb{P}_{n}(\mathbb{R}_{<0})&\text{for }a_{1}<-n+1,\,b_{1}>0,\\ \mathbb{P}_{n}(\mathbb{R}_{>0})&\text{for }a_{1}>2(b_{1}+n-1),\,b_{1}>0.\end{cases}\]
Finally, by [18, Theorem 3.4],
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,2b_{1}+n-1,\ a_{2}}{b _{1},\ b_{2}};x\right)\in\mathbb{P}_{n}(\mathbb{R}_{>0})\]
if \(a_{2}=b_{1}-1/2\), \(b_{2}=2b_{1}-2\), and \(b_{1}>1\). Taking a multiplicative convolution with the polynomial
\[{}_{2}\mathcal{F}_{1}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1}}{2b_{1}+n-1};x \right)\in\begin{cases}\mathbb{P}_{n}(\mathbb{R}_{<0})&\text{for }a_{1}<-n+1,\\ \mathbb{P}_{n}(\mathbb{R}_{>0})&\text{for }a_{1}>2(b_{1}+n)-3,\end{cases}\]
satisfying conditions from Rows 1 and 2 of Table 2, it shows that
\[{}_{3}\mathcal{F}_{2}\!\left(\genfrac{.}{.}{0.0pt}{}{-n,\ a_{1},\ a_{2}}{b_{1},\ b_{2}};x\right)\in\begin{cases}\mathbb{P}_{n}(\mathbb{R}_{<0})&\text{for }a_{1}<-n+1,\\ \mathbb{P}_{n}(\mathbb{R}_{>0})&\text{for }a_{1}>2(b_{1}+n)-3,\end{cases}\]
again, if \(a_{2}=b_{1}-1/2\), \(b_{2}=2b_{1}-2\), and \(b_{1}>1\).
We summarize these final results in Table 8.
## 5. Finite free probability and asymptotics
### Free probability
The goal of this section is to briefly explain how we can recast the previous results in the framework of free probability. Free probability is a theory that studies non-commutative random variables, and it is especially useful in the study of the spectra of large random matrices.
There are several natural parallels between the commutative and non-commutative theories. Since the concept of independence in classical probability theory is commutative in nature, it is replaced by the notion of "freeness" or free independence, which is better suited to non-commutative random variables. Moreover, the central operations are the free additive \(\boxplus\) and the free multiplicative convolution \(\boxtimes\) of the measures which naturally correspond to the sum and multiplication of free random variables. The study of the free convolutions \(\boxplus\) and \(\boxtimes\) can be addressed either using the original Voiculescu's analytic tools such as \(R\)-transform and \(S\)-transform, or by the combinatorial theory developed by Nica and Speicher that makes use of free cumulants and noncrossing set partitions. Throughout this section, we assume that the reader has some familiarity with the theory of free probability; the standard references are [53] for the analytical perspective and [44] for the combinatorics perspective.
The connection between the convolutions of polynomials and free probability (reason for the name of "finite free" convolutions) was first noticed by Marcus, Spielman, and Srivastava in [40], when they used Voiculescu's \(R\)-transform and \(S\)-transform to improve the bounds on the largest root of a convolution of two real-rooted polynomials. This connection was explored further in [38], where Marcus defined a finite \(R\)-transform and \(S\)-transform that are related to Voiculescu's transforms in the limit. Using finite free cumulants, Arizmendi and Perales [3], showed that finite free additive convolution becomes a free additive convolution. This was later proved for the multiplicative convolution by Arizmendi, Garza-Vargas and Perales [2].
There is a natural way to associate a probability measure with a polynomial: given a polynomial \(p\) of degree \(n\) and roots \(\lambda_{j}(p)\), \(j=1,\ldots,n\) (not necessarily all distinct), its (normalized) **zero counting measure** (also known in this context as the **empirical root distribution** of \(p\)) is
\[\mu(p):=\frac{1}{n}\sum_{j=1}^{n}\delta_{\lambda_{j}(p)}, \tag{78}\]
where \(\delta_{z}\) is the Dirac delta (unit mass) placed at the point \(z\). The corresponding moments of \(\mu(p)\) (which we also call the moments of \(p\), stretching the terminology a bit) are
\[m_{k}(p):=\frac{1}{n}\sum_{j=1}^{n}\lambda_{j}^{k}(p)=\int x^{k}\,d\mu_{p}, \quad k=0,1,2,\ldots\]
As mentioned above, the connection between finite and standard free probability is revealed in the asymptotic regime, when we let the degree \(n\to\infty\). We say that the sequence of polynomials \(\mathfrak{p}=(p_{n})_{n=1}^{\infty}\) such that each \(p_{n}\) is real-rooted and of degree exactly \(n\)**(weakly) converges** (or converges in moments) if there is a probability measure \(\nu(\mathfrak{p})\) on \(\mathbb{R}\) with all its moments finite such that
\[\lim_{n\to\infty}m_{k}(p_{n})=m_{k}(\nu(\mathfrak{p})),\qquad k=0,1,2,\ldots\]
Note that if the moment problem for \(\nu(\mathfrak{p})\) is determined, this implies the weak-* convergence of the sequence \(\mu(p_{n})\) to \(\nu(\mathfrak{p})\).
**Proposition 5.1** (Corollary 5.5 in [3], and Theorem 1.4 in [2]).: _Let \(\mathfrak{p}:=(p_{n})_{n=1}^{\infty}\) and \(\mathfrak{q}:=(q_{n})_{n=1}^{\infty}\) be two sequences of real-rooted polynomials as above, and let \(\nu(\mathfrak{p})\) and \(\nu(\mathfrak{q})\) be two compactly supported probability Borel measures on \(\mathbb{R}\) such that \(\mathfrak{p}\) (respectively, \(\mathfrak{q}\)) weakly converges to \(\nu(\mathfrak{p})\) (respectively, \(\nu(\mathfrak{q})\)). Then_
1. \((p_{n}\boxplus_{n}q_{n})_{n=1}^{\infty}\) _weakly converges to_ \(\nu(\mathfrak{p})\boxplus\nu(\mathfrak{q})\)_._
2. _if, additionally, for all sufficiently large_ \(n\)_,_ \(p_{n},q_{n}\subset\mathbb{P}_{n}(\mathbb{R}_{>0})\) _then_ \((p_{n}\boxtimes_{n}q_{n})_{n=1}^{\infty}\) _weakly converges to_ \(\nu(\mathfrak{p})\boxtimes\nu(\mathfrak{q})\)_._
These results imply that, in the limit \(n\to\infty\), we can replace the finite free convolution with the standard free convolution of measures. Thus, by combining this property with the results of this paper, we can systematically study the asymptotics of the root counting measures of families of hypergeometric polynomials.
In the rest of this section, we illustrate these ideas in the simplest cases. A deeper analysis (in particular, with applications in approximation theory) is one of the goals of future work.
### Parameter rescaling
In order to obtain nontrivial sequences of weakly converging hypergeometric polynomials, we need to allow the parameters \(\boldsymbol{a}\) and \(\boldsymbol{b}\) to depend on degree \(n\). To simplify the presentation, we introduce the following notation:
**Notation 5.2**.: _Given \(i,j,n\in\mathbb{N}\), \(\boldsymbol{a}=(a_{1},\ldots,a_{i})\in\mathbb{R}^{i}\) and \(\boldsymbol{b}=(b_{1},\ldots,b_{j})\in\mathbb{R}^{j}\), we denote by \(\mathcal{H}_{n}\Big{[}\begin{smallmatrix}\boldsymbol{b}\\ \boldsymbol{a}\end{smallmatrix}\Big{]}(x)\) the unique monic polynomial of degree \(n\) with coefficients in representation (1) given by_
\[e_{k}\left(\mathcal{H}_{n}\Big{[}\begin{smallmatrix}\boldsymbol{b}\\ \boldsymbol{a}\end{smallmatrix}\Big{]}\right):=\binom{n}{k}\frac{( \boldsymbol{b}n)^{\underline{k}}}{(\boldsymbol{a}n)^{\underline{k}}},\qquad \text{for $k=1,\ldots,n$.}\]
In order to avoid indeterminacy, in this section we assume that
\[a_{s}\not\in\left\{\tfrac{1}{n},\tfrac{2}{n},\ldots,\tfrac{n-1}{n}\right\}, \quad s=1,\ldots,i.\]
There is a direct connection of the polynomials we just introduced with the hypergeometric polynomials in standard normalization:
\[\begin{split}\mathcal{H}_{n}\Big{[}\begin{smallmatrix} \boldsymbol{b}\\ \boldsymbol{a}\end{smallmatrix}\Big{]}(x)&=\frac{(-1)^{n}}{( \boldsymbol{a}n)^{\underline{n}}}\ i_{+1}\mathcal{F}_{j}\binom{-n,\boldsymbol{a }n-n+1}{\boldsymbol{b}n-n+1};x\bigg{)}\\ &=\frac{(-1)^{n}\,(\boldsymbol{b}n)^{\underline{n}}}{(\boldsymbol {a}n)^{\underline{n}}}\ i_{+1}F_{j}\binom{-n,\boldsymbol{a}n-n+1}{\boldsymbol {b}n-n+1};x\bigg{)},\end{split} \tag{79}\]
where \(\boldsymbol{c}n-n+1\) means that we multiply each entry of \(\boldsymbol{c}\) by \(n\) and then add \(-n+1\).
With the new notation, the simplest families of real rooted polynomials look as follows:
**Identity for the multiplicative convolution:**\(\mathcal{H}_{n}\big{[}\begin{smallmatrix}\boldsymbol{a}\\ \boldsymbol{a}\end{smallmatrix}\big{]}(x)=\mathcal{H}_{n}\Big{[}\begin{smallmatrix} -\\ -\end{smallmatrix}\Big{]}(x)=(x-1)^{n}\).
**Identity for the additive convolution:**: \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{0}{a}(x)=x^{n}\).
**Laguerre polynomials:**: \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{-}\),
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{-}\in\mathbb{P}(\mathbb{R}_{>0})\) when \(b>1-\frac{1}{n}\).
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{-}\in\mathbb{P}(\mathbb{R}_{\geq 0})\) when \(b\in\{\frac{1}{n},\frac{2}{n},\ldots,\frac{n-1}{n}\}\), with a multiplicity of \((1-b)n\) at \(0\).
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{-}\in\mathbb{P}(\mathbb{R})\) when \(b\in(\frac{n-2}{n},\frac{n-1}{n})\).
**Bessel polynomials:**: \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{-}{a}\),
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{-}{a}\in\mathbb{P}(\mathbb{R}_{<0})\) when \(a<0\).
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{-}{a}\in\mathbb{P}(\mathbb{R})\) when \(a\in(0,\frac{1}{n})\).
**Jacobi polynomials:**: \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{a}\),
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{a}\in\mathbb{P}([0,1])\) when \(b>1\) and \(a>b+1\).
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{a}\in\mathbb{P}(\mathbb{R}_{<0})\) when \(b>1\) and \(a<0\).
* \(\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{b}{a}\in\mathbb{P}(\mathbb{R}_{>0})\) when \(a<0\) and \(b<a-1\).
Take note that the case of Jacobi polynomials does not cover all the combination of parameters that lead to real-rooted polynomials; the reader is referred to Table 2 for further details.
One combination that is particularly interesting corresponds to polynomials with only roots at \(1\) and \(0\):
\[\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{k/n}{1}(x)=(x-1)^{k}x^{n-k},\qquad\text {for }k=0,1,2,\ldots,n.\]
_Remark 5.3_.: In the realm of finite free probability, the Laguerre polynomials were first studied by Marcus [38, Section 6.2.3] using the finite \(R\)-transform and later in [3] using finite free cumulants. To our knowledge, the families of Bessel and Jacobi polynomials have not been studied in this context, except for some particular cases, such as Gegenbauer (or ultraspherical) polynomials that appeared in [26, Section 6].
Notice that our previous results can be easily rewritten in the new notation. For instance, consider tuples \(\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}\) of sizes \(i_{1},i_{2},i_{3},j_{1},j_{2},j_{3}\), respectively, then two reciprocal polynomials from Remark 4.9 are of the form
\[\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{\mathbf{b}}{a}(x)\qquad\text{and}\qquad \mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{-\mathbf{a}+1-1/n}{-\mathbf{b}+1-1/n}\left[(-1) ^{i+j}x\right]. \tag{80}\]
The multiplicative convolution (Theorem 3.1) works in exactly the same way:
\[\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{\mathbf{b}_{1}}{a_{1}}\boxtimes_{n} \mathcal{H}_{n}\Big{[}\begin{matrix}\mathbf{b}_{2}\\ \mathbf{a}_{2}\end{matrix}\Big{]}=\mathcal{H}_{n}\genfrac{[}{]}{0.0pt}{}{\mathbf{b}_{1 },\;\mathbf{b}_{2}}{a_{1},\;\mathbf{a}_{2}}. \tag{81}\]
And the additive convolution, specifically Corollary 3.5, can be rephrased as follows: assume that the following factorization holds,
\[{}_{j_{1}}F_{i_{1}}\genfrac{(}{]}{0.0pt}{}{-n\mathbf{b}_{1}}{-n\mathbf{a}_{1}};x\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
The same applies to examples from Section 3.2 that provide nontrivial cases with interesting interpretation from the point of view of finite free probability:
* From Example 3.7 it follows that \[\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{b_{1}}{-}\Big{]}\boxplus_{n} \mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{b_{2}}{-}\Big{]}=\mathcal{H}_{n} \Big{[}\genfrac{[}{]}{0.0pt}{}{b_{1}+b_{2}}{-}\Big{]},\qquad b_{1},b_{2}\in \mathbb{R}.\] The Laguerre polynomials are one of the basic families of polynomials studied within the framework of finite free probability. This result is a direct consequence of the finite \(R\)-transform of the Laguerre polynomials calculated by Marcus [38, Section 6.2.3]. Equivalently, this result follows from the fact that all finite free cumulants of a Laguerre polynomial \(\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{b}{-}\Big{]}\) are all equal to \(b\)[3, Example 6.2].
* Example 3.8 we obtain \[\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{a_{1}+a_{2}-b}{-}\Big{]}\boxplus_{n} \mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{b-a_{1},\;b-a_{2}}{b}\Big{]}= \mathcal{H}_{n}\big{[}\genfrac{[}{]}{0.0pt}{}{a_{1},\;a_{2}}{b}\big{]},\qquad a _{1},a_{2},b\in\mathbb{R}.\]
* Example 3.9 yields \[\mathcal{H}_{n}\bigg{[}\genfrac{[}{]}{0.0pt}{}{a,\;b}{a+b-\frac{1}{2n}}\bigg{]} ^{(\boxplus_{n})2}=\mathcal{H}_{n}\bigg{[}\genfrac{[}{]}{0.0pt}{}{2a,\;2b,\;a +b}{a+b-\frac{1}{2n},\;2a+2b}\bigg{]},\qquad a,b\in\mathbb{R}.\]
* By Example 3.10, \[\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{-}{2a}\Big{]}\boxplus_{n} \mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{-}{2b}\Big{]}=\mathrm{Dil}_{4} \ \mathcal{H}_{n}\bigg{[}\genfrac{[}{]}{0.0pt}{}{a+b,\;a+b-\frac{1}{2n}}{2a,\;2b,\; a+2b-\frac{1}{n}}\bigg{]},\qquad a,b\in\mathbb{R},\] where \[\mathrm{Dil}_{s}p(x):=s^{n}p(x/s),\quad s>0.\] (84)
In what follows, we derive some asymptotic formulas for the zero distribution of hypergeometric polynomials that can be represented in terms of some more elementary "building blocks". Thus, we start by discussing the zero distribution of the most basic sequences of real-rooted polynomials.
### Asymptotic results and new insights in free probability
Proposition 5.1 allows us to infer the asymptotic zero distribution of a sequence of polynomials that can be represented as a finite free convolution of simpler components, such as Laguerre, Bessel and Jacobi polynomials. We use the following notation for reparameterized polynomials:
\[\widehat{L}_{n}^{(a)} :=\mathrm{Dil}_{1/n}\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{ a}{-}\Big{]}=\frac{1}{n^{n}}\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{a}{-} \Big{]}(nx),\] \[\widehat{B}_{n}^{(a)}(x) :=\mathrm{Dil}_{n}\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{ a}{-}\Big{]}=n^{n}\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{-}{a}\Big{]}(x/n),\] \[\widehat{J}_{n}^{(b,a)}(x) :=\mathcal{H}_{n}\Big{[}\genfrac{[}{]}{0.0pt}{}{b}{a}\Big{]}(x),\]
where the dilation operator is defined in (84).
Rescaling is needed in the case of Laguerre and Bessel polynomials, where otherwise the zeros would not be uniformly bounded (and weak compactness of the zero-counting measures is not guaranteed). With this definition, actually all three sequences, of Laguerre \(\widehat{L}^{(b)}:=\Big{(}\widehat{L}_{n}^{(b)}\Big{)}_{n=1}^{\infty}\), Bessel \(\widehat{B}^{(a)}:=\Big{(}\widehat{B}_{n}^{(a)}\Big{)}_{n=1}^{\infty}\) and Jacobi \(\widehat{J}^{(b,a)}:=\Big{(}\widehat{J}_{n}^{(b,a)}\Big{)}_{n=1}^{\infty}\) polynomials, whenever real-rooted, are weakly converging. Their limiting measures are well known and can be computed using standard arguments from the theory of orthogonal polynomials:
* Laguerre polynomials \(\widehat{L}^{(b)}:=\left(\widehat{L}^{(b)}_{n}\right)_{n=1}^{\infty}\): for \(b>1\), the limiting measure is \(\nu(\widehat{L}^{(b)})=\mu_{\mathrm{MP}_{b}}\), the Marchenko-Pastur law with parameter \(b\), which is an absolutely continuous probability measure on \([r_{-},r_{+}]\), with \[d\mu_{\mathrm{MP}_{b}}=\frac{1}{2\pi}\frac{\sqrt{(r_{+}-x)(x-r_{-})}}{x}dx, \qquad\text{where}\qquad r_{\pm}=b+1\pm 2\sqrt{b}.\] This distribution has been rediscovered many times and can be obtained from different perspectives: as the equilibrium measure on \(\mathbb{R}_{+}\) in presence of an external field (see [48, Ch. IV]), from the integral representation of the Laguerre polynomials [5, 15, 25], or from their differential equation [41, 42]. For \(b\in(0,1)\), formulas (16)-(17) suggest that in the limit we get the Marchenko-Pastur distribution with an additional atom (mass point or Dirac delta) at \(x=0\). This is indeed the case, as it can be easily established by either of the methods mentioned above.
* Bessel polynomials is \(\widehat{B}^{(a)}:=\left(\widehat{B}^{(a)}_{n}\right)_{n=1}^{\infty}\): for \(a<0\), the limiting measure \(\mu_{\mathrm{RMP}_{a}}\) is the reciprocal of a Marchenko-Pastur law of parameter \(1-a\): \[d\mu_{\mathrm{RMP}_{a}}=\frac{-a}{2\pi}\frac{\sqrt{(r_{+}-x)(x-r_{-})}}{x^{2} }dx,\qquad\text{where}\qquad r_{\pm}=\frac{1}{a-2\pm 2\sqrt{1-a}},\] which is a simple consequence of their connection with the Laguerre polynomials.
* Jacobi polynomials \(\widehat{J}^{(b,a)}:=\left(\widehat{J}^{(b,a)}_{n}\right)_{n=1}^{\infty}\): their asymptotic zero distribution \(\mu_{b,a}:=\nu(\widehat{J}^{(b,a)})\) depends on which of the three major parameters regions we are considering:
* When \(b>1\) and \(a>b+1\),
* When \(b>1\) and \(a<0\),
* When \(a<0\) and \(b<a-1\). \[d\mu_{b,a}=\frac{-ax}{4\pi}\frac{\sqrt{(r_{+}-x)(x-r_{-})}}{x-1}dx, \qquad\text{where}\qquad r_{\pm}=\left(\frac{b-1}{\sqrt{(a-1)b}\mp\sqrt{a-b}} \right)^{2}.\]
* As in the case of the Laguerre polynomials, these results follow considering either the weighed equilibrium problem for the logarithmic potential in an external field [48, Ch. IV]), their integral representation [15], their differential equation [41, 42], or even from their orthogonality relations [37] combined with the Riemann-Hilbert method [36]. The distribution \(\mu_{b,a}\) in case (J1) has already been studied in the realm of free probability in [55, Definition 3.10].6 In that work, for \(c,d>1\), the free beta distribution is given by \(f\beta(c,d)=\mu_{c,c+d}\). This is hardly a surprise, since a direct consequence of equation (81) is the following identity:
Footnote 6: We thank Katsunori Fujie and Yuki Ueda for mentioning to the third author this reference and the possible connection.
\[\widehat{J}^{(c,c+d)}_{n}\boxtimes_{n}\widehat{L}^{(c+d)}_{n}=\widehat{L}^{(c )}_{n},\]
which can be informally restated as that the Jacobi polynomials can be obtained as a quotient (in the free convolution sense) of two Laguerre polynomials. By letting \(n\to\infty\) we see that Yoshida's free beta distribution satisfies
\[f\beta(a,b)\boxtimes\operatorname{MP}_{a+b}=\operatorname{MP}_{a},\]
where as before, \(\operatorname{MP}_{c}\) denotes the Marchenko-Pastur distribution of parameter \(c\). This is consistent with the fact that the free beta distribution can be obtained as a quotient of variables distributed according to a Marchenko-Pastur of different parameters.7
Footnote 7: We did not find an explicit formula but this is inferred implicitly in [55] due to the relation between the free beta and the free beta prime distributions.
In [55], the parameters are also allowed to be in the larger set \(c,d>0\) (instead of \(c,d>1\)) with the additional condition that \(c+d>1\), in which case, as the formulas (23)-(25) for Jacobi polynomials suggest, the distribution can have atoms. Once again, this fact is rigorously established using any of the asymptotic methods mentioned above.
After the previous discussion, one should be convinced that the Jacobi polynomials are the finite free analogue of the free beta distribution. This parallel can be observed also in random matrix theory. It is well known that the Hermite polynomials are tied to the study of the Gaussian Orthogonal Ensemble and the Laguerre polynomials are related to the real Wishart matrices (for a discussion of this in the realm of finite free probability see [2, Section 5]). In the same spirit, the Jacobi polynomials are related to the Jacobi ensembles, which are precisely those that can be constructed by taking the quotient of two Wishart ensembles. We refer the reader to [10] for a detailed study of eigenvalues of Jacobi ensembles using Jacobi polynomials and free probability.
Our previous analysis combined with the results from Sections 3-4, allows us to write the asymptotic zero distribution of diverse families of real-rooted hypergeometric polynomials in terms of free convolution of explicit distributions (Marchenko-Pastur, reciprocal Marchenko-Pastur, and Free Beta) enumerated above. We present some examples without going into explicit calculations, starting with the sequences of polynomials
\[p:=\mathcal{H}_{n}\Big{[}\begin{matrix}b\\ a\end{matrix}\Big{]},\quad\text{with }\boldsymbol{a}=\left(a_{1},\ldots,a_{i} \right),\;\boldsymbol{b}=\left(b_{1},\ldots,b_{j}\right). \tag{86}\]
* If \(b_{1},\ldots,b_{j}>1\) and \(a_{1},\ldots,a_{i}<0\), then the sequence (86) is weakly converging. By Theorem 4.6, its asymptotic zero distribution can be written as \[\nu(p)=\mu_{\operatorname{RMP}a_{1}}\boxtimes\cdots\boxtimes\mu_{\operatorname {RMP}a_{i}}\boxtimes\mu_{\operatorname{MP}b_{1}}\boxtimes\cdots\boxtimes\mu _{\operatorname{MP}b_{j}}.\]
* If \(j\geq i\), \(b_{1},\ldots,b_{j}>0\), and \(a_{1},\ldots,a_{i}\in\mathbb{R}\) such that \(a_{s}\geq b_{s}+1\) for \(s=1,\ldots,i\), then the sequence (86) is weakly converging. By Theorem 4.7, its asymptotic zero distribution can be written as \[\nu(p)=f\beta(b_{1},a_{1}-b_{1})\boxtimes\cdots\boxtimes f\beta(b_{i},a_{i}-b _{i})\boxtimes\mu_{\operatorname{MP}b_{i+1}}\boxtimes\cdots\boxtimes\mu_{ \operatorname{MP}b_{j}}.\]
Other examples of asymptotic distributions can be obtained from the multiplicative convolution discussed at the end of Section 5.2:
* For \(a_{1},a_{2},b\in\mathbb{R}\) that satisfy the following conditions \(a_{1},a_{2}>1\), \(b>a_{1}+1\), \(b>a_{2}+1\), and \(a_{1}+a_{2}-b>1\), we have the following relation between real rooted Laguerre and Jacobi polynomials:
In the limit, this translates into a relation between the Marchenko-Pastur distribution and the free beta distribution: \[\mu_{\mathrm{MP}a_{1}+a_{2}-b}\boxplus(f\beta(b-a_{1},a_{1})\boxtimes\mu_{ \mathrm{MP}b-a_{2}})=f\beta(a_{1},b-a_{1})\boxtimes\mu_{\mathrm{MP}a_{2}}.\]
* For \(a,b\in\mathbb{R}\) such that \(a,b>1\), \(a>b+1\), the following identity in terms of the real-rooted Laguerre and Jacobi polynomials holds \[\left(\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{a}{a+b-\frac{1}{2n}}\right] \boxtimes_{n}\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{b}{-}\right]\right)^ {(\boxplus_{n})2}=\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{2b}{a+b-\frac{1} {2n}}\right]\boxtimes_{n}\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{2a}{2a+ b}\right]\boxtimes_{n}\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{a+b}{-}\right]\!.\] In the limit it becomes \[(f\beta(a,b)\boxtimes\mu_{\mathrm{MP}b})^{\boxplus 2}=f\beta(2b,a-b)\boxtimes f\beta(2a,2 b)\boxtimes\mu_{\mathrm{MP}a+b}.\]
* For \(a<0\) and \(b<a-1\), we have the following relation between real-rooted Bessel and Jacobi polynomials: \[\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{-}{2a}\right]\boxplus_{n}\mathcal{ H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{-}{2b}\right]=\mathcal{H}_{n}\!\left[ \genfrac{}{}{0.0pt}{}{a+b}{2a}\right]\boxtimes_{n}\mathcal{H}_{n}\!\left[ \genfrac{}{}{0.0pt}{}{a+b-\frac{1}{2n}}{2a+2b-\frac{1}{n}}\right]\boxtimes_{ n}\mathcal{H}_{n}\!\left[\genfrac{}{}{0.0pt}{}{-}{2b}\right]\!.\]
Leaving \(n\to\infty\) this yields the following identity:
\[\mu_{\mathrm{RMP}a}\boxplus\mu_{\mathrm{RMP}b}=\mu_{a+b,2a}\boxtimes\mu_{a+b, 2a+2b}\boxtimes\mu_{\mathrm{RMP}2b},\]
where \(\mu_{\mathrm{RMP}c}\) stands for the reciprocal Marchenko-Pastur distribution of parameter \(c\) and \(\mu_{c,d}\) is the distribution obtained in Equation (85).
## Acknowledgments
The first author was partially supported by Simons Foundation Collaboration Grants for Mathematicians (grant 710499). He also acknowledges the support of the project PID2021-124472NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", as well as the support of of Junta de Andalucia (research group FQM-229 and Instituto Interuniversitario Carlos I de Fisica Teorica y Computacional).
The third author was partially supported by the Simons Foundation via Michael Anshelevich's grant. He expresses his gratitude for the warm hospitality and stimulating atmosphere at Baylor University.
This work has greatly benefited from our discussions with multiple colleagues: Octavio Arizlemendi, Kathy Driver, Katsunori Fujie, Jorge Garza-Vargas, Kerstin Jordaan, Dmitry Karp, Vladimir Kostov, Ana Loureiro, Boris Shapiro, Yuki Ueda, Wadim Zudilin, to mention a few (this list is incomplete). The discussions that originated this project started at a Brazos Analysis Seminar, and we are grateful to the organizers for giving us this opportunity.
|
2307.16675 | Poly-MOT: A Polyhedral Framework For 3D Multi-Object Tracking | 3D Multi-object tracking (MOT) empowers mobile robots to accomplish
well-informed motion planning and navigation tasks by providing motion
trajectories of surrounding objects. However, existing 3D MOT methods typically
employ a single similarity metric and physical model to perform data
association and state estimation for all objects. With large-scale modern
datasets and real scenes, there are a variety of object categories that
commonly exhibit distinctive geometric properties and motion patterns. In this
way, such distinctions would enable various object categories to behave
differently under the same standard, resulting in erroneous matches between
trajectories and detections, and jeopardizing the reliability of downstream
tasks (navigation, etc.). Towards this end, we propose Poly-MOT, an efficient
3D MOT method based on the Tracking-By-Detection framework that enables the
tracker to choose the most appropriate tracking criteria for each object
category. Specifically, Poly-MOT leverages different motion models for various
object categories to characterize distinct types of motion accurately. We also
introduce the constraint of the rigid structure of objects into a specific
motion model to accurately describe the highly nonlinear motion of the object.
Additionally, we introduce a two-stage data association strategy to ensure that
objects can find the optimal similarity metric from three custom metrics for
their categories and reduce missing matches. On the NuScenes dataset, our
proposed method achieves state-of-the-art performance with 75.4\% AMOTA. The
code is available at https://github.com/lixiaoyu2000/Poly-MOT | Xiaoyu Li, Tao Xie, Dedong Liu, Jinghan Gao, Kun Dai, Zhiqiang Jiang, Lijun Zhao, Ke Wang | 2023-07-31T13:51:24Z | http://arxiv.org/abs/2307.16675v1 | # Poly-MOT: A Polyhedral Framework For 3D Multi-Object Tracking
###### Abstract
3D Multi-object tracking (MOT) empowers mobile robots to accomplish well-informed motion planning and navigation tasks by providing motion trajectories of surrounding objects. However, existing 3D MOT methods typically employ a single similarity metric and physical model to perform data association and state estimation for all objects. With large-scale modern datasets and real scenes, there are a variety of object categories that commonly exhibit distinctive geometric properties and motion patterns. In this way, such distinctions would enable various object categories to behave differently under the same standard, resulting in erroneous matches between trajectories and detections, and jeopardizing the reliability of downstream tasks (navigation, etc.). Towards this end, we propose Poly-MOT, an efficient 3D MOT method based on the Tracking-By-Detection framework that enables the tracker to choose the most appropriate tracking criteria for each object category. Specifically, Poly-MOT leverages different motion models for various object categories to characterize distinct types of motion accurately. We also introduce the constraint of the rigid structure of objects into a specific motion model to accurately describe the highly nonlinear motion of the object. Additionally, we introduce a two-stage data association strategy to ensure that objects can find the optimal similarity metric from three custom metrics for their categories and reduce missing matches. On the NuScenes dataset, our proposed method achieves state-of-the-art performance with 75.4% AMOTA. The code is available at [https://github.com/likiaoyu2009/Poly-MOT](https://github.com/likiaoyu2009/Poly-MOT).
## I Introduction
Multi-Object Tracking (MOT) is a critical component of environment perception systems in autonomous robots. It provides valuable information on the motion of tracked objects over time, enabling robots to predict the future motion patterns of surrounding objects effectively. Compared with 2D MOT [1, 2, 25], 3D MOT [3] offers more explicit and convenient spatial information about objects, culminating in more reliable and accurate tracking. Typically, current 3D MOT techniques can be divided into "Tracking-By-Detection" (TBD) [4, 5] and "Joint Detection and Tracking" (JDT) [6, 7, 8]. Due to the data-driven nature of JDT, it is generally less precise and robust than TBD, and consequently, the majority of 3D MOT approaches adhere to the TBD architecture.
In the most previous works [3, 4, 9], KITTI [10] and MOT15 [11] are employed to evaluate algorithm performance. Under these platforms, trackers are usually required to track only a single category of objects. Therefore, these works simply use a single linear motion model and similarity metric for state prediction and construct the cost matrix between trajectories and detections. However, with the advent of large-scale datasets such as NuScenes [12] and changeable real scenes, a long-ignored yet fundamental fact must be carefully considered: _there are multiple object categories in real scenes, and objects of different categories often exhibit various geometric features and motion patterns._ A single prediction and matching criterion is unsuitable for distinct object categories, which distorts the affinities between trajectories and detections, resulting in false matches and compromising the stability of subsequent tasks (navigation, prediction, etc.).
To the best of our knowledge, only a few recent works [4, 5] have optimized the MOT problem in multi-category settings. These methods prevent correlation between different categories by masking [5] or removing [4] invalid costs in the cost matrix calculated under the same standard. However, these methods can not tackle the issue of accurate tracking in multi-category settings fundamentally due to the _inaccuracy of the cost matrix_ induced by _unreliable prediction_ and _irrationality metric_. On the one hand1, as shown in Fig. 1, due to the distinct and nonlinear motion patterns of different object categories, utilizing the same linear motion model for state prediction will result in an unreliable estimation of motion. Moreover, due to variances in geometric features,
Fig. 1: **Trajectory state estimation of our proposed (_CTRA and bicycle Model_) and existing (_CA Model_) motion model on _Car_ and _Motorcycle_. For trackers equipped with different motion models, we truncate the tracking process from the same timestamp, which means that the trajectory can receive detection updates before this timestamp. In contrast, the trajectory can only use historical information to predict the future state after this timestamp. (a) _CTRA Model_ exhibits a significantly higher prediction accuracy for _Car_ than other models. This is particularly useful for recovering historical mismatch trajectories when objects are occluded, or detectors miss detections. (b) _Bicycle Model_ is more suitable for _Motorcycle_ due to different object categories exhibiting distinct motion patterns.**
different object categories are susceptible to various similarity metrics and correlation thresholds. As presented in Table I, we conduct a simple and intuitive experiment confirming that a single similarity metric cannot perform well in all object categories. Thus, precise yet reliable motion prediction and affinity calculation for various object categories is a vital step toward deploying 3D MOT methods in real scenes.
To this end, we introduce Poly-MOT, a polyhedral framework for 3D MOT under multi object category scenes following the TBD framework. Specifically, to ensure accurate motion prediction in such scenes, we introduce geometry constraints to the motion model and establish multiple motion models (_CTRA_ and _Bicycle_ model) based on the distinct features of each object category. For accurate object matching, we design three similarity metrics and then introduce categorical data association, in which the tracker selects the optimal similarity metric for different categories to achieve accurate affinity calculation. We also employ a technique that combines Non-Maximum Suppression (NMS) and Score Filter (SF) to preprocess detections at each frame to eliminate the gap between detection task and tracking task. Finally, we additionally employ a combined count-based and confidence-based strategy so that Poly-MOT can handle the lifecycle of trajectories with various matching statuses.
Poly-MOT is learning-free and not data-driven, using only detection results as input and achieving state-of-the-art performance and manageable real-time performance without substantial computational resources, as shown in Tables II and III. Thanks to the TBD framework, Poly-MOT achieves stable tracking performance with multiple 3D detectors (CenterPoint [8], etc.). **With 75.4% AMOTA, our technique achieves state-of-the-art performance on the NuScenes test set.** We anticipate that Poly-MOT can provide an effective 3D MOT baseline algorithm for the community. The primary contributions of this work are as follows:
* We propose Poly-MOT, an efficient 3D MOT approach for multiple object category scenes based on the TBD framework.
* We introduce geometry constraints to the motion model and establish multiple motion models (_CTRA_ and _Bicycle_ model) according to the distinct features of different object categories, enabling capture motion pattern differences between categories.
* We design three custom similarity metrics and a novel two-stage data association strategy to ensure that various objects can identify the optimal similarity metric for their categories, thus reducing missing matches.
## II Related Work
**3D Multi-Object Tracking**. Weng [3] pioneers the application of the TBD framework to the 3D MOT method, using Linear Kalman Filter and 3D IOU to build an advance and fast 3D MOT system. The TBD framework divides the tracker into four steps: (1) Receiving and preprocessing the 3D detection, (2) Predicting motion for active trajectories, (3) Correlating and matching trajectory with detection, (4) Managing the lifecycle of all state trajectories. Simple-Track [9] applies simple techniques to analyze and improve each of these four parts, resulting in impressive tracking performance. EagerMOT [4] takes the lead in employing result-level fusion to integrate 2D and 3D detections, improving the robustness of tracker to false negatives from different sensor modalities. In addition to TBD, the JDT framework processes tracking and detection tasks in a single Neural Network(NN). Feature alignment between multiple modalities is an important yet difficult point of JDT.
**Data Association in 3D MOT.** Data association is the core of MOT, as it is accomplished by calculating a cost matrix between trajectories and detections with a similarity metric and then applying a matching algorithm to obtain the final associations. Geometry-based and appearance-based are two common types of similarity metrics. The former leverages location and motion information to boost the performance under occlusion, and common metrics include 3D IOU [3], 3D GIOU [5, 9]. Appearance-based metrics, which utilize appearance information, can achieve more robust results in cases of large distance movement or low frame rate, as demonstrated in several studies [5, 7, 13]. Multi-modal 3D MOT methods typically use multi-level correlation [4, 5] (applying multiple metrics to match objects multiple times) to fuse different modalities and improve performance. Poly-MOT demonstrates the benefits of multi-level correlation in reducing FN matches in LiDAR-only methods. Hungarian algorithm [3, 5] and greedy algorithm [8] are commonly used to solve the cost matrix. A concern is that existing methods use a single similarity metric for all object categories, despite the differences in geometric and appearance features among them. In contrast, Poly-MOT enables the tracker to select the optimal metric for each category based on its characteristics.
**Motion module in 3D MOT.** The motion module predicts the state of active trajectories, maintaining temporal consistency with detection. Motion prediction techniques can be divided into learning-based and filter-based methods. The former usually uses NN to predict the inter-frame displacement. CenterPoint [8] uses a center-based detector to output 3D detections and predicts the displacement of objects between frames by adding a regression branch. Filter-based methods use real-world physical models for state transitions, exhibit better robustness and real-time performance, and are widely adopted by most methods. Kalman Filter is a widely used method. Most Filter-based methods typically use _CA_[5] or _Constant Velocity (CV)_[3, 8, 9] model as the motion model. However, these models assume that the movements
of objects on each coordinate axis are independent, ignoring nonlinear motion patterns constrained by geometry and differences in motion patterns across categories. Therefore, to ensure accurate prediction in multi-category scenes, we introduce geometry constraints and establish multiple models based on distinct features of each category.
## III Method
Poly-MOT can be divided into four parts: the pre-processing module, multi-category trajectory motion module, multi-category data association module, and trajectory management module, as shown in Fig. 2.
### _3D Detector and Pre-processing Module_
Existing 3D detectors [8, 19, 27] generate numerous low-confidence bounding boxes to ensure high recall, but applying these detections directly to update trajectories can result in severe ID switches (IDS). To tackle this issue, raw detections \(D^{\prime}_{t}\) must be preprocessed to reduce false-positive matches. We apply Non-Maximum Suppression (NMS) to \(D^{\prime}_{t}\) at each frame to remove bboxes with high similarity, improving precision without significant loss of recall. Nevertheless, each frame of the large-scale dataset (Waymo [14], NuScenes [12], etc.) and real scenes usually contains a large number of objects while the number of \(D^{\prime}_{t}\) is significant. Directly applying NMS to \(D^{\prime}_{t}\) would lead to substantial computational overhead, as illustrated in Table V. Before NMS, we apply a filtering process called Score Filter (SF) to remove detections \(D^{\prime}_{t}\) with confidence scores less than \(\theta_{SF}\). SF can efficiently remove apparent false-positive detections, improving the inference speed of the algorithm. After preprocessing, we obtain \(D_{t}\), which includes the center of geometry position \((x,y,z)\), 3D size (width, length, height) \((w,l,h)\), heading angle \(\theta\), and velocity \((v_{x},v_{y})\) on the ground plane. Note that whether velocity information is included or not depends on the dataset.
### _Multi-Category Trajectory Motion Module_
Most previous methods [5, 9] employ a uniform _CA_ or _CV_ model to predict the trajectories of all objects, whereas they fail to capture the highly nonlinear motion features of objects and ignore the differences in motion patterns across categories. To address this issue, we propose Multi-Category Trajectory Motion Module that utilizes different motion models (_CTRA Model_ and _Bicycle Model_) for various object categories to characterize distinct types of motion accurately. In addition, we also introduce the constraint of the rigid structure of objects into a specific model to accurately describe the highly nonlinear motion of the object. Notably, our motion models are formulated in the East(x)- North(y)-Up(z) coordinate system, which follows the right-hand rule.
**CTRA Model.** For the _CTRA model_, the turn rate \(\omega\) and acceleration \(a\) of the object are considered constant. As shown in Fig. 3 (a), the heading angle and motion pattern of objects are tightly coupled in the _CTRA model_, which means the directions of the heading angle \(\theta\), velocity \(v\), and acceleration \(a\) of the object are on the same straight line. _CTRA model_ is suitable for _car-like_ objects and \(pedestrian\). We formulate the state of an object trajectory as a 10-dimensional vector \(T^{CTRA}=[x,y,z,v,a,\theta,\omega,w,l,h]\) in the _CTRA model_, where \((x,y,z)\) represent the location of the geometric center of objects in the 3D space, \((w,l,h)\) represent the 3D size of objects.
**Bicycle Model.** For the _Bicycle model_, it maintains the rigid structure of objects and enables the velocity direction and heading angle of objects to vary, rendering it suitable for objects that behave like bicycles, as illustrated in Fig. 3 (b). Meanwhile, we assume that the steering angle and velocity of the object remain constant. The state of the trajectory is also represented by a 10-dimensional vector \(T^{BIC}=[x^{\prime},y^{\prime},z,v,a,\theta,\delta,w,l,h]\), where \((x^{\prime},y^{\prime})\) represents the location of the gravity center of the object on the ground, \(\delta\) represents the steering angle of the object, the
Fig. 2: **The Pipeline Of Our Proposed Method At Frame 1.** (I) Previous active trajectory \(T_{t-1}\) is divided into \(T^{CTRA}_{t-1}\) and \(T^{BIC}_{t-1}\) according to the different motion patterns. State predictions for \(T^{CTRA}_{t-1}\) and \(T^{BIC}_{t-1}\) are then made based on the distinct and nonlinear motion models using the EKF. (II) Raw detections output by 3D detector are subjected to NMS and SF to reduce false positives to obtain \(D_{t}\). (III) The prediction states \(T^{CTRA}_{t,t-1}\), \(T^{BIC}_{t,t-1}\), and \(D_{t}\) are input to the Class Filter to classify the category. The first association is implemented within each category using the optimal similarity metric and a category-specific threshold. For unmatched trajectories \(T^{u}_{t-1}\) and unmatched detections \(D^{1u}_{t}\), the second association is implemented with a distinct metric than before and a strict threshold. The final matched pairs \(DT^{m}_{t}\) are used to update the corresponding trajectory. (IV) \(D^{u}_{t}\) are initialized as new active trajectories. Part of \(T^{u}_{t-1}\) are discarded based on the _count-based_ strategy, while others are added to the active trajectories again after the confidence score decays. Still active trajectories will be output to the result file. Eventually, all active trajectories \(T_{t}\) will be passed to the next frame \(t+1\).
remaining variables have the same meaning as the variables in \(T^{CTRA}\).
**Model Establishment and State Prediction.** Due to the nonlinear property of the motion models, we leverage the Extended Kalman Filter (EKF) to estimate the trajectory state. The prediction process can be described by:
\[T_{t,t-1}=f(T_{t-1}),\ P_{t,t-1}=F_{t}P_{t-1}F_{t}^{T}+Q, \tag{1}\]
where \(T_{t-1}\) denotes \(T^{CTRA}\) or \(T^{BIC}\), depending on the motion model of objects. \(P_{t-1}\) is the covariance matrix at the previous moment \(t-1\). \(T_{t,t-1}\) is the predict state of \(T_{t-1}\) at the current moment \(t\). \(Q\) is the process noise, which has an artificially set value. \(f(\cdot)\) is the state transition function that is established from the motion model, reflecting the changes of all state variables of the trajectory between two consecutive frames. \(F_{t}\) is the Jacobian matrix obtained through the partial derivative of \(f(\cdot)\) with respect to \(T_{t-1}\).
During the state transition of all motion models, the variables \(a,z,w,l,h\) are assumed to remain constant.
The object location transition process as components of \(f(\cdot)\) can be formulated as:
\[\hat{x}_{t,t-1}=\hat{x}_{t-1}+\int_{(t-1)\sigma}^{t\sigma}v(\tau)cos(\eta( \tau))d\tau, \tag{2}\]
\[\hat{y}_{t,t-1}=\hat{y}_{t-1}+\int_{(t-1)\sigma}^{t\sigma}v(\tau)sin(\eta( \tau))d\tau, \tag{3}\]
where \(\sigma\) is the interval between two adjacent frames of the LiDAR scan. Depending on the choice of motion model, the geometric center \((x,y)\) or gravity center \((x^{\prime},y^{\prime})\) of the object can be represented uniformly by \((\hat{x},\hat{y})\). To better illustrate the state transition process of variables over time in each motion model, we introduce the time interval \(\Delta t\), which is defined as follows:
\[\Delta t=\tau-(t-1)\sigma. \tag{4}\]
\(\Delta t\) is the distance between the integral variable \(\tau\) and the integral lower limit \((t-1)\sigma\) during the integration process. A tricky problem is that directly setting each variable in (2) and (3) to be time-varying would result in non-integrable outcomes. A key insight is to leverage various motion models to simplify the complex nonlinear motion of objects to varying degrees, while accurately capturing the distinct motion patterns of different object categories. The velocity transition function \(v(\tau)\) is formulated as:
\[v(\tau)=\begin{cases}v_{t-1}+a\Delta t&if\ \ T=T^{CTRA}\\ v_{t-1}&if\ \ T=T^{BIC}\end{cases}, \tag{5}\]
Fig. 3 illustrates the angle \(\eta\) between the velocity of the object and the _X-axis_ of the coordinate system, and its state transition process is described by:
\[\eta(\tau)=\begin{cases}\theta(\tau)&if\ \ T=T^{CTRA}\\ \theta(\tau)+\beta(\tau)&if\ \ T=T^{BIC}\end{cases}, \tag{6}\]
where \(\beta\) represents the slip angle between the velocity and heading of the object, which can be calculated from assumed constant steering angle \(\delta\) according to:
\[\beta(\tau)=tan^{-1}(\frac{l_{r}}{\gamma l}tan(\delta(\tau))), \tag{7}\]
where \(\gamma\) is the ratio of the wheelbase to object length \(l\). \(l_{r}\) denotes the distance between the gravity center and the rear tire of the object, which is artificially set to 0.4-0.5 times the wheelbase. (7) is the embodiment of retaining the rigid structure of the object, and it also constitutes the major distinction between _CTRA Model_ and _Bicycle Model_. The reason for introducing \(\beta\) is that the instantaneous center of the object in the _Bicycle Model_ is not on the body of the object. In addition, incorporating \(l\) in (7) signifies a deeper utilization of object observation and state information, enhancing motion accuracy. However, a crucial observation that follows is that _Bicycle Model_ is susceptible to erroneous predictions caused by incorrect object structure information, thereby rendering it unsuitable for object categories where detectors tend to produce inaccurate detections.
\(\theta(\tau)\) represents the heading angle transition function of an object, which is expressed uniformly in all models as:
\[\theta(\tau)=\theta_{t-1}+\omega(\tau)\Delta t. \tag{8}\]
\(\omega(\tau)\) in (8) describes the turn rate transition function, which is formulated as:
\[\omega(\tau)=\begin{cases}\omega_{t-1}&if\ T=T^{CTRA}\\ \frac{v(\tau)sin(\beta(\tau))}{l_{r}}&if\ T=T^{BIC}\end{cases}, \tag{9}\]
which is actually constant in all motion models. (2)-(9) are the complete expression of state transition function \(f(\cdot)\).
### _Multi-Category Data Repetition Association Module_
In the data association process, a crucial but frequently disregarded fact exists: _Different object categories are sensitive to various similarity metrics and association thresholds as a result of their unique geometric characteristics_. However, most existing 3D MOT methods [3, 9] leverage a single tracking standard for each category in multi-category scenarios, resulting in inferior tracking performance due to the lack
Fig. 3: **Representation of _CTRA Model_ and _Bicycle Model_ in 2D and _3D space._**
of category-specific pertinence. To address these issues, we introduce Multi-Category Data Repetition Association Module that enables the tracker to choose the optimal similarity metric from a set of custom multiple metrics for each object category, thereby improving the accuracy and robustness of the MOT system. In addition, a two-stage association strategy based on different similarity metrics is applied to the module to reduce false negative matches.
**First Association.** After obtaining \(T_{t,t-1}\) and \(D_{t}\), affinity between \(T_{t,t-1}\) and \(D_{t}\) need to be calculated at each frame \(t\). We first design three robust similarity metrics for distinct object categories to construct the first motion cost matrix \(C_{t}^{1}\in R^{N_{cls}\times N_{data,t}\times N_{tra,t-1}}\) between \(D_{t}\) and \(T_{t,t-1}\). \(N_{tra,t-1}\) and \(N_{det,t}\) represent the number of \(T_{t,t-1}\) and \(D_{t}\), respectively. \(N_{cls}\) is the number of categories in the dataset. We propose two similarity metrics (11), (12), (13) by the first time. In addition, we introduce a rotation angle penalty factor in a specific metric to avoid false-positive associations in the opposite direction. These three similarity metrics, including 3D Generalized Intersection over Union (\(gIo_{3d}\)), BEV Generalized Intersection over Union (\(gIo_{bv}\)), and Euclidean Distance (\(d_{eucl}\)), are described as follows:
\[gIoU_{3d}(B_{i},B_{j})=IoU_{3d}(B_{i},B_{j})+\frac{V(B_{i}\cup B_{j})}{V_{3dhull }(B_{i},B_{j})}-1, \tag{10}\]
\[gIoU_{bev}(B_{i},B_{j})=IoU_{bev}(B_{i},B_{j})+\frac{A(B_{i}\cup B_{j})}{A_{ bevhull}(B_{i},B_{j})}-1, \tag{11}\]
\[d_{eucl}(B_{i},B_{j})=d(B_{i},B_{j})*(2-cos|\Delta\theta|), \tag{12}\]
\[d(B_{i},B_{j})=\gamma_{geo}||B_{i}^{wlh}-B_{j}^{wth}||_{2}+\gamma_{dis}||B_{i} ^{xyz}-B_{j}^{xyz}||_{2}, \tag{13}\]
where \(B\) is formulated as a high-dimensional vector representing the states of \(T_{t,t-1}\) or \(D_{t}\), which contain the 3D size and 3D center position. \(IoU_{3d}\) and \(IoU_{bev}\) are Intersection over Union in the 3D and bird's-eye view (BEV) representation space. \(V(B_{i}\cup B_{j})\) and \(A(B_{i}\cup B_{j})\) are the union volume and area of \(B_{i}\) and \(B_{j}\). \(V_{3dhull}(B_{i},B_{j})\) and \(A_{bevhull}(B_{i},B_{j})\) are the convex hulls computed by \(B_{i}\) and \(B_{j}\) in the 3D and BEV representation space. \(B^{xyz}\) and \(B^{wlh}\) are the vectors containing the 3D center position and 3D size of \(B\). \(\gamma_{geo}\) and \(\gamma_{dis}\) are geometric and spatial distance ratios to the overall distance. \(\Delta\theta\in[0,\pi]\) is the heading angle difference between \(B_{i}\) and \(B_{j}\). \(||\cdot||_{2}\) is the 2-norm function.
For each category, we obtain the cost matrix \(C_{t,cls}^{1}\in R^{N_{det,t}\times N_{tra,t-1}}\) by utilizing its optimal-performing similarity metric to compute the affinity of this category between \(D_{t}^{cls}\) and \(T_{t,t-1}^{cls}\)2. After aggregating \(C_{t,cls}^{1}\), we end up with \(C_{t}^{1}\). Hungarian algorithm [15] is employed to match \(D_{t}\) and \(T_{t,t-1}\) based on \(C_{t}^{1}\). To account for the geometric size differences between objects of different categories, we employ different association thresholds \(\theta_{fm}=\left(\theta_{fm}^{1},\cdots,\theta_{fm}^{N_{cls}}\right)\) to constrain the matching process. After matching, we obtain three classes of matching instances, including matched pairs \(DT_{t}^{1m}=\left\{\left(D_{t}^{i},T_{t,t-1}^{j}\right),\cdots\right\}\), unmatched detections \(D_{t}^{1u}\subseteq D_{t}\), and unmatched trajectories \(T_{t-1}^{1u}\subseteq T_{t-1}\). \(D_{t}^{1u}\) and \(T_{t-1}^{1u}\) will be further associated in the second stage.
Footnote 2: Costs between different categories are filled with invalid values.
**Second Association.** To reduce false-negative associations, we use \(gIoU_{bev}\) for objects of all categories3 to construct the cost matrix \(C_{t}^{2}\in R^{N_{update,t}\times N_{untra,t-1}}\) between \(D_{t}^{1u}\) and \(T_{t-1}^{1u}\) in the second stage3. \(N_{umdet,t}\) and \(N_{umtra,t-1}^{1u}\) are the number of \(D_{t}^{1u}\) and \(T_{t-1}^{1u}\), respectively. We use the Hungarian Algorithm with a strict threshold \(\theta_{sm}\) based on the cost matrix \(C_{t}^{2}\) to match \(D_{t}^{1u}\) and \(T_{t-1}^{1u}\). After aggregating the matching results of the two-stage association, we obtain the final matched pairs \(DT_{t}^{m}\), unmatched detections \(D_{t}^{u}\subseteq D_{t}\), and unmatched trajectories \(T_{t-1}^{u}\subseteq T_{t-1}\).
Footnote 3: If an object utilizes \(gIoU_{bev}\) in the first association, then \(gIoU_{3d}\) will be applied in the second stage, as the core of multi-stage association is to use different metrics to perform repeated associations.
### _Trajectory Management Module_
Following most 3D MOT methods [3, 4], the trajectory management module is also responsible for four key functions, which include trajectory updating, trajectory initialization, trajectory death, and output file organization.
**Trajectory Update.** We utilize the detection in \(DT_{t}^{m}\) and the standard update process of EKF to update the state of the corresponding trajectory and covariance matrix. It is important to note that in the state-measurement transition
function \(h(\cdot)\) of _Bicycle model_, the geometric center of objects should be calculated based on the gravitational center.
**Trajectory Initialization.** We employ the _count-based_ approach to initialize \(D_{t}^{u}\) as new tentative trajectories \(T_{ten,t}\). If the \(j\)-th \(T_{ten,t}\) is continuously hit in the next \(hit_{min}\) frames, \(T_{ten,t}^{j}\) will change to an activate trajectory and be merged into still active trajectories.
**Trajectory Death.** We adopt the _count-based_ scheme to discard \(T_{t-1}^{u}\). Part of the trajectory in \(T_{t-1}^{u}\) will be discarded if it has not been updated in the last _max-age_ frames. Trajectories that are not deleted are still considered active, but we penalize the confidence scores of these trajectories using \(\alpha_{pun}\) and the exponential function \(exp(\cdot)\).
**Result Output.** After obtaining all active trajectories \(T_{t}\) at the current frame \(t\), the updated trajectories (estimated motion state), newly initialized trajectories, and parts of the penalized trajectories are output to the result file. Note that, to reduce false-positive predictions, we only output \(N_{pun}\) frames of the penalized trajectories' predicted state to the result file, and also apply NMS with \(\theta_{nms}=0.08\) to all output trajectory states.
## IV Experiments
### _Datasets_
**NuScenes.** NuScenes [12] contains 850 training sequences and 150 test sequences, each comprising approximately 40 frames showcasing diverse scenarios such as rainy days and nights. The keyframes are sampled at a frequency of 2Hz, and annotation information is provided for each keyframe. However, this keyframe frequency poses a challenge for precise motion model prediction, leading to significant inter-frame displacement. The official evaluator utilizes AMOTA as the primary evaluation metric [3].
### _Implementation Details_
**NuScenes.** Our tracking method is implemented in Python under the Intel(r) 9940X without any GPU. Hyperparameters are chosen based on the best AMOTA identified in the validation set. We utilize \(\theta_{nms}=0.08\) for all categories and 3D detectors. \(\theta_{SF}\) is detector-specific. \(IoU_{bev}\) is used as the metric in NMS. During NMS process, objects of all categories are blended together. We employ _Bicycle model_ with \(\gamma=0.8\) for _(bicycle, motorcycle)_ and _CTRA model_ for the remaining categories. The similarity metric for _bus_ and _(bicycle, motorcycle, car, trailer, truck, pedestrian)_ are \(gIoU_{bev}\) and \(gIoU_{3d}\), respectively. We apply \(\theta_{fm}=(1.6,1.4,1.3,1.3,1.3,1.2,1.7)\) and \(\textit{max-age}=(10,20,10,15,10,20,10)\) for _bicycle, motorcycle, bus, car, trailer, truck, pedestrian_ and \(\theta_{sm}=1\) for all seven categories in the data association module. For trajectory management, we set \(hit_{min}=0\), \(\alpha_{pun}=0.05\), \(N_{pun}=1\).
### _Experimental Results_
#### Iv-C1 Run-time discussion
To solve the real-time challenge caused by extensive affinity calculations brought by a large number of objects, we first proposed the half-parallel4\(gIoU\) operator under the Python implementation. On the NuScenes, Poly-MOT can run at 3 FPS (Frame Per Second) on Intel 9940X, which has surpassed most advanced 3D MOT methods (SimpleTrack 0.51 FPS, Minkowski Tracker 1.7 FPS).
Footnote 4: Since convex hull and rotation IoU calculations are still serial.
#### Iv-C2 Comparative Evaluation
We compare Poly-MOT to published and peer-reviewed state-of-the-art methods on the test and validation sets of the NuScenes dataset.
**NuScenes Test Set.** Among all 3D MOT methods, Poly-MOT **ranks first** on the NuScenes tracking benchmark test set, i.e., 75.4% AMOTA, exceeding most 3D MOT methods. As shown in Table II, Poly-MOT achieves an impressively low IDS 292 while maintaining the highest AMOTA (75.4%) among all modal methods, which indicates that Poly-MOT is capable of achieving stable tracking without loss of recall. Without any image data as additional input, Poly-MOT still acquires state-of-the-art performance, surpassing the best-performing multi-modal tracker CAMO-MOT, which leverages a more superior integrated detector through [16, 17]. Additionally, Poly-MOT outperforms competing algorithms by a significant margin in the crucial category (_Car_). Compared to learning-based methods [5, 6, 8], Poly-MOT incurs minimal computational overhead and delivers a more impressive performance, highlighting the promising potential of integrating filter-based 3D MOT methods into practical robotic systems. Notably, the IDS of Poly-MOT is slightly inferior to that of OGR3MOT [22]. However, the FN/FP in Table II shows that Poly-MOT can offer the same robust continuous tracking capability without compromising recall.
**NuScenes Val Set.** As presented in Table III, Poly-MOT outperforms other trackers in terms of both higher AMOTA and lower IDS when adopting the same detector (CenterPoint [8]). Moreover, Poly-MOT yields an incredible tracking performance when assembled with a more strong LiDAR-only detector [19], i.e., 75.2% AMOTA, exceeding the best validation set accuracy reported by most methods.
#### Iv-C3 Ablation Studies
In this part, we conduct extensive ablation experiments to evaluate the individual performance of proposed modules in Poly-MOT. We select CenterPoint [8] as the 3D detector and employ _CA Model_ with Linear Kalman Filter to predict the trajectory state from the Origin State (OS). We leverage \(gIoU_{3d}\) and \(\theta\) set to 0.14 as the similarity metric and association threshold, respectively. A series of experiments are then performed on the NuScenes validation set using various module combinations.
**The effect of Pre-processing Module**. The significant gap between "Os" and "Os+Pre" in Table IV showcases the impact of leveraging Pre-processing Module on the overall performance. We can observe that "Os+Pre" provides a +4% AMOTA boost and a 93 IDS drop, resulting in a significant performance boost. The reason is that SF can filter out low-score bounding boxes while NMS can remove duplicate bounding boxes with high confidence, which makes the remaining bounding boxes have superior quality. In addition, using SF before NMS brings inference 40% reduction in pre-processing inference time while boosting AMOTA by 1.3% compared with only using NMS, as demonstrated in Table V.
**The effect of Multi-Category Trajectory Motion Module.** In Table IV, we demonstrate the impact of the Multi-Category Trajectory Motion Module. "Os+Pre+Mo+Ass" achieves an AMOTA improvement of +1.1% and an IDS decrease of 129 compared to "Os+Pre+Ass". Benefiting from improved trajectory estimation, we can apply stricter thresholds to filter FP (-2495) in complex scenes (objects are dense and numerous, detectors exhibit poor performance, etc.) to achieve more stable tracking (-129 IDS) without incurring a significant loss in recall (+1658 FN). In addition, an intriguing observation is that while "Os+Pre+Mo" yields a +0.5% AMOTA boost over "Os+Pre" alone, it also causes more ID switches (+69). The key insight is that the more accurate motion models change the bias distribution between predictions and ground truths for individual object categories, which makes a single metric and threshold unable to accurately capture inter-object affinities, thereby obtaining false matches and leading to IDS. Moreover, Table VI reveals that using an inappropriate motion model for objects would decrease tracking performance, underscoring the importance of carefully deciding the motion model for each category.
**The effect of Multi-Category Data Repetition Association Module.** As shown in Table IV, "Os+Pre+Mo+Ass" achieves a +1.2% AMOTA improvement and a -162 IDS reduction compared to "Os+Pre+Mo". This shows our proposed two-stage categorical association strategy can better capture the affinity between tracklet and detection of each category, enabling a more accurate matching relationship, improved tracking results and reduced FN matches.
### _Visualization_
We qualitatively compare our Poly-MOT (LiDAR-only version) and advanced multi-modal 3D MOT method CBMOT on the NuScenes val set. As shown in Fig. 4 (a), when the object moves intensely and quickly, CBMOT has ID switches (ID changes from _20_ to _247_), while the Poly-MOT can still achieve stable tracking. As shown in Fig. 4 (b), when objects are dense and have irregular movement, CBMOT not only has ID switches (ID changes from _37_ to _25_) but also fails to effectively suppress false-positive detection (ID: _231_ at Frame 12), while Poly-MOT still maintains stable tracking. The above comparison results show that Poly-MOT can alleviate the problem that LiDAR-only trackers cannot accurately track objects with large inter-frame displacements. In addition, Poly-MOT can also achieve stable tracking when the object suffers from occlusion.
## V Conclusions
In this work, we introduce Poly-MOT, a polyhedral framework for 3D MOT under multi object category scenarios following the TBD framework. Poly-MOT achieves accurate matches between tracklets and detections in multi-category scenarios by ensuring prediction reliability and metric rationality, including: (1) Two distinct and nonlinear motion models (_CTRA_ and _Bicycle_ Model) are established to represent the motion patterns of different object categories; (2) Three similarity metrics (\(gIoU_{3d}\), \(gIoU_{bew}\), \(d_{eucl}\)) are designed to calculate the affinity of different object categories. Besides, a two-stage association strategy and confidence-based pre-processing module are applied to the tracker to reduce FN matches and eliminate the gap between detection and tracking. Without requiring additional training and GPU, Poly-MOT achieves state-of-the-art tracking performance with 75.4% AMOTA on the NuScenes dataset while achieving
an impressive inference speed. Our method can be easily combined with multiple detectors, and we envision it serving as a general baseline for future 3D MOT methods.
|
2309.10964 | Revealing the conduction band and pseudovector potential in 2D moiré
semiconductors | Stacking monolayer semiconductors results in moir\'e patterns that host many
correlated and topological electronic phenomena, but measurements of the basic
electronic structure underpinning these phenomena are scarce. Here, we
investigate the properties of the conduction band in moir\'e heterobilayers
using submicron angle-resolved photoemission spectroscopy with electrostatic
gating, focusing on the example of WS2/WSe2. We find that at all twist angles
the conduction band edge is the K-point valley of the WS2, with a band gap of
1.58 +- 0.03 eV. By resolving the conduction band dispersion, we observe an
unexpectedly small effective mass of 0.15 +- 0.02 m_e. In addition, we observe
replicas of the conduction band displaced by reciprocal lattice vectors of the
moir\'e superlattice. We present arguments and evidence that the replicas are
due to modification of the conduction band states by the moir\'e potential
rather than to final-state diffraction. Interestingly, the replicas display an
intensity pattern with reduced, 3-fold symmetry, which we show implicates the
pseudo vector potential associated with in-plane strain in moir\'e band
formation. | Abigail J. Graham, Heonjoon Park, Paul V. Nguyen, James Nunn, Viktor Kandyba, Mattia Cattelan, Alessio Giampietri, Alexei Barinov, Kenji Watanabe, Takashi Taniguchi, Anton Andreev, Mark Rudner, Xiaodong Xu, Neil R. Wilson, David H. Cobden | 2023-09-19T23:29:03Z | http://arxiv.org/abs/2309.10964v1 | # Revealing the conduction band and pseudovector potential in 2D moire semiconductors
###### Abstract
Stacking monolayer semiconductors results in moire patterns that host many correlated and topological electronic phenomena, but measurements of the basic electronic structure underpinning these phenomena are scarce. Here, we investigate the properties of the conduction band in moire heterobilayers using submicron angle-resolved photoemission spectroscopy with electrostatic gating, focusing on the example of WS\({}_{2}\)/WSe\({}_{2}\). We find that at all twist angles the conduction band edge is the K-point valley of the WS\({}_{2}\), with a band gap of 1.58 \(\pm\) 0.03 eV. By resolving the conduction band dispersion, we observe an unexpectedly small effective mass of 0.15 \(\pm\) 0.02 \(m_{e}\). In addition, we observe replicas of the conduction band displaced by reciprocal lattice vectors of the moire superlattice. We present arguments and evidence that the replicas are due to modification of the conduction band states by the moire potential rather than to final-state diffraction. Interestingly, the replicas display an intensity pattern with reduced, 3-fold symmetry, which we show implicates the pseudo vector potential associated with in-plane strain in moire band formation.
## Introduction
The diverse ramifications of moire superstructures in two-dimensional (2D) van der Waals heterostructures are of great current interest. Most famously, stacks of graphene sheets with appropriate rotational misalignment between the layers exhibit moire superlattices that create nearly flat bands and lead to correlated insulating states, superconductivity, Chern insulators, and more[1, 2, 3, 4]. The existence of these graphene moire bands, and of correlation-induced spectral gaps within them, has been directly confirmed by submicron-scale angle-resolved photoemission spectroscopy[5, 6] (\(\upmu\)ARPES) and scanning tunneling microscopy[7, 8].
Artificial bilayers of two-dimensional (2D) semiconductors also exhibit moire superlattices[9, 10] leading to exciton arrays[11, 12, 13, 14], Mott insulating states and generalized Wigner crystals[15, 16], excitonic insulators[17, 18], tuneable magnetism[16, 19, 20], Kondo lattices[21], and very recently fractional quantum anomalous Hall states[22, 23, 24]. In the present work we use \(\upmu\)ARPES to probe the band structure of such 2D moire semiconductors. Although the conduction bands (CBs) play a crucial role in many of the above phenomena, ARPES detects only occupied states and thus is normally limited to probing the valence bands[25, 26, 27]. To overcome this limitation, we incorporate a metallic gate electrode under the
heterstructure, which allows electrostatic doping and thus detection of the CB edges [28] as well as changes in the bands resulting from doping [29; 30; 31] or electric field [32].
We focus on WS\({}_{2}\)/WSe\({}_{2}\) heterobilayers, where the moire potential is known to be strong at small twist angles [33; 16]. We determine the fundamental conduction band parameters, show that the band alignment of the separate monolayers is maintained independent of twist angle, determine the band gap and the CB effective mass, and observe perturbing effects of the moire potential on the CB. The latter manifest as multiple replicas of the original CB displaced by reciprocal lattice vectors of the moire superlattice. We consider the expected relative contributions of such moire potential-induced reconstruction of the CB states and of "final-state diffraction" of photoemitted electrons by the moire potential as they exit the material, concluding that the CB state reconstruction effect should be dominant at small twist angles. We notice that the replicas display a pronounced alternation of intensity when tracing them around the K-point, exhibiting only 3-fold rotational symmetry rather than the 6-fold symmetry one would expect for scattering of a free particle off a triangular or honeycomb lattice. For a model starting with circularly symmetric dispersion, as appropriate for low energies near the bottom of the WS\({}_{2}\) conduction band, we show that such an intensity pattern with reduced rotation symmetry cannot be produced within a model that incorporates a purely scalar moire potential. Our results thus reveal a significant influence of the moire pseudovector potential that is expected to be present as a result of strain.
_Samples and measurements._ Devices were fabricated by mechanical exfoliation, dry transfer, and electron-beam patterning of metal electrodes as in previous work [28]. A graphene top contact, pre-shaped into a comb-like pattern using atomic force microscope (AFM)-based electrochemical patterning [34], overlaps the heterobilayer which lies on a thin flake of insulating hexagonal boron nitride (hBN) over a graphite back gate, supported in turn on a SiO\({}_{2}\)/Si chip (see Fig. 1a and Methods). Each semiconductor heterobilayer was constructed by placing monolayer flakes with their straight edges subtending a target angle. The actual angles obtained were determined from the \(\mu\)ARPES spectra via identification of the constituent layers' valence band edges at their zone corners, but could be inferred from the moire period revealed by piezo-force microscopy [35] (PFM; see SI Sec. 1).
The lattice constants of relaxed monolayer WS\({}_{2}\) and WSe\({}_{2}\) are \(a_{\text{WSe}_{2}}=0.315\) nm and \(a_{\text{WSe}_{2}}=0.328\) nm respectively, and the lattice mismatch parameter is \(\delta=(a_{\text{WSe}_{2}}/a_{\text{WSe}_{2}})-1=0.041\). The moire lattice constant [36], \(a_{\text{m}}=a_{\text{WSe}_{2}}[2(1+\delta)(1-\cos\phi)+\delta^{2}]^{-1/2}\), has its largest value of \(a_{\text{WSe}_{2}}/\delta\approx 8\) nm [16] at \(\phi=0\). On device 1, photoluminescence measurements (SI Sec. 2) show enhancements in intensity at gate voltages corresponding to the integer filling of a moire unit cell of area consistent with the twist angle independently inferred by PFM (\(a_{\text{m}}\)\(\sim\)2.8 nm, \(\phi=6^{\circ}\)). Note, however, that we cannot tell whether the stacking is in the antiparallel (centrosymmetric) or parallel (polar) configuration from any of these measurements.
For \(\mu\)ARPES measurements, a device would be mounted on the temperature stage at \(\sim\)100 K with the top graphene connected to ground through a current amplifier and a voltage \(V_{\text{g}}\) applied to the back gate. At high \(V_{\text{g}}\) photoemission near the Fermi level \(E_{\text{F}}\) from the semiconductors can only obtained when the submicron beam spot (27 eV photon energy) is focused between the teeth of the graphene comb, as illustrated in Fig. 1b (see SI Sec. 3 for details).
**Results and Discussion**
_Valence bands._ Figure 1c is a sketch of the Brillouin zones for the heterobilayer in device 1. Figures 1d and e show energy-momentum slices measured at \(V_{\text{g}}=0\) along the high symmetry directions \(\mathbf{\Gamma}-\mathbf{K}_{\text{WSe}_{2}}\) and \(\mathbf{\Gamma}-\mathbf{K}_{\text{WSe}_{2}}\), respectively. As usual, we plot the energy relative to \(E_{\text{F}}\), i.e., \(E-E_{\text{F}}\) (Methods). The bands near the zone corner closely match the spin-split valence bands (VBS) of isolated WS\({}_{2}\) and WSe\({}_{2}\) monolayers (SI Sec. 4), implying weak hybridization far from \(\mathbf{\Gamma}\), as expected. The overlaid dotted
lines are fits to the upper WSe\({}_{2}\)-like band (red) and the upper and lower WS\({}_{2}\)-like bands (blue), yielding hole effective masses of \(0.47\pm.02\,m_{e}\), \(0.38\pm.01\,m_{e}\), and \(0.56\pm.01\,m_{e}\) respectively (\(m_{e}\) is the free electron mass). The WS\({}_{2}\) spin-orbit splitting is \(\Delta_{SO}^{WS_{2}}=0.44\pm 0.04\) eV, the same as in the monolayer[37, 38, 39]. The VB edge is the WSe\({}_{2}\)-like band at \(\mathbf{K}_{\text{WSe}_{2}}\), which is \(0.58\pm 0.04\) eV above the WS\({}_{2}\)-like band maximum at \(\mathbf{K}_{\text{WS}_{2}}\). These band parameters do not vary noticeably with twist angle (Sl Sec. 5). When the WS\({}_{2}\) is on top, the WS\({}_{2}\)-like bands near the zone boundary are more intense; this is explained by weak interlayer hybridization and the rapid fall-off of photoemission strength with depth. Indeed, when the WSe\({}_{2}\) is on top the converse is seen (see Sl Sec. 6). In contrast, near \(\mathbf{\Gamma}\) two bands with similar intensity are seen; this is explained by strong interlayer hybridization at \(\mathbf{\Gamma}\)[25].
_Band gap._ Figs. 1f and g show corresponding measurements made at a positive voltage \(V_{\text{g}}=+3\) V, which capacitively induces electron doping \(n_{\text{g}}=(6.4\pm 0.4)\times 10^{12}\) cm\({}^{-2}\) (Methods). Photoemission can now be seen from the CB edge near \(E_{\text{F}}\). Note that there is a broadening of all features relative to the \(V_{\text{g}}=0\) data which can be explained by the varying electrostatic potential over the beam spot associated with the in-plane current flow that is required to replenish the photoemitted charge. Strong CB emission is seen at \(\mathbf{K}_{\text{WS}_{2}}\) in Fig. 1d, while much weaker emission is seen at \(\mathbf{Q}_{\text{WS}_{2}}\) implying that the CB minimum at \(\mathbf{Q}_{\text{WS}_{2}}\) is close to but higher than the one at \(\mathbf{K}_{\text{WS}_{2}}\) (by \(\sim\)10-20 meV; see Methods). The absolute band gap at this doping is \(E_{\text{g}}=1.58\pm 0.03\) eV, while the intralayer gap between the WS\({}_{2}\)-like bands at \(\mathbf{K}_{\text{WS}_{2}}\) is \(2.04\pm 0.03\) eV, consistent with the gap of monolayer WS\({}_{2}\) measured at \(n_{\text{g}}=(1.0\pm 0.2)\times 10^{12}\) cm\({}^{-2}\) in prior work[28]. These CB parameters, like the VB ones mentioned above, did not vary detectably with twist angle (Sl Sec. 5).
Figure 1: **Valence and conduction bands in WS\({}_{2}\)/WSe\({}_{2}\):** moiré heterobilayer device 1, twist angle \(\mathbf{\phi}=6^{\circ}\). (a) Schematic of a device and the \(\mu\)ARPES measurement. (b) Optical image (left) and integrated photoemission maps at labeled \(V_{\text{g}}\) (right) of the same region of a device (see Sl Sec. 2). Drawn lines denote edges of WS\({}_{2}\) (blue), WSe\({}_{2}\) (red), graphene top contact (dashed white), and graphite back gate (black). At \(V_{\text{g}}=+2\) V, photoemission is only seen between the graphene comb teeth. Scale bars: 10 \(\mu\)m. (b) Schematics of the structure and Brillouin zones of a WSe\({}_{2}\)/WSe\({}_{2}\) bilayer with \(6^{\circ}\) twist. Blue and red indicate WS\({}_{2}\) and WSe\({}_{2}\), respectively. (d) and (e), Energy-momentum slices along \(\mathbf{\Gamma}-\mathbf{K}_{\text{WS}_{2}}\) and \(\mathbf{\Gamma}-\mathbf{K}_{\text{WSe}}\), respectively, at \(V_{\text{g}}=0\). (f) and (g), Similar measurements at \(V_{\text{g}}=+3\) V. Dotted lines in (d) and (e) are fits to the WS\({}_{2}\)-like bands (blue) and the WSe\({}_{2}\)-like bands (red) near the zone boundary and to the hybridized bands near \(\mathbf{\Gamma}\) (black). The same fits are overlaid and shifted vertically in (f) and (g) to best match the data. The intensity near \(E_{\text{F}}\) is plotted in logarithmic grayscale. (h) EDCs (red) at a series of fixed momenta equally spaced across the range of the dashed box in (f). Each trace is averaged over a 0.01 Å\({}^{-1}\) momentum interval and fitted with the product of a Gaussian and a Fermi function (blue lines). (i) Energy of peak photoemission intensity (solid red circles) and the conduction band energy (empty black circles) extracted from the data in (h). The blue and grey parabolas are least-squares fits.
_Conduction bands._ The gate doping achieved in device 1 was sufficient to determine the CB curvature, which has not been done before. Fig. 1h shows energy dispersion curves (EDCs) through the CB feature in the dashed box in the top right corner of Fig. 1f. The energy where the intensity is maximum, plotted as solid red circles in Fig. 1h, passes through a minimum at \(\mathbf{K}_{\text{WS}_{2}}\). The spin splitting of the CB is several times \(k_{\text{B}}T\) at 100 K [37, 40], so we assume the lower spin branch is mainly populated and derive its dispersion by fitting the EDC at each momentum to the product of a Fermi function (\(T=100\) K, \(E_{\text{F}}=0\)) and a Gaussian (width 160 meV) treating the Gaussian center \(E_{\text{c}}\) as a fitting parameter (see Methods ). The resulting \(E_{\text{c}}\) values are plotted as open circles in Fig. 1i. Fitting a parabola (black line) yields an effective mass \(m_{\text{e}^{*}}=0.15\pm 0.02\ m_{e}\). We note that this is substantially smaller than first-principles predictions for monolayer WS\({}_{2}\)[37, 41], which lie in the range \(0.24-0.27\ m_{e}\).
_Replicas._ In Fig. 1g, we discern an additional spot of emission near the Fermi energy, labeled \(\mathbf{X_{1}}\), that does not correspond to the band edge of either constituent monolayer. Figure 2 is a constant-energy map at \(E=E_{\text{F}}\) in which \(\mathbf{X_{1}}\)is seen as one of three satellite spots situated near the corners of a hexagon centered on \(\mathbf{K}_{\text{WS}_{2}}\). The two others are labelled \(\mathbf{X_{2}}\) and \(\mathbf{X_{3}}\). These spots appear at \(E_{\text{F}}\) simultaneously with the CB minimum at \(\mathbf{K}_{\text{WS}_{2}}\), as illustrated by the momentum slice passing through \(\mathbf{K}_{\text{WS}_{2}}\) and \(\mathbf{X_{2}}\) shown in the upper inset. To within uncertainty, they are displaced from \(\mathbf{K}_{\text{WS}_{2}}\)by moire reciprocal lattice vectors \(\mathbf{G_{m}}\). The latter are determined by the relation \(\mathbf{G_{m}}=\mathbf{G}_{\text{WSe}_{2}}-\mathbf{G}_{\text{WS}_{2}}\), where \(\mathbf{G}_{\text{WSe}_{2}}\) and \(\mathbf{G}_{\text{WS}_{2}}\) are reciprocal lattice vectors of the two layers, as illustrated in the lower inset. In this device, \(G_{\text{m}}=2\pi/a_{\text{m}}=2.5\) nm\({}^{\text{-1}}\). The corresponding value of \(a_{m}\) of 2.5 nm was confirmed by PFM imaging of the device. We deduce that the satellite spots are replicas of the CB minimum related to the moire pattern. Similar moire-related replicas of the VB have been reported in photoemission from WSe\({}_{2}\) under graphene[42] and WS\({}_{2}\) under graphene[43] and interpreted in terms of miniband formation.
Replicas of the CB were also seen in device 2 (\(\phi\sim 2^{\circ}\), \(G_{\text{m}}=0.9\) nm\({}^{\text{-1}}\)). Fig. 3a is an energy-momentum slice from the heterobilayer in device 2 along \(\mathbf{\Gamma}-\mathbf{K}_{\text{WS}_{2}}\) at \(V_{\text{g}}=+2.5\) V (\(n_{\text{g}}=(4.2\pm 0.4)\times 10^{12}\) cm\({}^{\text{-2}}\)), and Fig. 3b is a constant-energy map around \(\mathbf{K}_{\text{WS}_{2}}\) at \(E=E_{\text{F}}\). The CB feature here
Figure 2: Moiré replicas of the conduction band in device 1. Main: photoemission intensity (in logarithmic scale) at the Fermi energy for \(V_{\text{g}}=+3\) V. Observable replicas of the \(\mathbf{K}_{\text{WS}_{2}}\) conduction band are labelled \(\mathbf{X_{1}}\), \(\mathbf{X_{2}}\), and \(\mathbf{X_{3}}\). Lower inset: part of the Brillouin zones of the two layers showing construction of a moiré reciprocal lattice vector, \(\mathbf{G_{m}}\) and a “moiré star” (orange arrows) of 6-fold counterparts of \(\mathbf{G_{m}}\) centered on \(\mathbf{K}_{\text{WS}_{2}}\). Upper inset: energy-momentum slice along the line through \(\mathbf{K}_{\text{WS}}\), and \(\mathbf{X_{2}}\).
has three lobes that are consistent with partially resolved replicas of a central spot displaced by three moire reciprocal lattice vectors, one of which is constructed Fig. 3c. Notably, replicas were also seen in the spectrum of graphene overlapping the heterobilayer. Its Brillouin zone (rotated by 19\({}^{\circ}\) relative to the WS\({}_{2}\)) is also shown in Fig. 3c. Fig. 3d shows an energy-momentum slice through the graphene zone corner, \(\mathbf{K_{g}}\), and Fig. 3e shows corresponding constant-energy maps at the indicated energies. In addition to the ordinary Dirac cone centered at \(\mathbf{K_{g}}\) there is a set of replicas around it that form a slightly distorted triangular array. The same three moire vectors match the heterobilayer CB replicas in Fig. 3b and the more intense graphene replicas in Fig. 3e, implying that all are related to the WS\({}_{2}\)/WSe\({}_{2}\) moire pattern. Similar patterns were seen before in ARPES measurements[44] on (ungated) graphene on WS\({}_{2}\)/WSe\({}_{2}\), where the authors also pointed out that the distortion could be due to anisotropic strain. We saw a similar pattern again in measurements on graphene overlapping a WS\({}_{2}\)/MoSe\({}_{2}\) heterobilayer (SI Sec. 7).
In higher-twist device 3 (\(\phi=9^{\circ}\), \(G_{m}=3.6\) nm\({}^{-1}\); see SI Sec. 8) no CB replicas were visible. On the other hand, this device was the only one that exhibited VB replicas. This could be just a matter of energy resolution: for example, we estimate that for \(\phi=6^{\circ}\) a resolution of \(\sim\)100 meV is needed to distinguish VB replicas compared with \(\sim\)400 meV for CB replicas, because of the smaller dispersion of the VB. Under no conditions did we see replicas associated with moire wavevectors of the graphene/WS\({}_{2}\) interface where the large lattice mismatch should make moire modulations very small. This argues in favor of a role for scattering from the moire potential.
Figure 3: **Moiré replicas in device 2, with twist angle \(\phi=2^{\circ}\). All data were taken at \(\mathbf{V_{g}}=+2.5\) V. (a) Momentum slice along \(\mathbf{\Gamma-K_{WS_{2}}}\) in the WS\({}_{2}\)/WSe\({}_{2}\) region (between the graphene comb teeth). (b) Constant-energy map centered on \(\mathbf{K_{WS_{2}}}\) at \(\mathbf{E_{F}}\), averaged over 0.4 eV. Color indicates linear and grayscale indicates logarithmic intensity scale. (c) Brillouin zones of the WS\({}_{2}\) (blue), WSe\({}_{2}\) (red), and overlapping graphene (gray), showing construction of one of the three moiré vectors, \(\mathbf{G_{m}}\), that are superimposed (orange) on panels (b) and (e). (d) Momentum slice through the graphene zone corner \(\mathbf{K_{g}}\) in the graphene/WS\({}_{2}\)/WSe\({}_{2}\) region. (e) Constant-energy maps centered on \(\mathbf{K_{g}}\) at binding energies indicated by dashed red lines, \(E-E_{F}=-0.10\), -0.25 and -0.55 eV, each averaged over 0.1 eV. The location of the slice in (d) is shown as a solid black line in the top panel of (e).**
_Origin of the CB replicas._ All of the replica features mentioned above appear to be copies of the parent bands translated by reciprocal lattice vectors of the moire pattern of the heterobilayer. In general, these replicas result from the combination of moire potential-induced modifications of the system's Bloch states ("initial-state modification" or "miniband formation"), as indicated schematically in Fig. 4a, and scattering of the photoexcited electrons by the moire potential as they leave the sample ("final-state diffraction") [45, 46, 47, 48, 49], as indicated in Fig. 4b. We now briefly discuss the qualitative features of these two contributions, and the factors that point to initial-state modification as the dominant source of the replicas. Our discussion applies both to replicas seen in the CB of WS\({}_{2}\) and to those observed for the graphene on top of the WS\({}_{2}\)/WSe\({}_{2}\) heterostructure as seen in Fig. 3.
Initial-state modification results from electrons coherently scattering on the moire potential. New Bloch states of the superlattice are formed by hybridizing states in the original bands of the material at crystal momentum values offset by integer linear combinations of the moire reciprocal lattice vectors \(\{\mathbf{G_{m}}\}\); see Fig. 4c (where we show the six shortest \(\mathbf{G_{m}}\)). From perturbation theory, it is straightforward to see that this hybridization is strongest when the energy differences between states offset by a moire wavevector are small (compared with the strength of the effective moire potential, \(|U|\)). Thus, initial state modification is stronger when the moire reciprocal lattice vectors are shorter, that is, for smaller twist angles. Indeed, the CB replicas are strongest in device 2 (\(\phi=2^{\circ}\); Fig. 3), weaker in device 1 (\(\phi=6^{\circ}\); Fig. 2), and not detectable in device 3 (\(\phi=9^{\circ}\)).
The magnitudes of the final-state diffraction contributions are determined by the corresponding differential cross-sections for the photo-emitted electrons to scatter from the moire potential. Although the interaction between the photo-emitted electron and the material may be strong, due to
Figure 4: **Origin of the moiré replicas.** Illustrations with Dirac cones represent the behavior within the graphene layer on top of the WS\({}_{2}\)/WSe\({}_{2}\) heterostructure, but the same mechanisms apply for the WS\({}_{2}\) layer. (a) Initial-state modification. Bloch states of the superlattice associated with the moiré potential V(r) are formed from superpositions of states in the unperturbed conduction band, offset by reciprocal lattice vectors of the moiré pattern, \(\{\mathbf{G_{m}}\}\). Photoemission from the superlattice Bloch states thus carries in-plane momentum contributions both from a central peak corresponding to the original conduction band and satellites (replicas) of weaker intensity that map out the momentum space structure of the reconstructed conduction band. (b) Final-state diffraction. Ignoring the effects of the moire superlattice on the conduction band states themselves, the moire potential may also scatter photoemitted electrons during their escape from the material, producing replica intensity spots displaced from the main peak by moiré reciprocal lattice vectors. As described in the text and SI Sec. 9, for small twist angles and high photoexcitation energies, we expect the observed replica intensity to be dominated by the initial state modification effect. (c) The six shortest reciprocal lattice vectors of the \(C_{3}\)-symmetric moiré pattern. (d) Schematic representation of the moiré unit cell of the WS\({}_{2}\)/WSe\({}_{2}\) heterostructure with \(C_{2n}\) symmetry. The shading indicates different values of the scalar moiré potential \(U(\mathbf{r})\) near the high symmetry regions _MM_ where the metal (W) atoms are vertically aligned, and the MX and XM regions where the metal atom of one layer sits directly above or below the chalcogen atom of the opposite layer.
the emitted electron's high velocity the interaction time is short. In terms of the moire potential amplitude \(U\) (see below for further microscopic details), the amplitude corresponding to the scattering process is controlled by the parameter \(Ud/(hv_{out})\), where \(v_{out}\) is the velocity of the emitted electron and \(d\) is the distance over which the moire potential acts. For \(Ud/(hv_{out})\ll 1\), the scattering amplitude may be estimated as \(\mathcal{A}_{fin}\sim Ud/(hv_{out})\). For comparison, consider an electronic state at momentum \(\mathbf{k}\) (in the absence of the moire potential); in the presence of the moire potential, the wave function of this state obtains a component at momentum \(\mathbf{k}+\mathbf{G}_{\mathrm{m}}\) that in the perturbative regime can be estimated as \(\mathcal{A}_{inl}\sim\frac{U}{[\varepsilon(\mathbf{k})-\varepsilon(\mathbf{k}+ \mathbf{G}_{\mathrm{m}})]}\), where \(\varepsilon(\mathbf{k})\) is the electronic dispersion. Crucially, \(\mathcal{A}_{inl}\) grows large for small \(|\mathbf{G}_{\mathrm{m}}|\), while \(\mathcal{A}_{fin}\) is insensitive to \(|\mathbf{G}_{\mathrm{m}}|\) in this limit. For small twist angle \(\mathbf{\phi}\) (small \(|\mathbf{G}_{\mathrm{m}}|\)) and moderate-energy outgoing electrons, \(\mathcal{A}_{inl}/\mathcal{A}_{fin}\gg 1\), the contribution from initial state modification is expected to be the dominant source of moire replica intensity in the ARPES spectrum.
In situations where the photoemitted electrons originate from a lower monolayer and pass through an upper monolayer, we often observe replicas that are best explained by final-state diffraction from the lattice of the upper layer. For example, in device 3 (\(\mathbf{\phi}=9^{\circ}\)) we saw replicas of the WSe\({}_{2}\) valence band shifted by reciprocal lattice vectors of the upper WS\({}_{2}\) layer (Sl Sec. 8). The scattering wave vectors here are long and there is no large parameter that ensures that initial state modification dominates, while the amplitude of the scattering potential can be of atomic scale. A collection of examples of this phenomenon we have seen in 2D heterostructures will be presented elsewhere.
In Figs. 3e-f we see replicas of emission from the capping graphene layer matching the moire structure of the heterobilayer beneath it. Due again to the small \(|\mathbf{G}_{\mathrm{m}}|\) and the fact that the emission is from the topmost layer, these replicas very likely reflect the modification of Bloch states within the graphene layer[44]. The similarity in intensity of the graphene replicas to the parent Dirac cone implies that \(\mathcal{A}_{inl}\) here is of the order of unity, indicating that the graphene electrons feel a moire potential on the order of magnitude of 100 meV. One would expect to see anticrossings on the same energy scale between the replicas and the original bands which would be a clear signature of mini-band formation. The absence of anticrossings in Fig. 3d could be a limitation of the \(\sim\)100 meV energy resolution.
_6-fold symmetry breaking._ The replicas of both the WS\({}_{2}\) and the capping graphene bands in Fig. 3 exhibit an approximate _3-fold_ rotational symmetry. Commonly, the moire superlattice is modeled using a real-valued scalar potential \(U(\mathbf{r})\), with \(\mathrm{C}_{3v}\) symmetry. Since \(U(\mathbf{r})\) is real-valued, and hence its Fourier components satisfy \(\vec{U}_{-\mathbf{G}_{\mathrm{m}}}=\vec{U}_{\mathbf{G}_{\mathrm{m}}}^{*}\), one might expect the replica intensity pattern to have _6-fold_ symmetry. For example, consider a low-energy effective model for the electronic states within one valley, described by the Hamiltonian \(H=hv(-i\mathbf{\nabla}\cdot\mathbf{\sigma})+\frac{1}{2}\Delta\,\sigma_{x}+U(\mathbf{r})\), where \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y})\) and \(\sigma_{x}\) are Pauli matrices representing the orbital pseudospin degree of freedom, \(v\) is a velocity and \(\Delta\) is a gap. In Fig. 4d we show a schematic representation of \(U(\mathbf{r})\) in the moire unit cell. Using the reflection symmetry of the moire potential across the \(y\)-axis (the vertical mirror plane \(\mathcal{M}_{y}\) of the \(\mathrm{C}_{3v}\) point group), \(U(x,y)=U(-x,y)\), the model Hamiltonian above is symmetric under the reflection operation \(x\rightarrow-x\) followed by complex conjugation. As a result, the moire replicas centered at \(\mathbf{G}_{1}\) and \(\mathbf{G}_{-1}=-\mathbf{G}_{1}\) are represented with equal probability in the modified (perturbed) "initial state" centered around \(k=0\) (i.e., at the valley center). Combined with the 120\({}^{\circ}\) rotational symmetry of the system, this would yield a 6-fold symmetric moire replica pattern.
Crucially, in-plane distortions of the atomic lattice break the mirror symmetry of the system [see, e.g., Ref. [50]]. The resulting local strain fields are manifested in the low-energy effective Hamiltonian through a term of the form \(-hv\mathbf{A}(\mathbf{r})\cdot\mathbf{\sigma}\), where \(\mathbf{A}(\mathbf{r})\) is a moire (pseudo) vector potential. Physically, this emergent vector potential captures the additional phases acquired by a Bloch wave near the center of one valley as it travels between atomic sites in the strained regions, compared to the phases
acquired during hopping in the un-distorted structure. The sign of the moire pseudo vector potential is _opposite_ in valleys **K** and **K'**. This moire vector potential perturbation breaks the \(\mathcal{M}_{y}\) reflection followed by complex conjugation symmetry of the system that on its own endows the replica intensity pattern with a 6-fold rotation symmetry. The in-plane distortions of the crystal lattice thereby break this 6-fold symmetry down to a 3-fold symmetric pattern. At higher energies where the low-energy effective model is not valid, i.e., sufficiently far from the k point, other factors such as trigonal warping can also give a 3-fold symmetric replica pattern even with only a scalar moire potential \(U(\mathbf{r})\). However, the moire pseudo-vector potential induced 6-fold symmetry breaking persists even close to the K-point. In SI Sec. 9 we analyze this quantitatively within the low-energy continuum model.
## Conclusions
Underpinning much recent work on correlated and topological states in twisted semiconductor bilayers is the assumption that, far from the zone centre, the bands of the two layers are only weakly hybridized and thus correspond closely to those of the separate monolayers simply superposed. Our results confirm this assumption. In the case of WS2/WSe2, the band alignment is such that the VB edge is at the K-points in the WSe2 layer, the CB edge is at the K-points in the WS2 layer (with the WS2 Q-point minima just above), and the net band gap is \(1.58\pm 0.03\) eV, all independent of twist angle. In one sample (with 6\({}^{\circ}\) twist) we made the first determination of the CB effective mass, finding it to be \(0.15\pm 1\,m_{e}\) (smaller than predicted). In addition, we observed replicas of the CB shifted in momentum by moire wavevectors. After theoretically considering the relative contributions of initial-state modification and final-state diffraction, we conclude that the replicas reflect modification of the Bloch states by the moire potential. The same goes for corresponding replicas of the Dirac cones seen in graphene capping the bilayers. Finally, we consistently observed a three-fold (as opposed to 6-fold) symmetry of the replica pattern which implies that the pseudo-vector potential, and therefore periodic strain, plays a vital role in modifying the Bloch states in moire structures.
|
2309.04911 | A Review of Machine Learning-based Security in Cloud Computing | Cloud Computing (CC) is revolutionizing the way IT resources are delivered to
users, allowing them to access and manage their systems with increased
cost-effectiveness and simplified infrastructure. However, with the growth of
CC comes a host of security risks, including threats to availability,
integrity, and confidentiality. To address these challenges, Machine Learning
(ML) is increasingly being used by Cloud Service Providers (CSPs) to reduce the
need for human intervention in identifying and resolving security issues. With
the ability to analyze vast amounts of data, and make high-accuracy
predictions, ML can transform the way CSPs approach security. In this paper, we
will explore some of the most recent research in the field of ML-based security
in Cloud Computing. We will examine the features and effectiveness of a range
of ML algorithms, highlighting their unique strengths and potential
limitations. Our goal is to provide a comprehensive overview of the current
state of ML in cloud security and to shed light on the exciting possibilities
that this emerging field has to offer. | Aptin Babaei, Parham M. Kebria, Mohsen Moradi Dalvand, Saeid Nahavandi | 2023-09-10T01:52:23Z | http://arxiv.org/abs/2309.04911v1 | # A Review of Machine Learning-based Security in Cloud Computing
###### Abstract
Cloud Computing (CC) is revolutionizing the way IT resources are delivered to users, allowing them to access and manage their systems with increased cost-effectiveness and simplified infrastructure. However, with the growth of CC comes a host of security risks, including threats to availability, integrity, and confidentiality.
To address these challenges, Machine Learning (ML) is increasingly being used by Cloud Service Providers (CSPs) to reduce the need for human intervention in identifying and resolving security issues. With the ability to analyze vast amounts of data, and make high-accuracy predictions, ML can transform the way CSPs approach security.
In this paper, we will explore some of the most recent research in the field of ML-based security in Cloud Computing. We will examine the features and effectiveness of a range of ML algorithms, highlighting their unique strengths and potential limitations. Our goal is to provide a comprehensive overview of the current state of ML in cloud security and to shed light on the exciting possibilities that this emerging field has to offer.
Cloud Computing, Machine Learning, Cloud Security.
## I Introduction
Cloud computing is a paradigm for delivering information technology services through the internet, rather than through a direct connection to a server. This delivery model allows on-demand access to computing resources, such as storage, networking, software, analytics, and intelligence, without needing physical infrastructure. Typically, customers are charged for cloud computing services based on usage, rather than a fixed rate, and can be classified into three Service Model categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
There are four deployment models for cloud computing: public, private, community, and hybrid. These models are comprised of various components, including providers, consumers, auditors, brokers, and carriers, each with distinct responsibilities [1].
As cloud computing continues to gain popularity and expand, security concerns regarding the confidentiality, integrity, and availability of user data remain a pressing issue. To mitigate these concerns, it is imperative that cloud service providers adopt state-of-the-art technology and techniques to prevent data loss and unauthorized access while ensuring the availability of their services [2].
In this paper, we aim to conduct a comprehensive review of cloud computing service and deployment models, with a focus on security challenges and attacks. Furthermore, we will examine the role of machine learning in enhancing cloud security. Machine learning, a subfield of artificial intelligence, allows systems to learn and improve from experience without explicit programming. It involves the development of algorithms and models that can identify patterns in data and make predictions or decisions without human intervention.
Our objective is to thoroughly examine the various machine learning techniques employed to detect, prevent, and resolve cloud security vulnerabilities. Despite numerous studies in this area, there is a lack of a comprehensive examination of the available machine learning algorithms in the context of cloud security. This paper will draw upon relevant literature and studies related to cloud computing and security, as well as the use of machine learning algorithms in cloud security.
In section III, we will review different types of machine learning algorithms, including supervised, unsupervised, semi-supervised, and reinforcement algorithms, and discuss their potential benefits in enhancing cloud computing security. We will also examine some of the most commonly used machine learning algorithms and their features. Finally, in section IV, we will provide a conclusion to our review and outline potential avenues for future research.
## II Cloud Computing
Cloud Computing has emerged as a transformative technology in the field of Information Technology (IT) over the past decade. It represents a paradigm shift in the way IT services are delivered and consumed by end-users. In Cloud Computing, a vast array of computing resources, including storage, computing power, and applications, are made available over the Internet on a pay-per-use basis. The computing resources are abstracted from the underlying physical infrastructure and made available to end-users through virtualized computing environments [3].
Cloud Computing is characterized by its scalability and on-demand availability. End-users can consume and scale computing resources according to their requirements without the need for investing in expensive physical infrastructure. This has the potential to offer significant cost savings for end-users, especially for small and medium-sized businesses. For example, online retailers can increase their computing resources during peak periods to meet the increased demand
for their services, and then reduce them during periods of low demand, thus avoiding the costs associated with owning and maintaining physical infrastructure [4].
From the perspective of Cloud Service Providers (CSPs), Cloud Computing is delivered through one of four deployment models: Public, Private, Hybrid, or Community Cloud. The deployment model chosen by a CSP depends on factors such as the level of control desired by the end-user, the level of trust required, and the level of customization required. Additionally, Cloud Computing can be further categorized into three main service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS, end-users are responsible for managing their own virtual machines, operating systems, and applications. In contrast, in SaaS, the CSP is responsible for providing and maintaining the underlying infrastructure and software, leaving end-users with the minimum level of control over the computing environment [5].
While Cloud Computing offers many benefits, it also presents new security challenges. Ensuring the confidentiality, integrity, and availability of end-users' data are of paramount importance for CSPs. Attacks on Cloud Computing environments can have significant consequences, including data loss, unauthorized access, and disruption of service. To mitigate these risks, CSPs must implement robust security measures and technologies to protect their environment. One such technology is Machine Learning, which has the potential to significantly enhance the security of Cloud Computing. Machine Learning algorithms can automatically learn from data, identify patterns and anomalies, and make predictions or decisions without human intervention [5, 6].
### _Cloud Computing Service Models_
There are a few models that the Internet/Cloud providers are using to present their cloud services. Figure 1 shows IaaS, PaaS, and SaaS which are the most common models and represent the level of responsibilities of users and the service providers. Apart from these, there are more models like Content as a Service (CaaS), Database as a Service (DBaaS), Function as a service/Serverless (FaaS), etc [7].
#### Ii-A1 IaaS (Infrastructure as a Service)
The closest model to the on-site or on-premises IT infrastructure is IaaS. The cloud provider is responsible for the hardware infrastructure, including CPU, Storage, and Networking, and they should be taking care of related maintenance. The user would be responsible for installing the Operating System and all the applications related to their functions. This model has the highest level of responsibility for the users. Most of the providers of other models (PaaS, SaaS) use this model to sell their products to the end users. Microsoft and Amazon are some providers with this model in their portfolio and customers can use it. The providers use virtualization technologies to share their hardware with multiple customers [6].
#### Ii-A2 PaaS (Platform as a Service)
Customers who don't want to deal with hardware and operating system management, can use this model which the providers offer development tools, configuration management, and deployment platforms. Facebook development platform and Microsoft Windows platform are some examples of this model. Website developers who don't want to spend time installing software or OS can use this model [5].
#### Ii-A3 SaaS (Software as a Service)
The lowest level of responsibility for the users (mostly organizations or domestic users) can be found in this model. The users don't need to purchase software product licenses and they are not responsible for the maintenance or updating of them. Office 365 is an example of this model which provides hassle-free software to use on the web [8].
### _Cloud Computing Deployment Models_
From the service/cloud providers' perspective, the Cloud Computing has four different deployment models as followings:
#### Ii-B1 Private Cloud
A private or internal cloud platform is designed specifically to be used by an organization and the services are not available to others. Enterprise companies or government departments with several branches are more suitable consumers for this model. This cloud model usually sits inside the enterprise or company itself which increases the security level and prevents external access but the investment cost is very high and not affordable for most consumers [9].
#### Ii-B2 Public Cloud
A public or external cloud platform is available for all users rather than their use. It is usually built in multiple data centers for a greater level of redundancy. The users don't have any control over where their data gets saved. The service provider could use the geographically closer data center to save the users' data to improve the latency of transmitting data [10].
#### Ii-B3 Hybrid
A combination of private and public clouds with automation and orchestration facility, provides a hybrid Cloud
Fig. 1: Different Cloud Computing Service Models and the level of responsibility for both parties
Computing for consumers [7]. A hybrid cloud offers security and control like the private cloud also cost and elasticity similar to a public cloud. There are some concerns about data privacy and integrity while the data is traveling between the public and private sectors as they have different security levels and measurements.
#### Ii-C4 Community
When a group of organizations has common interests like security requirements, policy, or compliance, they can use a community cloud model. In this model, the cloud is either owned by one of the organizations or a third party. In terms of the location, the community cloud infrastructure could be on-premise of one or multiple members of the community members or on a third-party's location [8].
Table I presents some advantages and disadvantages for each of the above models. [11].
### _Cloud Computing Components_
Cloud Computing as it is shown in figure 2 is a complex network including the following components [11]:
* Cloud Provider: This is an organization that creates a service that interested parties can access it.
* Cloud Consumer: This is the user of the Cloud Provider service.
* Cloud Auditor: A party that is responsible to check the data framework, performance, and security of the cloud service.
* Cloud Broker: An entity that manages the relationship between Cloud Provider and Consumer also deals with utilization, performance and delivery of service to the consumers.
* Cloud Carrier: An infrastructure or medium which provides the network and transportation of cloud service to the Cloud Consumers from the Cloud Provider.
### _Cloud Computing Security Challenges_
Data security has always been one of the major topics in IT industry. Nowadays CSPs are responsible to provide the highest level of security for their consumer if they want to survive in this market. CSPs focus on three major factors to secure their cloud system: Confidentiality, Integrity and Availability (CIA) [12].
#### Ii-D1 Confidentiality
Confidentiality is referring to the protection of data against unauthorized users. In a cloud environment, some users might try to access other users' data without permission. As the attacker could be on the same virtual machine as the victim user, the CSP is responsible to prevent unauthorized access from one tenant to another [8]. CSP should provide a clear strategy for Data Storage, Deletion, Backup, Encryption, and Access privilege. If the stored data location is different from the user's location, in case of any attack, different laws would exist for each country. Also, if the deleted users' data, after canceling a subscription or account, get recovered by unauthorized users, the CSP is responsible for such un-trusted access [13].
#### Ii-D2 Integrity
Integrity makes sure that the data has not been modified by an unauthorized user. Compared to confidentiality, integrity is focused on keeping the data intact from getting changed or modified by un-trusted users [14].
The attacker could use techniques like SQL injection to modify the database content or they can launch their malicious instance, instead of the user Virtual Machine (VM) instance on the cloud system. VM replication is another opportunity for the attacker which they can use to modify the data. If there are no proper security measures while the VM instance is getting replicated, the attacker can manipulate the process and modify the data during the replication. There are other VM attacks like rollback, escape, and hopping which are discussed in [8].
#### Ii-D3 Availability
This factor by far could be the most important security aspect for the service providers and their users. The CSPs need to make sure that the service is available close to 100%. There are attacks like Denial of Service (DoS) that the attacker tries to make the service unavailable by sending rouge requests in bulk to occupy the servers' CPU and memory. The cloud providers try to make their VMs available by using several data centers which are connected with high bandwidth links to each other. This way they can spread the servers and mitigate the security risks. Using techniques like "Virtual IP Address", "HoneyPot Zone" and "DDoS Mitigation" is helping the CSPs to overcome this challenge [15].
Fig. 2: Cloud Computing Components
### _Cloud Computing Attacks_
There are a few vulnerabilities in Cloud Computing that could cause major threats which are virtualization, Unauthorized Access, Application Programming Interface (API), and browser vulnerabilities [16]. These vulnerabilities give the attacker a chance to launch an attack which could be categorized as followings based on the nature of the attack:
#### Ii-E1 Denial of Service (DoS) attack
As we discussed in the previous section, this is an attack to attempt to the availability of the service. As the source of this attack can be found, the attackers used Distributed Denial of Service to launch DoS attacks from multiple systems [17].
#### Ii-E2 Zombie attack
In this scenario, the attacker uses an innocent user to storm the victim with bulk requests. This attack can affect the availability of the cloud system [18].
#### Ii-E3 Man-In-The-Middle attack
As the name suggests, the attacker tries to get in the path between the users or between the user and the cloud system to manipulate the data or simply steal it [19].
Besides the above classification, we can categorize the attacks based on the cloud infrastructure section which the attackers aim for. In this case, the attacks can be categorized as [10]:
#### Ii-E1 Network-based Attacks
The attacker aims to eavesdrop on the network traffic or try to restrict or alter it. Port Scanning, Spoofing, and Spamming are some examples in this category [20].
#### Ii-E2 VM-based Attacks
Here the attackers try to manipulate the Virtual Machines' (VM) images. This can be done by injecting codes and/or changing the settings. VM Hyper Jacking and VM Escape are some examples of VM-based attacks [21].
#### Ii-E3 Storage-based Attacks
Invading stored user information could be one of the most harmful attacks in which the attackers could get access to all the users' data and then steal, change, or delete them. Data De-duplication, Data recovery, Data Scavenging, and Data Backup are some most known attacks in this category [22].
#### Ii-E4 Application-based Attacks
The application which is running on the cloud is vulnerable to attacks that can cause performance drops or information leakage. Malware infusion, Web Services, and Shared designs are some examples here [23].
## III Machine Learning in Cloud Security
As Cloud Computing grows so fast, network management and controlling security are among the biggest concerns for the providers. In this circumstance automation especially using Machine Learning is a fast-growing technique to predict and prevent security risks and threats.
Machine Learning (ML) is a sub-field of "Artificial Intelligence" responsible to fabricate underlying computational formulation and develop a statistical model based on the available dataset referred to as "Training Data". There are four main types of learning in ML which are Supervised, Unsupervised, Semi-supervised, and Reinforcement learning.
### _Types of Machine Learning Algorithms_
Based on the learning type, the Machine Learning algorithms can be categorized as the following which is illustrated in figure 3 as well [10]:
#### Iii-A1 Supervised Learning
In supervised learning, the system uses the provided training dataset as an instructor. This means the training dataset is already classified with an obvious and straightforward answer for each case or scenario. Thereafter the ML is given a new set of data to assess the outcome based on what it learned [24]. This algorithm can train the model with labeled data for intrusion and/or anomalies over the cloud network besides the normal data. There are several common datasets available for this model like KDD, DRAPA, and UNSW. Based on the output, supervised learning can be categorized into the followings:
* Classification: If the output data is categorical and from a certain list (True/False, Yes/No, Dog/Cat/Mouse) it is called a classification problem which could be a binary or multi-class problem. There are a few common examples for this category such as Naive Bayes, Decision Tree, Support Vector Machine (SVM), and Logistic Regression. According to [25, 26] Logistic Regression and Decision Tree are among the top three algorithms in terms of accuracy rate while the UNSW and ISOT datasets have been used.
* Regression: If the problem output is continuous and numeric (e.g. a number between 0.0 to 100.0), the ML algorithm to use is called regression. There are algorithms like Linear Regression, Decision Tree, Lasso, and multivariate Regression in this category. In [9], it has been mentioned that the Linear Regression had a 94.36%
Fig. 3: Different types of Machine Learning algorithms.
accuracy result compared to other algorithms used in the study [26].
#### Iii-A2 Unsupervised Learning
In this learning type, the system does not have any current labeled data to learn the pattern and prepare itself, instead, it has been designed to extract the pattern, classes, or categories from the data especially the data without any classification or labeling by itself [27].
* Clustering: This algorithm tries to group similar data which has the same characteristics. Clustering is being used in image analysis, pattern recognition, and Machine Learning for its investigative data mining characteristic [28].
* Anomaly Detection: In this algorithm, the system tries to identify instances that differ from the rest of the data. These differences could indicate potential issues or suggest interesting patterns be investigated [29].
* Association: The goal of this algorithm is to find the relationship/dependency between the data so that they can be grouped. The results can be used for businesses to make decisions about their marketing or product portfolio. For example, Amazon recommends related books or items to the users' search history [30].
#### Iii-A3 Semi-supervised Learning
As the name implies, in this model the system is not fully supervised so part of the training is with labeled data, and for the rest, the system should use unclassified or unlabeled data to find the patterns. This approach can be considered advantageous when procuring a sufficient amount of labeled data presents a challenge or incurs a significant expense. It needs to be observed how combining labeled and unlabeled data might change the learning behavior [31].
* Self-Training: In this procedure, we can take any method of supervised learning (Classification or Regression) and modify it to work as Semi-supervised Learning. First, we use some labeled data to train the model then we apply some labels to the remaining data and try to correct the labels through multiple iterations [32, 33].
* Co-Training: In this procedure, we create two classifiers based on two different views of the same data. Then each classifier labels the data and through different iterations, they help each other to improve the prediction accuracy. Finally, the two updated classifiers combine their results to create one classification output [34, 35].
#### Iii-A4 Reinforcement Learning
Reinforcement learning is a sub-field of machine learning that deals with the problem of training autonomous agents to perform actions in an environment to optimize some notion of cumulative reward. This approach involves the agent continuously learning from its interactions with the environment to refine its decision-making policy [36].
The core idea behind reinforcement learning is to provide an agent with a scalar reward signal that it can use to learn an optimal behavior. The agent interacts with its environment by taking action, observing the resulting state, and receiving a reward. Over time, the agent uses this reward information to update its policy, which maps states to actions, to maximize the cumulative reward.
Reinforcement learning has been successfully employed in a variety of real-world applications, including robotics, control systems, cloud security, and game-playing. It is particularly well-suited for problems in which the optimal solution is difficult to define, and the best course of action must be learned through trial and error and experience. [10].
* Model-Based: This methodology involves constructing a model of the environment in which the agent operates, which can be utilized to simulate the outcomes of different actions. Model-based reinforcement learning can be applied to plan and estimate the expected reward of different sequences of actions before their execution [37].
* Model-Free: This approach focuses on directly learning a policy that maps states to actions without constructing a model of the environment. Examples of model-free reinforcement learning algorithms include Q-Learning and State-Action-Reward-State-Action(SARSA) [38].
Table II is showing some examples of Machine Learning algorithms with their advantages and disadvantages.
### _Benefits of using Machine Learning in Cloud Security_
As has been mentioned in the previous sections, Cloud Computing is growing so fast and more consumers are joining this trend to use its benefits. Following this growth, Cloud Providers are under pressure to provide the required resources also maximize the security level for their users. In these circumstances, they would need some automation or new evolving technology to minimize human interaction, for faster action and less cost [39]. Machine Learning has been used by most of the market leaders for cloud computing like AWS, Azure, IBM, and others. Below we can see some of the most important benefits of using Machine Learning in cloud security which have been illustrated briefly in figure 4:
#### Iii-B1 Automation
Using Machine Learning algorithms to automatically detect and block malicious network traffic, by analyzing the network traffic and identifying patterns that indicate a potential security threat. Also to automatically identify and respond to suspicious activity on cloud resources, by analyzing the usage patterns of the resources and identifying patterns that deviate from the norm. Once an unusual pattern is detected,
Fig. 4: Machine Learning benefits in Cloud Security
the algorithm can automatically take appropriate action, such as revoking access or initiating an incident response. This can help organizations save time and resources by reducing the need for manual monitoring and intervention. Supervised learning algorithms such as Random Forest or Support Vector Machines to classify network traffic can be used here [40].
#### Iv-A2 Scalability
Using Machine Learning algorithms to automatically scale cloud resources based on usage patterns, by analyzing the usage data from multiple sources, such as logs, network traffic, and system metrics, and identifying patterns that indicate when more resources are needed. Also to analyze large amounts of log data in real-time, by processing the log data and identifying patterns that indicate a potential security threat. Machine Learning algorithms such as Anomaly detection and Clustering can be used for this purpose [41].
#### Iv-A3 Adaptability
Machine Learning algorithms can adapt to changing patterns in the data, making them more effective at detecting new and evolving threats. For example, Machine Learning algorithms can be trained on a dataset of known malware samples to detect new types of unknown malware or to identify new patterns of malicious behavior. This allows organizations to stay ahead of evolving threats and to respond quickly to new security risks. This can be done by using supervised learning algorithms such as Random Forest, Support Vector Machines, and Naive Bayes to classify files as malicious or benign [42].
#### Iv-A4 Proactivity
Using a supervised learning algorithm to predict potential security breaches, and to take preventive measures to minimize the risk of a successful attack. Also benefiting from a neural network to identify the patterns of unusual activity in cloud systems, such as login attempts from unusual locations or unexpected network traffic [43].
#### Iv-A5 Efficiency
Machine Learning algorithms can be used to optimize cloud infrastructure and resources such as virtual machines, storage accounts, and network bandwidth; increasing the efficiency of the cloud environment and reducing costs. For example, Machine Learning algorithms such as linear regression, decision tree, and random forest can be used to automatically scale cloud resources based on usage patterns, or to identify and resolve inefficiencies in the cloud environment [17].
#### Iv-A6 Accurate and unbiased
Machine Learning algorithms can analyze large amounts of data to identify patterns that may be difficult for humans to detect, which can lead to more accurate security assessments. For example, a Machine Learning model can be trained to recognize patterns of network traffic that are commonly associated with a particular type of attack, such as a denial of service (DoS) attack. Once the model has been trained, it can then be used to analyze live network traffic and automatically block any traffic that matches the patterns associated with a known attack. Additionally, Machine Learning can improve the accuracy of security assessments by reducing the number of false positives generated by security systems. By analyzing data from multiple sources, Machine Learning models can more accurately identify security threats and reduce the number of false alarms [26].
Machine learning can also reduce the potential for bias in security decision-making by providing objective and data-driven insights into security-related activity. For example, Machine Learning can be used to analyze user behavior and identify patterns that indicate a potential security threat, such as a potential account compromise or insider threat, regardless of the user's identity, role, or privilege [44].
#### Iv-A7 Personalization
Machine Learning algorithms can be used to create user profiles that capture the specific patterns of behavior of individual users or groups. This information can then be used to create more accurate and tailored security measures that are better suited to the needs of those users.
Another way that Machine Learning can be used to provide personalization in cloud security is through the use of anomaly detection. Anomaly detection algorithms can be used to automatically identify and flag unusual or suspicious behavior, which can help to identify potential threats and respond to them more quickly and effectively.
Machine Learning can also be used to create more sophisticated access controls that take into account the specific roles and permissions of different users. This can help to ensure that only authorized users have access to sensitive data and resources, while also helping to prevent unauthorized access [45].
### _Examples of Machine Learning Algorithms used in Cloud Security_
In this section, we are going to review some of the most common Machine Learning algorithms which have been used in Cloud Security. Besides the following examples, different vendors and organizations might use other algorithms or combine some of them to achieve their desired goals. Figure 5 shows these examples briefly.
#### Iv-C1 Random Forest Algorithm
Random Forest is a Machine Learning algorithm that is used for both classification and regression tasks and it combines the predictions of multiple decision trees to make a final prediction. Each decision tree in the random forest is built using a different random subset of the training data, and the final prediction is made by taking a majority vote on the predictions of all the decision trees.
Random Forest is a robust algorithm that can handle large datasets and high dimensional feature spaces. It is also able to handle missing or incomplete data, which is a common problem in cloud security. One of the key advantages of Random Forest is that it can identify the most important features in the dataset, which can be used to focus on the most critical areas of the network and improve the overall security of the system [46].
In terms of specific use cases, Random Forest can be used in Cloud Security for a variety of tasks, such as:
* Intrusion detection: Random Forest can be trained on a dataset of normal network activity, and then used to identify patterns of activity that deviate from the normal behavior. These patterns can indicate the presence of an intrusion attempt, and the algorithm can alert the security team to take action.
* Anomaly detection: Random Forest can be used to identify unusual or unusual patterns in data that might indicate a security threat. This can include identifying unusual behavior in network traffic, such as a sudden increase in traffic from a specific IP address, or identifying unexpected changes in the configuration of a system.
* Classification of network traffic: Random Forest can be used to classify network traffic as normal or malicious, based on features such as source IP address, destination IP address, port number, and packet size. This can help to quickly identify and respond to potential security threats.
* Identifying the type of attack: Random Forest can also be used to classify the type of attack, whether it's a DDoS, malware, or phishing attack. The Random Forest algorithm is also a powerful tool for feature selection, which is the process of identifying the most important features in a dataset. It can be used to identify the most important features in network traffic data, such as specific IP addresses or ports, that are most likely to indicate a security threat. This can help to focus security efforts on the most critical areas of the network and improve the overall security of the system.
#### Iv-C2 Decision Trees Algorithm
Decision Trees are simple and intuitive algorithms that can be easily understood and interpreted by humans. The tree-like structure of the algorithm makes it easy to visualize the decision-making process, and the branches of the tree represent the different decisions that are made based on the input features. This makes the algorithm well-suited for applications where transparency is important, such as cloud security [25].
A Decision Tree is used for both classification and regression tasks in Machine Learning. It is a type of supervised learning algorithm that can be used to make predictions based on input data. The decision tree algorithm works by recursively partitioning the data into smaller subsets based on the values of the input features. At each step, the algorithm selects the feature that best separates the data into the target classes. The process continues until a stopping criterion is met, such as a maximum tree depth or a minimum number of samples in a leaf node. One of the key advantages of Decision Trees is that they can handle both continuous and categorical data, making them versatile and suitable for a wide range of applications. They can also handle missing data, a common cloud security problem. Decision Trees Algorithm can be used in similar scenarios as the Random Forest Algorithm, such as Intrusion and Anomaly detection, Classification of network traffic, and Identifying the type of attack and suspicious behavior [47].
#### Iv-C3 K-Means Clustering Algorithm
K-Means Clustering is an unsupervised Machine Learning algorithm used for clustering data into groups or clusters which is widely used in Machine Learning and data mining [48]. Some more details about the algorithm include:
* Centroid-based: K-Means Clustering is a centroid-based algorithm, which means that it works by defining clusters based on the mean of the points in the cluster. The algorithm initializes k cluster centroids, where k is the number of clusters that you want to create, and then assigns each data point to the nearest centroid.
* Iterative: The algorithm is iterative, meaning that it repeatedly updates the cluster centroids and reassigns data
Fig. 5: Examples of Machine Learning Algorithms used in Cloud Security
points to clusters until the centroids no longer change or a maximum number of iterations is reached.
* Determining k: One of the challenges in using K-Means Clustering is determining the optimal number of clusters (k) for a given dataset. Several methods can be used to determine the optimal number of clusters, such as the elbow method, silhouette analysis, and gap statistics.
* Assumptions: K-Means Clustering makes some assumptions about the data, such as the clusters being spherical and having roughly the same size and density. These assumptions may not hold true for all datasets, and in such cases, other clustering algorithms such as hierarchical clustering or density-based clustering may be more appropriate.
* Computationally efficient: K-Means Clustering is a computationally efficient algorithm, making it suitable for large datasets. However, storing the cluster centroids and data points requires a large amount of memory.
* Limitations: K-Means Clustering is sensitive to the initial placement of the centroids, and can lead to suboptimal solutions if the initial centroids are not chosen carefully. Additionally, it is not well suited for datasets with non-numeric variables or categorical data, and it also assumes that the clusters have similar sizes and densities, which may not be the case for all datasets.
K-Means Clustering is a widely used algorithm for clustering data into groups or clusters. It is computationally efficient, making it well-suited for large datasets. However, it has some limitations and assumptions that should be considered when using it, such as the difficulty of determining the optimal number of clusters and the sensitivity to the initial placement of the centroids [49].
In terms of use cases, K-Means Clustering can be used in cloud security for a variety of tasks, such as Anomaly detection, Identifying similar patterns for grouping similar malicious IP addresses, or for grouping similar types of attacks, also Identifying clusters of users based on their behavior such as a user accessing sensitive data at odd hours or from an unusual location.
#### Iii-B4 Support Vector Machine (SVM) Algorithm
Support Vector Machines (SVMs) are a type of supervised learning algorithm that can be used for classification and regression tasks. They work by mapping the input data into a high-dimensional feature space, where a hyperplane can be used to separate the different classes. The key idea is to find the hyperplane that maximizes the margin, which is the distance between the hyperplane and the closest data points from each class, also known as support vectors. These support vectors define the decision boundary, and any new data point can be classified based on which side of the boundary it falls on [17]. One of the main advantages of SVMs is that they can handle non-linearly separable data by transforming the input data into a higher dimensional space using a technique called the kernel trick. This allows the algorithm to find a linear boundary in the transformed space that separates the data.
In cloud security, SVMs can be used for Intrusion Detection by analyzing network traffic and identifying patterns that indicate malicious activity. The algorithm can be trained on labeled data, such as normal network traffic and known intrusion attempts, to learn the characteristics of each class. After training, it can classify new, unseen network packets as normal or abnormal. This can help to detect and prevent attacks on the cloud infrastructure, such as denial of service (DoS) attacks, malware infections, and unauthorized access attempts [27].
Additionally, SVMs can be used to classify and detect malicious files, such as malware, in the cloud. By analyzing the content of the files and extracting features, the algorithm can be trained on labeled data to learn the characteristics of benign and malicious files.
## IV Conclusion
In this paper, we have reviewed Cloud Computing technology and talked about different Service and Deployment Models. Then we reviewed the Security Challenges and common attacks in the Cloud Computing environment. After that, the focus was on Machine Learning and we reviewed the types of Machine Learning Algorithms, and the benefits of using ML in Cloud Security, finally, we discussed the most common ML algorithms used in Cloud Security.
Besides the points that we covered, there are more use cases like Cloud-based security orchestration and automation (SAO), assessment, incident, compliance and/or configuration management, Cloud Security Analytics, Governance, Monitoring, Cloud Access Security Brokers (CASB), and others which we are going to focus on them in future works. Also, we are going to dive deep into some of the ML algorithms and try to implement some scenarios with publicly available datasets.
|
2303.17936 | Defining differential equations for modular forms and Jacobi forms | It is well known that every modular form~$f$ on a discrete subgroup
$\Gamma\leqslant \textrm{SL}(2, \mathbb R)$ satisfies a third-order nonlinear
ODE that expresses algebraic dependence of the functions~$f$, $f'$, $f''$
and~$f'''$. These ODEs are automatically invariant under the Lie group
$\textrm{SL}(2, \mathbb R)$, which acts on the solution spaces thereof with an
open orbit (and the discrete stabiliser~$\Gamma$ of a generic solution).
Similarly, every modular form satisfies a fourth-order nonlinear ODE that is
invariant under the Lie group $\textrm{GL}(2, \mathbb R)$ acting on its
solution space with an open orbit. ODEs for modular forms can be compactly
expressed in terms of the differential invariants of these actions. The
invariant forms of both ODEs define plane algebraic curves naturally associated
with every modular form; the corresponding ODEs can be seen as modular
parametrisations of the associated curves.
After reviewing examples of nonlinear ODEs satisfied by classical modular
forms (such as Eisenstein series, modular forms on congruence subgroups of
level two and three, theta constants, and some newforms of weight two), we
generalise these results to Jacobi forms; these satisfy involutive third-order
PDE systems that are invariant under the Lie group $\textrm{SL}(2, \mathbb
R)\ltimes H$ where $H$ is the Heisenberg group. | Stanislav Opanasenko, Evgeny Ferapontov | 2023-03-31T10:03:21Z | http://arxiv.org/abs/2303.17936v2 | # Defining differential equations for modular forms and Jacobi forms
###### Abstract
It is well known that every modular form \(f\) on a discrete subgroup \(\Gamma\leqslant\operatorname{SL}(2,\mathbb{R})\) satisfies a third-order nonlinear ODE that expresses algebraic dependence of the functions \(f\), \(f^{\prime}\), \(f^{\prime\prime}\) and \(f^{\prime\prime\prime}\). These ODEs are automatically invariant under the Lie group \(\operatorname{SL}(2,\mathbb{R})\), which acts on the solution spaces thereof with an open orbit (and the discrete stabiliser \(\Gamma\) of a generic solution). Similarly, every modular form satisfies a fourth-order nonlinear ODE that is invariant under the Lie group \(\operatorname{GL}(2,\mathbb{R})\) acting on its solution space with an open orbit. ODEs for modular forms can be compactly expressed in terms of the differential invariants of these actions. The invariant forms of both ODEs define plane algebraic curves naturally associated with every modular form; the corresponding ODEs can be seen as modular parametrisations of the associated curves.
After reviewing examples of nonlinear ODEs satisfied by classical modular forms (such as Eisenstein series, modular forms on congruence subgroups of level two and three, theta constants, and some newforms of weight two), we generalise these results to Jacobi forms; these satisfy involutive third-order PDE systems that are invariant under the Lie group \(\operatorname{SL}(2,\mathbb{R})\ltimes H\) where \(H\) is the Heisenberg group.
Introduction
Modular forms on a discrete subgroup \(\Gamma\leqslant\mathrm{SL}(2,\mathbb{R})\) are holomorphic functions \(f(\tau)\) on the upper half-plane \(\mathcal{H}\) that satisfy the modular transformation property
\[f\left(\frac{a\tau+b}{c\tau+d}\right)=(c\tau+d)^{k}f(\tau),\qquad\left(\begin{array} []{cc}a&b\\ c&d\end{array}\right)\in\Gamma,\]
where \(k\) is the weight of a modular form; we refer to [6] for a general theory. It is well known that every modular form satisfies a third-order nonlinear ODE that expresses algebraic dependence of the functions \(f\), \(f^{\prime}\), \(f^{\prime\prime}\) and \(f^{\prime\prime\prime}\), which is shown in, e.g., [40, Theorem 2] based on the prior idea of [23] that two meromorphic functions on a Riemann surface are algebraically dependent. What was not explicitly noted however is that every such ODE is automatically \(\mathrm{SL}(2,\mathbb{R})\)-invariant and can be represented by an algebraic relation
\[F(I_{k},J_{k})=0\]
where \(I_{k}\) and \(J_{k}\) are differential invariants of a certain \(\mathrm{SL}(2,\mathbb{R})\)-action,
\[I_{k}=\frac{kff^{\prime\prime}-(k+1)f^{\prime 2}}{f^{2+\frac{4}{k}}},\quad J_{ k}=\frac{k^{2}f^{2}f^{\prime\prime\prime}-3k(k+2)ff^{\prime}f^{\prime \prime}+2(k+1)(k+2)f^{\prime 3}}{f^{3+\frac{6}{k}}},\]
see Section 2.1 for their expressions in terms of the Rankin-Cohen brackets. Thus, there is a plane algebraic curve \(C\colon F(I_{k},J_{k})=0\) naturally associated with every modular form (note that relation \(F\) depends on the modular form \(f\)). The case of particular interest in number theory is where \(f\) is a newform of weight \(k=2\) on a congruence subgroup \(\Gamma_{0}(N)\). In this case, modular functions \(I_{2}\) and \(J_{2}\) provide a modular parametrisation of \(C\). For special values of \(N\geq 11\), one obtains elliptic curves over \(\mathbb{Q}\) with a modular parametrisation; these curves \(C\) have the same \(j\)-invariants as the elliptic curves associated with the modular form \(f(\tau)\) via the Taniyama-Shimura-Weil conjecture, see e.g. [3, 41, 42] and Examples of Section 2.2 for some explicit formulae and further discussion.
Similarly, every modular form \(f(\tau)\) of weight \(k\) satisfies a fourth-order ODE, compare with [40, Remark, p. 342],
\[\mathcal{F}(P_{k},Q_{k})=0,\]
where \(P_{k}\) and \(Q_{k}\) are differential invariants of the order \(3\) and \(4\) of a certain \(\mathrm{GL}(2,\mathbb{R})\)-action,
\[P_{k}=\frac{(k^{2}f^{2}f^{\prime\prime\prime}-3k(k+2)ff^{\prime}f^{\prime \prime}+2(k+1)(k+2)f^{\prime 3})^{2}}{(kff^{\prime\prime}-(k+1)f^{\prime 2})^{3}},\]
\[Q_{k}=\frac{f^{2}\left(k(k{+}1)ff^{\prime\prime\prime\prime}-4(k{+}1)(k{+}3)f^ {\prime}f^{\prime\prime\prime}+3(k{+}2)(k{+}3)f^{\prime\prime 2}\right)}{(kff^{ \prime\prime}-(k+1)f^{\prime 2})^{2}},\]
see Section 2.1 for their expressions in terms of the Rankin-Cohen brackets. Thus, there is another plane algebraic curve \(\mathcal{C}:\mathcal{F}(P_{k},Q_{k})=0\) naturally associated with every modular form
(the relation \({\cal F}\) depends on the modular form \(f\)). Note that the invariants \(P_{k}\) and \(Q_{k}\) are modular functions for every \(k\), and provide a modular parametrisation of \({\cal C}\). There is a natural covering map \(C\to{\cal C}\), see Remark 2 in Section 2.1. In all examples discussed in this paper the second curve \({\cal C}\) turns out to be rational (even for congruence subgroups \(\Gamma_{0}(N)\) of higher genus), although we have no general explanation of this fact.
The origin of ODEs for classical modular forms on \({\rm SL}(2,{\mathbb{Z}})\) are the Ramanujan equations for the Eisenstein series. In what follows, we use the notation \(q={\rm e}^{2\pi i\tau}\) and denote by prime the operator \(q\frac{d}{dq}=\frac{1}{2\pi i}\frac{d}{d\tau}\). The Eisenstein series are defined as
\[E_{k}(\tau)=1-\frac{2k}{B_{k}}\sum_{n=1}^{\infty}\sigma_{k-1}(n)q^{n},\]
where \(B_{k}\) are the Bernoulli numbers and \(\sigma_{k-1}(n)\) denotes the sum of the \((k-1)\)st powers of the positive divisors of \(n\). Explicitly, we have
\[E_{2}(\tau)=1-24\sum_{n=1}^{\infty}\sigma_{1}(n)q^{n}=1-24q-72q^{ 2}-\ldots,\] \[E_{4}(\tau)=1+240\sum_{n=1}^{\infty}\sigma_{3}(n)q^{n}=1+240q+21 60q^{2}+\ldots,\] \[E_{6}(\tau)=1-504\sum_{n=1}^{\infty}\sigma_{5}(n)q^{n}=1-504q-16 632q^{2}-\ldots,\]
note that \(E_{2}\) is only quasi-modular. The Eisenstein series \(E_{2}\), \(E_{4}\) and \(E_{6}\) satisfy the Ramanujan system of ODEs,
\[E_{2}^{\prime}=\frac{E_{2}^{2}-E_{4}}{12},\qquad E_{4}^{\prime}=\frac{E_{2}E_{ 4}-E_{6}}{3},\qquad E_{6}^{\prime}=\frac{E_{2}E_{6}-E_{4}^{2}}{2}, \tag{1.1}\]
which is invariant under the action of the Lie group \({\rm SL}(2,{\mathbb{R}})\) defined as
\[\tilde{\tau}=\frac{a\tau+b}{c\tau+d},\quad\tilde{E}_{2}=(c\tau+d)^{2}E_{2}+12 c(c\tau+d),\quad\tilde{E}_{4}=(c\tau+d)^{4}E_{4},\quad\tilde{E}_{6}=(c\tau+d)^{6 }E_{6}.\]
Every modular form \(f\) on \({\rm SL}(2,{\mathbb{Z}})\) is a homogeneous polynomial in \(E_{4}\) and \(E_{6}\). Differentiation of \(f\) with the help of (1.1) gives four polynomial expressions for \(f\), \(f^{\prime}\), \(f^{\prime\prime}\), \(f^{\prime\prime\prime}\) in terms of \(E_{2}\), \(E_{4}\) and \(E_{6}\). The elimination of \(E_{2}\), \(E_{4}\), \(E_{6}\) leads to a third-order nonlinear ODE for \(f\), which inherits \({\rm SL}(2,{\mathbb{R}})\)-symmetry from the Ramanujan equations.
In special cases, both (third- and fourth-order) ODEs for modular forms and their invariance properties have been discussed in the literature, see, e.g., [1, 6, 26, 34]. For instance, the modular discriminant
\[\Delta=q\prod_{n=1}^{\infty}(1-q^{n})^{24}=\frac{1}{1728}(E_{4}^{3}-E_{6}^{2})\]
satisfies an \(\mathrm{SL}(2,\mathbb{R})\)-invariant third-order ODE ([40, Proposition 3])
\[36\Delta^{4}\Delta^{\prime\prime\prime 2}-14(18\Delta\Delta^{ \prime\prime}-13\Delta^{\prime 2})\Delta^{2}\Delta^{\prime}\Delta^{\prime\prime \prime}+48\Delta^{3}\Delta^{\prime\prime 3}\] \[\quad+285\Delta^{2}\Delta^{\prime 2}\Delta^{\prime\prime 2}-468 \Delta\Delta^{\prime 4}\Delta^{\prime\prime}+169\Delta^{\prime 6}+48\Delta^{7}=0,\]
as well as the more well-known van der Pol-Rankin equation, which is \(\mathrm{GL}(2,\mathbb{R})\)-invariant fourth-order ODE,
\[2\Delta^{3}\Delta^{\prime\prime\prime\prime}-10\Delta^{2}\Delta^{\prime}\Delta ^{\prime\prime\prime}-3\Delta^{2}\Delta^{\prime\prime 2}+24\Delta\Delta^{ \prime 2}\Delta^{\prime\prime}-13\Delta^{\prime 4}=0,\]
possessing an extra scaling symmetry \(\Delta\to\lambda\Delta\) that is not present in the third-order ODE; see Example 1 of Section 2.2 for the invariant forms of both equations. Note that the fourth-order ODE is a differential consequence of the third-order ODE.
It is precisely via the differential equations \(F(I_{k},J_{k})=0\) and \(\mathcal{F}(P_{k},Q_{k})=0\) that modular forms feature in various contexts in mathematical physics. In this paper, we discuss the invariance properties of ODEs for modular forms from the point of view of symmetry analysis of differential equations, as well as an extension of these results to Jacobi forms. Below we list some examples of ODEs for modular forms originating from the theory of dispersionless integrable PDEs in 3D.
**First-order integrable Lagrangians.** Paper [18] gives a characterisation of first-order integrable Lagrangians of the form \(\int F(u_{x_{1}},u_{x_{2}},u_{x_{3}})\,\mathrm{d}x\). It was observed in [16, 12] that the corresponding Lagrangian densities \(F\) are related to Picard modular forms. In particular, for densities of the form \(F=u_{x_{1}}u_{x_{2}}f(u_{x_{3}})\), the integrability conditions lead to a fourth-order \(\mathrm{GL}(2,\mathbb{R})\)-invariant ODE for \(f(\tau)\),
\[ff^{\prime\prime\prime\prime}(ff^{\prime\prime}-2f^{\prime 2})-f^{2}f^{\prime \prime\prime 2}+2f^{\prime}(ff^{\prime\prime}+4f^{\prime 2})f^{\prime\prime \prime}-9f^{\prime 2}f^{\prime\prime 2}=0,\]
whose general solution is the Eisenstein series \(E_{1,3}(\tau)\),
\[f(\tau)=E_{1,3}(\tau)=\sum_{(\alpha,\beta)\in\mathbb{Z}^{2}}q^{(\alpha^{2}- \alpha\beta+\beta^{2})}=1+6q+6q^{3}+6q^{4}+12q^{7}+.....,\]
see Example 4 of Section 2.2 for further details.
**Hirota type equations.** Paper [17] studies integrability of dispersionless Hirota-type equations in 3D, \(F(u_{x_{i}x_{j}})=0\), where \(u(x_{1},x_{2},x_{3})\) is a function of three independent variables, and \(u_{x_{i}x_{j}}\) denote second-order partial derivatives. It was shown in [11] that the 'generic' integrable Hirota master-equation is expressible via genus three theta constants. In particular, for equations of the form
\[u_{x_{3}x_{3}}-\frac{u_{x_{1}x_{2}}}{u_{x_{1}x_{3}}}-\frac{1}{6}h(u_{x_{1}x_{1 }})u_{x_{1}x_{3}}^{2}=0,\]
the integrability conditions lead to the \(\mathrm{SL}(2,\mathbb{R})\)-invariant Chazy equation for \(h(t)\)[36],
\[h_{ttt}+2hh_{tt}-3h_{t}^{2}=0,\]
whose general solution is expressed in terms of the Eisenstein series \(E_{2}\),
\[h(t)=E_{2}(it/\pi)=1-24\sum_{n=1}^{\infty}\sigma_{1}(n)\mathrm{e}^{-2nt}.\]
**Second-order quasilinear PDEs.** Paper [7] studies integrability of 3D second-order quasilinear PDEs of the form \(\sum_{i,j}f_{ij}(u_{x_{1}},u_{x_{2}},u_{x_{3}})u_{x_{i}x_{j}}=0\) where \(u(x_{1},x_{2},x_{3})\) is a function of three independent variables. In particular, for equations of the form
\[u_{xy}+(u_{x}u_{y}r(u_{t}))_{t}=0,\]
the integrability conditions result in an \(\mathrm{SL}(2,\mathbb{R})\)-invariant third-order ODE for \(r(s)\),
\[r_{sss}(r_{s}-r^{2})-r_{ss}^{2}+4r^{3}r_{ss}+2r_{s}^{3}-6r^{2}r_{s}^{2}=0,\]
which has appeared in the context of modular forms of level two [1]. Its generic solution is given by the Eisenstein series
\[r(s)=1-8\sum_{n=1}^{\infty}\frac{(-1)^{n}nw^{n}}{1-w^{n}},\quad w=\mathrm{e}^ {4s},\]
which is associated with the congruence subgroup \(\Gamma_{0}(2)\) of the modular group; see Section 2.2 for further details.
**Second-order integrable Lagrangians.** Second-order integrable Lagrangians of the form \(\int F(u_{xx},u_{xy},u_{yy})\,\mathrm{d}x\mathrm{d}y\) were investigated in [19]. Under the ansatz \(F=\mathrm{e}^{u_{xx}}g(u_{xy},u_{yy})\), the integrability conditions lead to the following constraints for \(g(z,c)\) (we set \(z=u_{xy}\), \(c=u_{yy}\)):
\[\begin{array}{c}gg_{zcc}=3g_{cc}g_{z}-2g_{zc}g_{c},\\ gg_{zzz}=g_{z}g_{zz}+4g_{zc}g-4g_{z}g_{c},\\ gg_{ccc}=g_{c}g_{cc}+2g_{cc}g_{zz}-2(g_{zc})^{2},\\ gg_{zzc}=2g_{z}g_{zc}-g_{c}g_{zz}+2gg_{cc}-2(g_{c})^{2}.\end{array} \tag{1.2}\]
This over-determined system for \(g\) is in involution and its generic solution can be represented in the form
\[g(z,c)=[\Delta(ic/\pi)]^{-1/8}\theta_{1}(ic/\pi,z)\]
where \(\Delta\) is the modular discriminant and \(\theta_{1}\) is the Jacobi theta function,
\[\theta_{1}(\tau,z)=2\sum_{n=0}^{\infty}(-1)^{n}\mathrm{e}^{\pi i(n+1/2)^{2} \tau}\sin[(2n+1)z].\]
The reason for the occurrence of modular forms in the above classification results is a remarkable construction of [31] that parametrises broad classes of dispersionless integrable
systems in 3D via generalised hypergeometric functions. For special values of the parameters (where the monodromy group of hypergeometric system is a lattice), this parametrisation leads to integrable PDEs whose coefficients are expressible via modular forms.
The structure of the paper is as follows. In Section 2 we discuss differential equations for modular forms by first describing a general construction of \(\mathrm{SL}(2,\mathbb{R})\)- and \(\mathrm{GL}(2,\mathbb{R})\)-invariant equations in terms of the differential invariants of the corresponding group actions (Section 2.1), and then illustrating the general constructions by numerous examples of modular forms on various congruence subgroups (Section 2.2). Differential systems for Jacobi forms are discussed in Section 3 by first describing differential invariants of the Jacobi group (Section 3.1) and then illustrating the general theory by several examples (Section 3.2).
## 2 Differential equations for modular forms
After reviewing differential invariants of the standard \(\mathrm{SL}(2,\mathbb{R})\)- and \(\mathrm{GL}(2,\mathbb{R})\)-actions occurring in the theory of modular forms, we provide third-order and fourth-order invariant differential equations for the Eisenstein series \(E_{4}\) and \(E_{6}\), the modular discriminant \(\triangle\), Jacobi theta constants, Eisenstein series \(E_{1,3}\), some modular forms of level two, and some newforms of weight two on various congruence subgroups.
### Differential invariants
Consider the group \(G_{k}\), which is isomorphic to \(\mathrm{SL}(2,\mathbb{R})\), of point transformations on a space with coordinates \((\tau,f)\), which is relevant for modular forms of weight \(k\),
\[\tilde{\tau}=\frac{a\tau+b}{c\tau+d},\quad\tilde{f}=(c\tau+d)^{k}f,\quad\text{ where }\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{R}).\]
Its lowest-order differential invariants are
\[I_{k}=\frac{kff^{\prime\prime}-(k+1)f^{\prime 2}}{f^{2+\frac{4}{k}}}=\frac{[f,f]_ {2}}{(k+1)f^{2+\frac{4}{k}}}, \tag{2.3}\]
\[J_{k}=\frac{k^{2}f^{2}f^{\prime\prime\prime}-3k(k+2)ff^{\prime}f^{\prime\prime} +2(k+1)(k+2)f^{\prime 3}}{f^{3+\frac{6}{k}}}=\frac{[f,\,[f,f]_{2}]_{1}}{(k+1)f^{3+ \frac{6}{k}}},\]
where \([\cdot,\cdot]_{i}\) is the \(i\)th Rankin-Cohen bracket; we follow the notation of [6, p. 53]. Any third-order \(G_{k}\)-invariant ODE can be written in terms of these two invariants as
\[F(I_{k},J_{k})=0. \tag{2.4}\]
Note that the Lie algebra \(\mathfrak{g}_{k}\) associated with \(G_{k}\) is spanned by the vector fields \(\partial_{\tau}\), \(2\tau\partial_{\tau}-kf\partial_{f}\) and \(\tau^{2}\partial_{\tau}-k\tau f\partial_{f}\). In what follows, we will also need the easily verifiable relation
\[\frac{dI_{k}}{J_{k}}=\frac{2\pi i}{k}f^{2/k}d\tau. \tag{2.5}\]
**Theorem 1**.: _The \(G_{k}\)-action on the solution space of equation (2.4) is locally transitive, that is, it possesses an open orbit._
Proof.: Let us write equation (2.4) in the form \(f_{\tau\tau\tau}=r(f,f_{\tau},f_{\tau\tau})\). The projections of the evolutionary forms1 of the three vector fields spanning \(\mathfrak{g}_{k}\) to its three-dimensional solution space,
Footnote 1: Every vector field on a jet space with coordinates \((\tau,f)\) can be prolonged uniquely to the vector field on an infinite jet space with coordinates \((\tau,f,f_{\tau},f_{\tau\tau},\dots)\), which has an equivalent evolutionary vector field \(\sum_{i=0}^{\infty}{\rm D}_{\frac{i\cdot f}{4\tau\tau}}\chi\ \partial_{\frac{i \cdot f}{4\tau\tau}}\) where \({\rm D}\) is the total derivative, see [35, Chapter 5] for rigorous details.
\[f_{\tau}\partial_{f}+f_{\tau\tau}\partial_{f_{\tau}}+r\partial_{f _{\tau\tau}},\quad(kf+2\tau f_{\tau})\partial_{f}+((k+2)f_{\tau}+2\tau f_{ \tau\tau})\partial_{f_{\tau}}+((k+4)f_{\tau\tau}+2\tau r)\partial_{f_{\tau \tau}},\] \[(\tau^{2}f_{\tau}+k\tau f)\partial_{f}+((k+2)\tau f_{\tau}+kf+ \tau^{2}f_{\tau\tau})\partial_{f_{\tau}}+((k+4)\tau f_{\tau\tau}+2(k+1)f+\tau ^{2}r)\partial_{f_{\tau\tau}},\]
are linearly independent. Indeed, the determinant
\[\left|\begin{array}{ccc}f_{\tau}&f_{\tau\tau}&r\\ kf+2\tau f_{\tau}&(k+2)f_{\tau}+2\tau f_{\tau\tau}&(k+4)f_{\tau\tau}+2\tau r\\ \tau^{2}f_{\tau}+k\tau f&(k+2)\tau f_{\tau}+kf+\tau^{2}f_{\tau\tau}&(k+4)\tau f _{\tau\tau}+2(k+1)f_{\tau}+\tau^{2}r\end{array}\right|\] \[=k^{2}ff_{\tau}f_{\tau\tau\tau}-k((k+4)f_{\tau}^{2}+2(k+1)f^{2}) f_{\tau\tau}+2(k+1)(k+2)ff_{\tau}^{2}.\]
is not \(G_{k}\)-invariant, and thus it can not be a left-hand side of the equation satisfied by \(f\).
Consider also the group \(\mathcal{G}_{k}\), which is isomorphic to \({\rm GL}(2,\mathbb{R})\), of point transformations on a space with coordinates \((\tau,f)\),
\[\tilde{\tau}=\frac{a\tau+b}{c\tau+d},\quad\tilde{f}=(c\tau+d)^{k}f,\quad \text{where}\ \left(\begin{matrix}a&b\\ c&d\end{matrix}\right)\in{\rm GL}(2,\mathbb{R}).\]
Its lowest-order differential invariants are
\[P_{k}=\frac{(k^{2}f^{2}f^{\prime\prime\prime}-3k(k+2)ff^{\prime}f^{\prime \prime}+2(k+1)(k+2)f^{\prime 3})^{2}}{(kff^{\prime\prime}-(k+1)f^{\prime 2})^{3}}= \frac{(k+1)[f,\,[f,f]_{2}]_{1}^{2}}{[f,f]_{2}^{3}},\]
\[Q_{k}=\frac{f^{2}\left(k(k{+}1)ff^{\prime\prime\prime}-4(k{+}1)(k{+}3)f^{ \prime}f^{\prime\prime\prime}+3(k{+}2)(k{+}3)f^{\prime\prime 2}\right)}{(kff^{\prime \prime}-(k+1)f^{\prime 2})^{2}}=\frac{12(k+1)^{2}f^{2}[f,f]_{4}}{(k+2)(k+3)[f,f]_{2}^{2}}.\]
Any fourth-order \(\mathcal{G}_{k}\)-invariant ODE can be written in terms of these two invariants as
\[\mathcal{F}(P_{k},Q_{k})=0. \tag{2.6}\]
Note that the Lie algebra associated with the group \(\mathcal{G}_{k}\) is spanned by the vector fields \(\partial_{\tau}\), \(\tau\partial_{\tau}\), \(\tau^{2}\partial_{\tau}-k\tau f\partial_{f}\) and \(f\partial_{f}\). Similarly to the proof of Theorem 1, one can show that the \(\mathcal{G}_{k}\)-action on the solution space of equation (2.6) is locally transitive (possesses an open orbit).
There exists a simple link between the ODEs described above. Namely, every \(G_{k}\)-invariant third-order ODEs (2.4) possesses, as its differential consequence, a \(\mathcal{G}_{k}\)-invariant fourth-order ODE (2.6); furthermore, every \(\mathcal{G}_{k}\)-invariant fourth-order ODE (2.6) arises in this way. These results are summarised in the two propositions below.
**Proposition 1**.: _Every \(G_{k}\)-invariant third-order ODE possesses, as a differential consequence, a \(\mathcal{G}_{k}\)-invariant fourth-order ODE._
Proof.: Consider a third-order \(G_{k}\)-invariant third-order ODE of type (2.4). Let us differentiate it using the formulas
\[I_{k}^{\prime}=\frac{\sqrt{P_{k}}}{k}I_{k}^{\frac{3}{2}}f^{\frac{2}{k}},\qquad J _{k}^{\prime}=\frac{k^{2}Q_{k}-6(k+2)^{2}}{k(k+1)}I_{k}^{2}f^{\frac{2}{k}},\]
which can be verified by direct calculation. This gives
\[0=F_{I_{k}}I_{k}^{\prime}+F_{J_{k}}J_{k}^{\prime}=\frac{I_{k}^{\frac{3}{2}}f^{ \frac{2}{k}}}{k}\left(F_{I_{k}}\sqrt{P_{k}}+F_{J_{k}}\frac{k^{2}Q_{k}-6(k+2)^{ 2}}{k+1}I_{k}^{\frac{1}{2}}\right).\]
Using the relation \(J_{k}=\sqrt{P_{k}}I_{k}^{\frac{3}{2}}\) and eliminating \(I_{k}\) from the pair of equations
\[F(I_{k},J_{k})=0,\qquad F_{I_{k}}\sqrt{P_{k}}+F_{J_{k}}\frac{k^{2}Q_{k}-6(k+2) ^{2}}{k+1}I_{k}^{\frac{1}{2}}=0, \tag{2.7}\]
we obtain the required \(\mathcal{G}_{k}\)-invariant fourth-order ODE of type (2.6). Note that _algebraic_ third-order ODEs (2.4) produce _algebraic_ fourth-order ODEs (2.6).
Proposition 1 is a direct corollary of a more general group-theoretic result.
**Theorem 2**.: _Let \(H\) and \(G\) be groups of point transformations of the space \((\tau,f)\) with \(H\) being a proper subgroup of \(G\). If \(H\) is a symmetry group of an ODE \(\mathcal{E}\) for \(f(\tau)\), then there is a \(G\)-invariant differential consequence of \(\mathcal{E}\)._
Proof.: Let the space on which the groups \(H\) and \(G\) are acting be coordinatised by \((\tau,f)\), their dimensions be \(m\) and \(n\), respectively, \(n>m\), and the equation \(\mathcal{E}\) take the form \(F(\tau,f,\dots,f_{r})=0\) where \(f_{i}=\frac{\mathrm{d}^{i}f}{\mathrm{d}\tau^{i}}\). The order of a lowest-order invariant \(I_{m-1}\) of \(H\) is \(m-1\), and its higher-order invariants can be obtained from \(I_{m-1}\) with the help of the \(H\)-invariant differentiation operator D, \(I_{m+k-1}=\mathrm{D}^{k}I_{m-1}\), \(k\in\mathbb{N}\). Since equation \(\mathcal{E}\) is \(H\)-invariant it can be written as \(\bar{F}_{r}(I_{m-1},\dots,I_{r})=0\) (\(r\) is necessarily greater or equal to \(m-1\)). Differentiating \(n-r\) times the equation \(\bar{F}=0\) with the help of the operator D, we obtain a system of \(n-r+1\) equations \(\bar{F}_{r+i}(I_{m-1},\dots,I_{m-1+i})=0\), \(i=0,\dots,n-r\), on \(n-r+1\) invariants \(I_{k}\) (we assume that \(n>r\), otherwise we would have to keep differentiating). Using the implicit function theorem, the lower-order \(n-r\) invariants can be excluded, which results in a single equation containing the remaining invariants \(I_{k}\), but they can be rewritten in terms of the invariants \(J_{l}\) of \(G\). Indeed, since \(H<G\), the invariants \(J_{l}\) of \(G\) are invariants of \(H\) as well, and thus \(J_{l}\) are functions of \(I_{k}\), \(J_{i}=f_{i}(I_{m-1},\dots,I_{i})\), \(i\geqslant n-1\). Thus we have a desired \(G\)-invariant differential consequence of \(\mathcal{E}\).
The generalisation of this result to systems of PDEs is straightforward.
**Proposition 2**.: _Every \(\mathcal{G}_{k}\)-invariant fourth-order ODE is a differential consequence of some \(G_{k}\)-invariant third-order ODE._
Proof.: Given a \(\mathcal{G}_{k}\)-invariant fourth-order ODE, let us seek a third-order ODE (2.4) in the form \(F(I_{k},J_{k})=J_{k}-S(I_{k})=0\) where \(S(I_{k})\) is a function to be determined. Using the relation \(J_{k}=\sqrt{P_{k}}I_{k}^{\frac{3}{2}}\), the corresponding equations (2.7) can be written as
\[P_{k}=I_{k}^{-3}S^{2}(I_{k}),\qquad Q_{k}=6\frac{(k+2)^{2}}{k^{2}}+\frac{k+1}{k ^{2}}I_{k}^{-2}S(I_{k})\frac{\mathrm{d}S(I_{k})}{\mathrm{d}I_{k}}. \tag{2.8}\]
The substitution of these relations into the fourth-order ODE, \(\mathcal{F}(P_{k},Q_{k})=0\), gives a first-order differential equation for \(S(I_{k})\) (whose general solution depends on one arbitrary constant). Thus, there is a one-parameter family of third-order ODEs with the required property.
**Corollary 1**.: _Every modular form of weight \(k\) satisfies a \(\mathrm{GL}(2,\mathbb{R})\)-invariant fourth-order ODE._
**Remark 1**.: _For a modular form \(f(\tau)\) of weight \(k\), the corresponding \(G_{k}\)-invariant third-order ODE (2.4) is necessarily algebraic, thus, there is a (singular) plane algebraic curve \(C:F(I_{k},J_{k})=0\) associated to every modular form. Formulae (2.3) provide a local parametrisation of \(C\). Note that this parametrisation is not necessarily modular since the denominators in (2.3) are not modular forms in general. In the particularly interesting case where \(f\) is a modular form of weight \(k=2\) on a suitable congruence subgroup \(\Gamma_{0}(N)\), the functions \(I_{2}\) and \(J_{2}\) become modular,_
\[I_{2}=\frac{2ff^{\prime\prime}-3f^{\prime 2}}{f^{4}},\quad J_{2}=\frac{4f^{2}f^ {\prime\prime\prime}-24ff^{\prime}f^{\prime\prime}+24f^{\prime 3}}{f^{6}},\]
_and formula (2.5) reduces to_
\[\frac{\mathrm{d}I_{2}}{J_{2}}=\pi if(\tau)\mathrm{d}\tau.\]
_It shows that, up to a constant factor, \(f(\tau)\mathrm{d}\tau\) is a pull-back of the holomorphic differential on \(C\) (see [5, Section 10.4], for an alternative derivation of a third-order ODE for modular forms of weight \(k=2\))._
_For several examples of modular forms of weight \(k\) discussed in Section 2.2, the curve \(C\) is a nodal cubic,_
\[F(I_{k},J_{k})=J_{k}^{2}+aI_{k}^{3}+bI_{k}^{2}=0,\]
_where \(a,b\in\mathbb{Q}\) are some constants (that depend on \(k\)). In view of (2.7), the corresponding \(\mathcal{G}_{k}\)-invariant fourth-order ODE takes the form_
\[\mathcal{F}(P_{k},Q_{k})=2k^{2}Q_{k}-2(k+1)P_{k}+(k+1)a-12(k+2)^{2}=0,\]
_which is a linear relation between \(P_{k}\) and \(Q_{k}\)._
**Remark 2**.: _For every modular form \(f(\tau)\) of weight \(k\), the corresponding \(\mathcal{G}_{k}\)-invariant fourth-order ODE (2.6) is necessarily algebraic. In other words, there is another (singular) plane algebraic curve \(\mathcal{C}\colon\mathcal{F}(P_{k},Q_{k})=0\) associated to every modular form, furthermore, formulae (2.3) provide a modular parametrisation of \(\mathcal{C}\) (note that \(P_{k}\) and \(Q_{k}\) are modular functions for every \(k\)). There is a natural covering map \(C\to\mathcal{C}\) defined as_
\[P_{k}=\frac{J_{k}^{2}}{I_{k}^{3}},\qquad Q_{k}=6\frac{(k+2)^{2}}{k^{2}}-\frac{ k+1}{k^{2}}\frac{J_{k}F_{I_{k}}}{I_{k}^{2}F_{J_{k}}};\]
_use the equation \(J_{k}=\sqrt{P_{k}}I_{k}^{\frac{3}{2}}\) and the second equation (2.7). We emphasise that in all examples discussed so far, the curve \(\mathcal{C}\) has been rational. Although the rationality of \(\mathcal{C}\) clearly holds for modular forms \(f(\tau)\) on genus zero congruence subgroups \(\Gamma_{0}(N)\) (in which case both \(P_{k}\) and \(Q_{k}\) become rational functions of the corresponding Hauptmodul), it also holds for congruence subgroups of higher genus; see Examples 6-10 of Section 2.2 where we derived differential equations for newforms on genus \(1\) congruence subgroups \(\Gamma_{0}(11),\Gamma_{0}(14),\Gamma_{0}(15)\), \(\Gamma_{0}(17)\), and \(\Gamma_{0}(37)\)._
### Examples
First of all, we would like to revamp some classical results on the forms \(E_{4}\), \(E_{6}\), \(\Delta\) and \(\theta\).
**Example 1: Eisenstein series \(E_{4}\) and \(E_{6}\).** It has already been mentioned that the Eisenstein series \(E_{2}\), \(E_{4}\) and \(E_{6}\) satisfy the Ramanujan system (1.1). Eliminating consequently \(E_{4}\) and \(E_{6}\) we arrive at the Chazy equation for \(E_{2}\),
\[2E_{2}^{\prime\prime\prime}-2E_{2}E_{2}^{\prime\prime}+3(E_{2}^{\prime})^{2}=0.\]
Analogously, we can arrive at the equation for \(E_{4}\), which has appeared earlier in [40, Proposition 4], and for \(E_{6}\), a calculation which was too 'distasteful' for Resnikoff to perform (without the use of computer), see [40, p. 344]. The Eisenstein series \(E_{4}\) satisfies the third-order ODE
\[\begin{split} 80E_{4}^{2}E_{4}^{\prime\prime 2}+& 120(5E_{4}^{ \prime 3}-6E_{4}E_{4}^{\prime}E_{4}^{\prime\prime})E_{4}^{\prime\prime\prime}+576E_{4 }E_{4}^{\prime 3}-20(27E_{4}^{\prime 2}+4E_{4}^{3})E_{4}^{\prime\prime 2}\\ &+200E_{4}^{2}E_{4}^{\prime 2}E_{4}^{\prime\prime}-125E_{4}E_{4}^{ \prime 4}=0,\end{split} \tag{2.9}\]
which takes a nice form
\[5J_{4}^{2}+144I_{4}^{3}-80I_{4}^{2}=0\]
in terms of the invariants of the group \(G_{4}\). In its turn, the \(\mathcal{G}_{4}\)-invariant equation for \(E_{4}\) is
\[16Q_{4}-5P_{4}-144=0.\]
The explicit ODEs for \(E_{6}\) are indeed quite long so that we only present their invariant forms,
\[343(J_{6}^{3}-216I_{6}^{3})+2(256I_{6}^{3}+7J_{6}^{2})^{2}=0,\]
and
\[(6Q_{6}-32)^{2}-7(Q_{6}-4)P_{6}=0.\]
**Example 2: Modular discriminant \(\Delta\).** The modular discriminant \(\Delta\) is a cusp form of weight \(12\) defined by the formula
\[\Delta=q\prod_{n=1}^{\infty}(1-q^{n})^{24}.\]
It can be expressed as \(\Delta=\frac{1}{1728}(E_{4}^{3}-E_{6}^{2})\), and thus we can construct a \(G_{12}\)-invariant third-order ODE satisfied by \(\Delta\), which is
\[\begin{array}{c}36\Delta^{4}\Delta^{\prime\prime\prime 2}-14(18\Delta\Delta^{ \prime\prime}-13\Delta^{\prime 2})\Delta^{2}\Delta^{\prime}\Delta^{\prime \prime\prime}+48\Delta^{3}\Delta^{\prime\prime 3}\\ +285\Delta^{2}\Delta^{\prime 2}\Delta^{\prime\prime 2}-468\Delta\Delta^{\prime 4 }\Delta^{\prime\prime}+169\Delta^{\prime 6}+48\Delta^{7}=0.\end{array} \tag{2.10}\]
Its invariant form defines an elliptic curve (equianharmonic case),
\[J_{12}^{2}+16I_{12}^{3}+27648=0.\]
Equation (2.10) was first found in [40]. On the other hand, \(\Delta\) is known [37, 39] to satisfy the \({\cal G}_{12}\)-invariant fourth-order ODE
\[2\Delta^{3}\Delta^{\prime\prime\prime\prime}-10\Delta^{2}\Delta^{\prime} \Delta^{\prime\prime\prime}-3\Delta^{2}\Delta^{\prime\prime 2}+24\Delta \Delta^{\prime 2}\Delta^{\prime\prime}-13\Delta^{\prime 4}=0, \tag{2.11}\]
whose invariant form is
\[Q_{12}=6.\]
Note that (2.11) is a differential consequence of (2.10).
**Example 3: Jacobi theta constants.** Jacobi theta constants (thetanulls) are defined as
\[\theta_{2}=\sum_{n=-\infty}^{\infty}{\rm e}^{(n-1/2)^{2}\pi i\tau},\quad \theta_{3}=\sum_{n=-\infty}^{\infty}{\rm e}^{n^{2}\pi i\tau},\quad\theta_{4}= \sum_{n=-\infty}^{\infty}(-1)^{n}{\rm e}^{n^{2}\pi i\tau}.\]
They are known to satisfy the same third-order ODE [24]
\[(\theta^{2}\theta_{\tau\tau\tau}-15\theta\theta_{\tau}\theta_{\tau\tau}+30 \theta_{\tau}^{3})^{2}+32(\theta\theta_{\tau\tau}-3\theta_{\tau}^{2})^{3}+\pi ^{2}\theta^{10}(\theta\theta_{\tau\tau}-3\theta_{\tau}^{2})^{2}=0.\]
This equation is \(G_{1/2}\)-invariant, which reflects the fact that the thetanulls are modular forms of weight \(1/2\), and can be presented as
\[J_{1/2}^{2}+16\,I_{1/2}^{3}-\frac{1}{16}I_{1/2}^{2}=0.\]
Jacobi theta constants also satisfy a nonlinear fourth-order ODE with \({\rm GL}(2,\mathbb{R})\)-symmetry,
\[\theta^{3}(\theta\theta_{\tau\tau}-3\theta_{\tau}^{2})\theta_{\tau\tau\tau\tau }-\theta^{4}\theta_{\tau\tau\tau}^{2}+2\theta^{2}\theta_{\tau}(\theta\theta_ {\tau\tau}+12\theta_{\tau}^{2})\theta_{\tau\tau\tau}+\theta^{3}\theta_{\tau \tau}^{3}-24\theta^{2}\theta_{\tau}^{2}\theta_{\tau\tau}^{2}-18\theta\theta_{ \tau}^{4}\theta_{\tau\tau}+18\theta_{\tau}^{6}=0,\]
see [34, eq. (5.5)], whose invariant form is
\[Q_{1/2}-6P_{1/2}-102=0.\]
Now we would like to consider some recently discovered systems of ODEs for modular forms on congruence subgroups of \(\mathrm{SL}(2,\mathbb{Z})\). Thus, there are Ramanujan-like systems for modular forms of level two [38], three [22, 29, 32], five [27, 28], six [30], etc. (See many more results on various relations between modular forms on congruence subgroups of \(\mathrm{SL}(2,\mathbb{Z})\) in [13].)
**Example 4: Modular forms on \(\Gamma_{0}(2)\).** Ramanani [38] found the analogue
\[\mathcal{P}^{\prime}=\frac{\mathcal{P}^{2}-\mathcal{Q}}{4},\quad\tilde{ \mathcal{P}}^{\prime}=\frac{\mathcal{P}\tilde{\mathcal{P}}-\mathcal{Q}}{2}, \quad\mathcal{Q}^{\prime}=(\mathcal{P}-\tilde{\mathcal{P}})\mathcal{Q}, \tag{2.12}\]
of the Ramanujan system for the congruence subgroup \(\Gamma_{0}(2)\) of \(\mathrm{SL}(2,\mathbb{Z})\),
\[\Gamma_{0}(2):=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})\mid c\equiv 0\ \mathrm{mod}\ 2\right\}.\]
Here, \(\mathcal{P}\) and \(\mathcal{Q}\) are normalized Eisenstein series on \(\Gamma_{0}(2)\) of weights \(2\) and \(4\), respectively, and \(\tilde{\mathcal{P}}\) is a modular form of weight \(2\),
\[\mathcal{P}(q)=1-8\sum_{n=1}^{\infty}\frac{(-1)^{n}nq^{n}}{1-q^{n}},\quad \mathcal{Q}(q)=1+16\sum_{n=1}^{\infty}\frac{(-1)^{n}n^{3}q^{n}}{1-q^{n}},\quad \tilde{\mathcal{P}}(q)=1+24\sum_{n=1}^{\infty}\frac{nq^{n}}{1+q^{n}}.\]
It was shown in [1] that \(\tilde{\mathcal{P}}\) can be presented as \(\tilde{\mathcal{P}}=\frac{3}{2}\mathcal{P}-\frac{1}{2}E_{2}\) where \(\mathcal{P}\) satisfies a third-order ODE,
\[2(\mathcal{P}^{2}-4\mathcal{P}^{\prime})\mathcal{P}^{\prime\prime\prime}+8( \mathcal{P}^{\prime\prime})^{2}-2\mathcal{P}^{3}\mathcal{P}^{\prime\prime}+(3 \mathcal{P}^{2}-4\mathcal{P}^{\prime})(\mathcal{P}^{\prime})^{2}=0,\]
and the weight-\(4\) modular form \(\mathcal{D}=\frac{1}{64}(\tilde{\mathcal{P}}^{2}-\mathcal{Q})\), which can be seen as an analogue of \(\Delta\) for the congruence group \(\Gamma_{0}(2)\), satisfies a fourth-order ODE,
\[(8\mathcal{D}^{4}\mathcal{D}^{\prime\prime}-10\mathcal{D}^{3} \mathcal{D}^{\prime 2})\mathcal{D}^{\prime\prime\prime\prime}-8\mathcal{D}^{4} \mathcal{D}^{\prime\prime\prime 2}+(10\mathcal{D}^{2}\mathcal{D}^{\prime 3}+16 \mathcal{D}^{3}\mathcal{D}^{\prime}\mathcal{D}^{\prime\prime})\mathcal{D}^{ \prime\prime\prime}\] \[-20\mathcal{D}^{3}\mathcal{D}^{\prime\prime 3}+39\mathcal{D}^{2} \mathcal{D}^{\prime 2}\mathcal{D}^{\prime\prime 2}-60\mathcal{D}\mathcal{D}^{\prime 4} \mathcal{D}^{\prime\prime}+25\mathcal{D}^{\prime 6}=0.\]
We can build on these results by showing additionally that \(\tilde{\mathcal{P}}\), which is a modular form of weight two, satisfies the third-order ODE
\[3\tilde{\mathcal{P}}^{2}\tilde{\mathcal{P}}^{\prime\prime\prime 2}-36( \tilde{\mathcal{P}}\tilde{\mathcal{P}}^{\prime}\tilde{\mathcal{P}}^{\prime \prime}-\tilde{\mathcal{P}}^{\prime 3})\tilde{\mathcal{P}}^{\prime\prime\prime}+32 \tilde{\mathcal{P}}\tilde{\mathcal{P}}^{\prime \prime 3}-3(12\tilde{\mathcal{P}}^{\prime 2}+\tilde{\mathcal{P}}^{4})\tilde{\mathcal{P}}^{ \prime\prime 2}+9\tilde{\mathcal{P}}^{3}\tilde{\mathcal{P}}^{\prime 2}\tilde{ \mathcal{P}}^{\prime\prime}-\frac{27}{4}\tilde{\mathcal{P}}^{2}\tilde{ \mathcal{P}}^{\prime 4}=0,\] \[\text{or, in invariant terms},\quad 3J_{2}^{2}+64I_{2}^{3}-12I_{2}^{2}=0,\]
the weight-\(4\) modular form \(\mathcal{Q}\) satisfies the third-order ODE
\[4\mathcal{Q}^{4}\mathcal{Q}^{\prime\prime\prime 2}-6\mathcal{Q}^{2} \mathcal{Q}^{\prime}(6\mathcal{Q}\mathcal{Q}^{\prime\prime}-5\mathcal{Q}^{ \prime 2})\mathcal{Q}^{\prime\prime\prime}+16\mathcal{Q}^{3}\mathcal{Q}^{ \prime\prime 3}+\mathcal{Q}^{2}(21\mathcal{Q}^{\prime 2}-4\mathcal{Q}^{3})\mathcal{Q}^{\prime \prime 2}\] \[-10\mathcal{Q}\mathcal{Q}^{\prime 2}(6\mathcal{Q}^{\prime 2}- \mathcal{Q}^{3})\mathcal{Q}^{\prime\prime}+25\mathcal{Q}^{\prime 6}-\frac{25}{4} \mathcal{Q}^{3}\mathcal{Q}^{\prime 4}=0,\] \[\text{or, in invariant terms},\quad J_{4}^{2}+16I_{4}^{3}-16I_{4}^{2}=0,\]
the weight-4 modular form \({\cal D}\) satisfies the third-order ODE
\[4{\cal D}^{4}{\cal D}^{\prime\prime\prime 2}-6{\cal D}^{2}{\cal D}^{ \prime}(6{\cal D}{\cal D}^{\prime\prime}-5{\cal D}^{\prime 2}){\cal D}^{\prime \prime\prime}+16{\cal D}^{3}{\cal D}^{\prime\prime 3}-{\cal D}^{2}(256{\cal D}^{3}-21{ \cal D}^{\prime 2}){\cal D}^{\prime\prime 2}\] \[+20{\cal D}{\cal D}^{\prime 2}(32{\cal D}^{3}-3{\cal D}^{\prime 2 }){\cal D}^{\prime\prime}+25{\cal D}^{\prime 6}-400{\cal D}^{3}{\cal D}^{ \prime 4}=0,\] \[\qquad\qquad\mbox{or, in invariant terms},\ \ \ J_{4}^{2}+16I_{4}^{3}-1024I_{4}^{2}=0,\]
and the weight-8 cusp form \(\tilde{\cal D}={\cal D}{\cal Q}\) satisfies the third-order ODE
\[16\tilde{\cal D}^{4}\tilde{\cal D}^{\prime\prime\prime 2}-30 \tilde{\cal D}^{2}\tilde{\cal D}^{\prime}(4\tilde{\cal D}\tilde{\cal D}^{ \prime\prime}-3\tilde{\cal D}^{\prime 2})\tilde{\cal D}^{\prime\prime \prime}+32\tilde{\cal D}^{3}\tilde{\cal D}^{\prime\prime 3}+117\tilde{\cal D}^{2}\tilde{ \cal D}^{\prime 2}\tilde{\cal D}^{\prime\prime 2}-8\tilde{\cal D}(16\tilde{\cal D}^{5}+27 \tilde{\cal D}^{\prime 4})\tilde{\cal D}^{\prime\prime}\] \[+81\tilde{\cal D}^{\prime 6}+144\tilde{\cal D}^{5}\tilde{\cal D}^{ \prime 2}=0,\] \[\qquad\qquad\qquad\mbox{or, in invariant terms},\ \ \ J_{8}^{2}+16I_{8}^{3}-496I_{8}=0.\]
Note that the last invariant equation defines an elliptic curve (lemniscatic case). A \({\cal G}_{8}\)-invariant ODE for \(\tilde{\cal D}\) is \(128Q_{8}-9P_{8}-912=0\). We refer to Remark 1 of Section 2.1 for the fourth-order \({\cal G}_{k}\)-invariant equations for the modular forms \(\tilde{\cal P},{\cal Q},{\cal D}\) (note that \({\cal P}\) is only quasi-modular).
**Example 5: Eisenstein series \(E_{1,3}\).** This modular form of weight 1 and level 3 is defined as
\[E_{1,3}(\tau)=\sum_{(\alpha,\beta)\in\mathbb{Z}^{2}}q^{(\alpha^{2}-\alpha\beta +\beta^{2})}=1+6q+6q^{3}+6q^{4}+12q^{7}+.....\]
Matsuda [29] presented several systems satisfied by \(E_{1,3}\), one of which was first derived by Huber in [22],
\[E^{\prime}_{1,3}=\frac{ME_{1,3}-N}{3},\ \ \ M^{\prime}=\frac{M^{2}-NE_{2}}{3},\ \ \ N^{\prime}=(M-E_{2}^{2})N,\]
where \(M\) and \(N\) are some functions whose specific forms are irrelevant here. Eliminating \(M\) and \(N\), we obtain a \(G_{1}\)-invariant third-order equation for \(f=E_{1,3}\),
\[f^{2}f^{\prime\prime\prime 2}-6f^{\prime}(3ff^{\prime\prime}-4f^{\prime 2})f^{ \prime\prime\prime}+18ff^{\prime\prime 3}-(f^{6}+27f^{\prime 2})f^{\prime \prime 2}+4f^{5}f^{\prime 2}f^{\prime\prime}-4f^{4}f^{\prime 4}=0, \tag{2.13}\]
whose invariant form is
\[J_{1}^{2}+18I_{1}^{3}-I_{1}^{2}=0.\]
It was also shown in [16] that \(f=E_{1,3}\) satisfies the fourth-order ODE
\[ff^{\prime\prime\prime\prime}(ff^{\prime\prime}-2f^{\prime 2})-f^{2}f^{\prime \prime\prime 2}+2f^{\prime}(ff^{\prime\prime}+4f^{\prime 2})f^{\prime\prime \prime}-9f^{\prime 2}f^{\prime\prime 2}=0, \tag{2.14}\]
whose invariant form is
\[Q_{1}-2P_{1}-36=0.\]
Finally, if one does not know a particular ODE or a system thereof satisfied by a given modular form, it is possible to construct a third-order ODE directly by choosing an appropriate (polynomial in \(I_{k}\) and \(J_{k}\)) ansatz. This is what we have done in the examples below.
**Example 6: Newform of weight 2 on \(\Gamma_{0}(11)\).** There is a unique cusp form of weight 2 on the congruence subgroup \(\Gamma_{0}(11)\),
\[f(\tau)=q\prod_{n=1}^{\infty}(1-q^{n})^{2}(1-q^{11n})^{2},\]
labelled as case 11.2.a.a in the LMFDB database [25]. It satisfies a \(G_{11}\)-invariant equation
\[J_{2}^{4}+32(I_{2}-8)(I_{2}^{2}+72I_{2}-944)J_{2}^{2}+256(I_{2}+8)(I_{2}^{3}+15 2I_{2}^{2}-704I_{2}+1168)(I_{2}-8)^{2}=0,\]
which defines a singular algebraic curve \(C\) of genus 1 with the \(j\)-invariant \(j=-2^{12}31^{3}11^{-5}\). Remarkably, this value coincides with the \(j\)-invariant of the curve \(y^{2}+y=x^{3}-x^{2}-10x-20\), which can be uniformised by the same cusp form \(f\), see [3, Table 1, case \(11B\)]. Thus, both curves are birationally equivalent via the map
\[I_{2}=-\frac{8x^{2}+8x-119}{(x-5)^{2}},\quad J_{2}=-\frac{44(4x-9)(2y+1)}{(x- 5)^{3}}.\]
The inverse transformation is
\[x=\frac{-11J_{2}^{2}+16(I_{2}-8)(9I_{2}^{2}-2468I_{2}+11712)}{64( I_{2}-83)(I_{2}^{2}-64)},\] \[y=\frac{(11I_{2}-264)J_{2}^{3}+176(I_{2}-8)(I_{2}^{3}+80I_{2}^{2} -5584I_{2}+43904)J_{2}}{512(I_{2}^{2}-64)^{2}(I_{2}-83)}+\frac{1}{2}.\]
Note that \({\rm d}I_{2}/J_{2}\) is the holomorphic differential on \(C\). By Remark 1 of Section 2.1, it equals \(\pi if(\tau){\rm d}\tau\). The form \(f(\tau)\) also satisfies the fourth-order \({\cal G}_{2}\)-invariant ODE
\[5616022359375P_{2}^{4}-2^{4}3^{5}5^{3}(34618195Q_{2}-763426383)P_ {2}^{3}\] \[-2^{8}3^{3}(173368000Q_{2}^{3}-8479136175Q_{2}^{2}+183916606320Q_ {2}-1561600055241)P_{2}^{2}\] \[+2^{12}3^{2}(64349800Q_{2}^{4}-3828348951Q_{2}^{3}+88775864253Q_{2 }^{2}-1000262056761Q_{2}+4759648412715)P_{2}\] \[+131072(4Q_{2}-63)(5329Q_{2}^{3}-204861Q_{2}^{2}+2745099Q_{2}-1403 9703)(2Q_{2}-105)^{2}=0,\]
which defines a singular rational curve \({\cal C}\) in the plane \((P_{2},Q_{2})\).
**Example 7: Newform of weight 2 on \(\Gamma_{0}(14)\).** A unique cusp form of weight 2 on the congruence subgroup \(\Gamma_{0}(14)\), labelled as 14.2.a.a in [25],
\[f(\tau)=q\prod_{n=1}^{\infty}(1-q^{n})(1-q^{2n})(1-q^{7n})(1-q^{14n}),\]
satisfies the following third-order ODE,
\[J_{2}^{4}+32(I_{2}-5)(I_{2}^{2}+49I_{2}-350)J_{2}^{2}+256(I_{2}+20)(I_{2}-4)(I_ {2}^{2}+86I_{2}-199)(I_{2}-5)^{2}=0.\]
The genus of this singular algebraic curve \(C\) is 1, the \(j\)-invariant is \(j=5^{3}11^{3}31^{3}2^{-3}7^{-6}\) and the holomorphic differential is \({\rm d}I_{2}/J_{2}\). This curve is birationally equivalent to the curve [3, Table 1, case 14D], \(y^{2}+(x+1)y=x^{3}-36x-70\), via the transformation
\[I_{2}=\frac{4x^{2}+18x-41}{(x+4)^{2}},\quad J_{2}=-\frac{28(x+11)(2y+x+1)}{(x+4 )^{3}}.\]
The curve \(\mathcal{C}\) corresponding to the fourth-order \(\mathcal{G}_{2}\)-invariant ODE for \(f(\tau)\) is rational (we do not present it explicitly due to its complexity).
**Example 8: Newform of weight 2 on \(\Gamma_{0}(15)\).** A unique cusp form of weight 2 on the congruence subgroup \(\Gamma_{0}(15)\), labelled as 15.2.a.a in [25],
\[f(\tau)=q\prod_{n=1}^{\infty}(1-q^{n})(1-q^{3n})(1-q^{5n})(1-q^{15n}),\]
satisfies the following third-order ODE,
\[J_{2}^{4}+32(I_{2}-5)(I_{2}^{2}+33I_{2}-406)J_{2}^{2}+256(I_{2}+76)(I_{2}+4)(I _{2}^{2}-10I_{2}+89)(I_{2}-5)^{2}=0.\]
The genus of this singular algebraic curve \(C\) is 1, the \(j\)-invariant is \(j=23^{3}73^{3}3^{-2}5^{-8}\) and the holomorphic differential is \({\rm d}I_{2}/J_{2}\). This curve is birationally equivalent to the curve [3, Table 1, case 15F], \(y^{2}+(x+1)y=x^{3}+x^{2}+35x-28\), via the transformation
\[I_{2}=-\frac{4x^{2}+74x+61}{(x-2)^{2}},\quad J_{2}=-\frac{180(x+3)(2y+x+1)}{(x -2)^{3}}.\]
The curve \(\mathcal{C}\) corresponding to the fourth-order \(\mathcal{G}_{2}\)-invariant ODE for \(f(\tau)\) is rational (we do not present it explicitly due to its complexity).
**Example 9: Newform of weight 2 on \(\Gamma_{0}(17)\).** A cusp form of weight 2 on the congruence subgroup \(\Gamma_{0}(17)\) labelled as 17.2.a.a in [25],
\[f(\tau)=q-q^{2}-q^{4}-2q^{5}+4q^{7}+3q^{8}-3q^{9}+O(q^{10}),\]
satisfies the third-order ODE
\[9J_{2}^{8}+96(7I_{2}^{3}+3I_{2}^{2}-1734I_{2}+21340)J_{2}^{6}\] \[+256(73I_{2}^{6}+156I_{2}^{5}-45153I_{2}^{4}+734762I_{2}^{3}-227571 0I_{2}^{2}-12691872I_{2}+175400560)J_{2}^{4}\] \[+8192(28I_{2}^{5}+993I_{2}^{4}-11044I_{2}^{3}+13213I_{2}^{2}+85452 6I_{2}+6877196)(I_{2}^{4}-30I_{2}^{3}+309I_{2}^{2}-584I_{2}+5232)J_{2}^{2}\] \[+65536(16I_{2}^{4}+1132I_{2}^{3}+13477I_{2}^{2}+91338I_{2}+212581) (I_{2}^{4}-30I_{2}^{3}+309I_{2}^{2}-584I_{2}+5232)^{2}=0,\]
which defines a singular algebraic curve \(C\) of genus 1 with the \(j\)-invariant \(j=-3^{3}11^{3}17^{-4}\). Once again, this value coincides with the \(j\)-invariant of the curve \(y^{2}+(x+1)y=x^{3}-x^{2}-x-14\), which can be uniformised by the same cusp form, see [3, Table 1, case 17C]. Thus, both curves are birationally equivalent. The holomorphic differential is again \({\rm d}I_{2}/J_{2}\).
The curve \(\mathcal{C}\) corresponding to the fourth-order \(\mathcal{G}_{2}\)-invariant ODE for \(f(\tau)\) is rational (we do not present it explicitly due to its complexity).
**Example 10: Newform of weight 2 on \(\Gamma_{0}(37)\).** A cusp form of weight 2 on the congruence subgroup \(\Gamma_{0}(37)\) labelled as \(37.2.\)a.b in [25],
\[f(\tau)=q+q^{3}-2q^{4}-q^{7}-2q^{9}+O(q^{10}),\]
satisfies a third-order ODE \(F(I_{37},J_{37})=0\) which defines a singular algebraic curve \(C\) of genus 1 with the \(j\)-invariant \(j=2^{12}3^{3}37^{-1}\). This value coincides with the \(j\)-invariant of the curve \(y^{2}-y=x^{3}-x\), which can be uniformised by the same cusp form, [3, Table 1, case 37A], compare with [41]. Thus, both curves are birationally equivalent. The holomorphic differential is \(\mathrm{d}I_{2}/J_{2}\). Note that in this example the modular curve \(\Gamma_{0}(37)\backslash\mathcal{H}\) has genus two and, according to [42], the map \(\Gamma_{0}(37)\backslash\mathcal{H}\to C\) is a two-sheeted covering.
The curve \(\mathcal{C}\) corresponding to the fourth-order \(\mathcal{G}_{2}\)-invariant ODE for \(f(\tau)\) is rational (we do not present it explicitly due to its complexity).
## 3 Differential equations for Jacobi forms
**Definition 1**.: _A Jacobi form of weight \(k\) and index \(m\) is a holomorphic function \(f\colon\mathcal{H}\times\mathbb{C}\mapsto\mathbb{C}\) with the transformation property_
\[\tilde{\tau}=\frac{a\tau+b}{c\tau+d},\quad\tilde{z}=\frac{z+\lambda\tau+\mu}{ c\tau+d},\quad\tilde{f}=(c\tau+d)^{k}\mathrm{e}^{2\pi im\left(\frac{c(z+ \lambda\tau+\mu)^{2}}{c\tau+d}-\lambda^{2}\tau-2\lambda z-\lambda\mu\right)}f,\]
_where \(\tau\in\mathcal{H}\), \(z\in\mathbb{C}\), \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})\), \(\lambda,\mu\in\mathbb{Z}\), and such that it has a Fourier expansion of the form_
\[f(\tau,z)=\sum_{\begin{subarray}{c}n=0\\ r\in\mathbb{Z},\ r^{2}\leqslant 4nm\end{subarray}}^{\infty}c(n,r)q^{n}\zeta^{r},\]
_where \(q=\mathrm{e}^{2\pi i\tau}\) and \(\zeta=\mathrm{e}^{2\pi iz}\). If \(f\) has a Fourier expansion of the same form but with \(r^{2}<4nm\) then \(f\) is called a Jacobi cusp form of weight \(k\) and index \(m\). If we drop the holomorphicity condition, the function \(f\) is called a weak Jacobi form if \(c(n,r)=0\) unless \(n\geqslant n_{0}\) for some possible negative integer \(n_{0}\)_
Classical examples of Jacobi forms are modular forms, theta series, Fourier coefficients of Siegel modular forms and the Weierstrass \(\wp\)-function. See more details in [15].
Similarly to modular forms, there is a notion of the Rankin-Cohen bracket [8] that assigns to Jacobi forms \(f_{1}\) and \(f_{2}\) of weights \(k_{1}\) and \(k_{2}\) and indices \(m_{1}\) and \(m_{2}\), respectively, a Jacobi form of weight \(k_{1}+k_{2}+2n\) and index \(m_{1}+m_{2}\),
\[[[f_{1},f_{2}]]_{n}:=\sum_{i=0}^{n}(-1)^{i}\binom{k_{1}+n-3/2}{n-i}\binom{k_{2 }+n-3/2}{i}m_{1}^{n-i}m_{2}^{i}L_{m_{1}}^{i}(f_{1})L_{m_{2}}^{n-i}(f_{2}),\]
where \(L_{m}:=8\pi im\,\partial_{\tau}-\partial_{z}^{2}\) is the heat operator. The theory of Jacobi forms features an additional family of Rankin-Cohen operators [9] parametrised by an arbitrary complex number \(X\) and a non-negative integer \(n\),
\[[f_{1},f_{2}]_{X,2n}:=\sum_{r+s+p=n}C_{r,s,p}(k_{1},k_{2})(1+m_{1} X)^{s}(1-m_{2}X)^{r}L_{m_{1}+m_{2}}^{p}(L_{m_{1}}^{r}(f_{1})L_{m_{2}}^{s}(f_{2})),\] \[[f_{1},f_{2}]_{X,2n+1}=m_{1}[f_{1},\partial_{z}f_{2}]_{X,2n}-m_{2} [\partial_{z}f_{1},f_{2}]_{X,2n}\]
where
\[C_{r,s,p}(k_{1},k_{2}):=\frac{(k_{1}+n-3/2)_{s+p}}{r!}\frac{(k_{2}+n-3/2)_{r+p} }{s!}\frac{(3/2-k_{1}-k_{2}-n)_{r+s}}{p!}\]
and \((x)_{l}=\prod\limits_{0\leqslant i\leqslant l-1}(x-i)\).
### Differential invariants of the Jacobi group
Consider the six-dimensional group \(G_{k,m}\) of point transformations called the Jacobi group and acting on a space with coordinates \((\tau,z,f)\),
\[\tilde{\tau}=\frac{a\tau+b}{c\tau+d},\quad\tilde{z}=\frac{z+\lambda\tau+\mu}{ c\tau+d}\quad\tilde{f}=(c\tau+d)^{k}\mathrm{e}^{2\pi im\left(\frac{c(z+\lambda \tau+\mu)^{2}}{c\tau+d}-\lambda^{2}\tau-2\lambda z-\lambda\mu+\kappa\right)}f,\]
where \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{R})\) and \(\lambda,\mu,\kappa\in\mathbb{R}\). In the _generic_ case
\[2kff_{zz}-8m\pi iff_{\tau}-(2k-1)f_{z}^{2}\neq 0\quad\text{and}\quad m\neq 0,\]
its second-order differential invariants are
\[L_{k,m}= \frac{1}{(2kff_{zz}-8m\pi iff_{\tau}-(2k-1)f_{z}^{2})^{2}}\Big{(} 64m^{2}\pi^{2}f^{3}f_{\tau\tau}+32m\pi if^{2}f_{z}f_{\tau z}-4k(k+1)f^{2}f_{ zz}^{2}\] \[+4(8m(k+1)\pi iff_{\tau}+(2k^{2}+k-2)f_{z}^{2})ff_{zz}-(16(2k+3)m \pi iff_{\tau}+(4k^{2}-7)f_{z}^{2})f_{z}^{2}\Big{)},\] \[M_{k,m}= \frac{4m\pi if^{2}f_{\tau z}-ff_{z}f_{zz}-(4m\pi iff_{\tau}-f_{z} ^{2})f_{z}}{(2kff_{zz}-8m\pi iff_{\tau}-(2k-1)f_{z}^{2})^{3/2}},\]
and the third-order differential invariants are
\[N_{k,m}= \frac{1}{(2kff_{zz}-8m\pi iff_{\tau}-(2k-1)f_{z}^{2})^{3}}\Big{(} 512m^{3}\pi^{3}if^{5}f_{\tau\tau\tau}-384m^{2}\pi^{2}f^{4}f_{z}f_{\tau z}-96m \pi if^{3}f_{z}^{2}f_{\tau zz}\] \[+8f^{2}f_{z}^{3}f_{zzz}-96m\pi f^{2}(2(k+2)ff_{zz}-(2k+5)f_{z}^{2 })(2m\pi ff_{\tau\tau}+if_{z}f_{\tau z})\] \[-4(k+2)f^{2}f_{zz}^{2}\left(24(k+1)m\pi iff_{\tau}-2k(k+1)ff_{zz} +3(k-1)(2k+3)f_{z}^{2}\right)\] \[-6(16(2k+3)(k+2)m\pi iff_{\tau}+(4k^{3}+8k^{2}-11k-24)f_{z}^{2}) ff_{z}^{2}f_{zz}\] \[-24(4k^{2}+16k+17)m\pi iff_{\tau}f_{z}^{4}-(8k^{3}+12k^{2}-38k-65) f_{z}^{6}\Big{)},\]
\[P_{k,m}= \frac{1}{(2kff_{zz}{-}8m\pi iff_{\tau}{-}(2k{-}1)f_{z}^{2})^{5/2}} \Big{(}16m^{2}\pi^{2}f^{4}f_{\tau zz}{+}8m\pi if^{3}f_{z}f_{\tau zz}{-}f^{2}f_{ z}^{2}f_{zz}\] \[-16m^{2}\pi^{2}f^{3}f_{z}f_{\tau\tau}{+}4(2(k{+}2)m\pi iff_{zz}{- }(2k{+}7)m\pi if_{z}^{2})f^{2}f_{\tau z}{-}2(k{+}2)f^{2}f_{z}f_{zz}^{2}\] \[-2((4m\pi i(k+2)ff_{\tau}{-}(2k+5)f_{z}^{2})ff_{z}f_{zz}{+}(4m\pi iff _{\tau}{-}f_{z}^{2})(2k{+}5)f_{z}^{3}\Big{)},\] \[Q_{k,m}= \frac{4m\pi i(ff_{\tau zz}{-}2f_{z}f_{\tau z})f^{2}{-}f^{2}(f_{z} f_{zz}{+}f_{zz}^{2}){-}(4m\pi iff_{\tau}{-}5f_{z}^{2})ff_{zz}{+}(8m\pi iff_{ \tau}{-}3f_{z}^{2})f_{z}^{2}}{(2kff_{zz}{-}8m\pi iff_{\tau}{-}(2k{-}1)f_{z}^{2} )^{2}},\] \[R_{k,m}= \frac{f^{2}f_{zzz}{-}3ff_{z}f_{zz}{+}2f_{z}^{3}}{(2kff_{zz}{-}8m \pi if_{\tau}{-}(2k{-}1)f_{z}^{2})^{3/2}}.\]
The Lie algebra of the group \(G_{k,m}\) is spanned by the vector fields
\[\langle\partial_{\tau},\ 2\tau\partial_{\tau}+z\partial_{z},\ \tau^{2}\partial_{\tau}+\tau z \partial_{z}-(2m\pi iz^{2}f+k\tau f)\partial_{f},\ \tau\partial_{z}-4m\pi izf\partial_{f},\ \partial_{z},\ f\partial_{f}\rangle.\]
Note that the invariance of the above expressions under \(G_{k,m}\)-action makes them Jacobi functions, that is, Jacobi forms or weight \(0\) and order \(0\). Furthermore, their numerators and denominators are Jacobi forms of the same weight and index. Therefore, there is an analogy between invariants of the group \(G_{k,m}\) and invariants of the groups \(G_{k}\) and \(\mathcal{G}_{k}\). The expression \(D=2kff_{zz}-8m\pi iff_{\tau}-(2k-1)f_{z}^{2}\) is a conditional invariant of the group \(G_{k,m}\) (that is, its vanishing is an invariant condition). Moreover, given a Jacobi form \(f\) of weight \(k\) and index \(m\), the function \(D\) is a Jacobi form of weight \(2k+2\) and index \(2m\). Indeed, \(mfD=[[f,f^{2}]]_{1}\), or alternatively \(D=\frac{2}{2k-1}[f,f]_{X,2}\), with the right-hand side being actually \(X\)-independent. Introducing
\[M:=\frac{[f,f^{2}]_{\frac{2k}{m(10k-3)},3}}{6k-1},\quad Q:=\frac {(10k-3)[f,M]_{X,1}}{3m^{2}(2k-1)(4k-1)},\] \[L:=\frac{\Big{[}f^{2},[f,f]_{X,2}\Big{]}_{\frac{2k}{m(8k+3)},2}+k( 32k^{3}-14k+3)Q-\frac{(4k+3)(4k+11)}{(2k-1)}[f,f]_{X,2}^{2}}{4(2k-1)(4k-1)(4k+3)},\]
we can write down all the above invariants of the \(G_{k,m}\)-action in terms of Rankin-Cohen brackets,
\[L_{k,m}=\frac{(2k-1)^{2}L}{4[f,f]_{X,2}^{2}},\quad M_{k,m}=-\frac{(10k-3)M}{3 m(2k-1)(4k-1)(6k-1)\left(\frac{2}{2k-1}[f,f]_{X,2}\right)^{3/2}},\]
\[N_{k,m}=\frac{\frac{1}{4(8k-7)}[f^{2},L]_{\frac{-2k}{m(12k+7),2}}-\frac{2(4k^ {2}+3k-1)}{(2k-1)}[f,f]_{X,2}Q-\frac{4k+23}{4(2k-1)}[f,f]_{X,2}L-\frac{2(10k-3) ^{2}}{9m^{2}(2k-1)^{2}(4k-1)}M^{2}}{(4k-1)\frac{8}{(2k-1)^{3}}[f,f]_{X,2}^{3}}\]
\[P_{k,m}=\frac{\frac{8(k+2)(10k-3)}{(2k-1)(4k-1)}\left[f,f^{2}\right]_{\frac{2k} {m(10k-3)},3}[f,f]_{X,2}-\frac{10k-3}{(4k-1)(6k+5)}\left[[f,f^{2}]_{\frac{2k}{ m(10k-3)},3},f^{2}\right]_{\frac{2k}{5m(2k+1)},2}}{3m(2k-1)(4k-1)(6k-1)(\frac{2}{2k-1}[ f,f]_{X,2})^{5/2}},\]
\[Q_{k,m}=\frac{-(2k-1)^{2}Q}{4[f,f]_{X,2}^{2}},\quad R_{k,m}=\frac{8(3k-1)[f,f^{ 2}]_{\frac{1}{4m(3k-1)},3}}{3(2k-1)(4k-1)(6k-1)(\frac{2}{2k-1}[f,f]_{X,2})^{3/2}}.\]
The expressions for the brackets with the unspecified parameter \(X\) do not involve \(X\) in their expanded forms.
Any _generic_ third-order \(G_{k,m}\)-invariant involutive PDE system (governing Jacobi forms) can be obtained by expressing all third-order invariants \(N_{k,m}\), \(P_{k,m}\), \(Q_{k,m}\), \(R_{k,m}\) as functions of the second-order invariants \(L_{k,m},M_{k,m}\), see Examples 11 and 12 where we present such systems for the weak Jacobi forms \(\varphi_{-1,2}(\tau,z)\) and \(\varphi_{-2,1}(\tau,z)\). The action of the group \(G_{k,m}\) on the six-dimensional solution space of any such system is locally transitive (possesses an open orbit).
There also exist two different _non-generic_\(G_{k,m}\)-invariant involutive PDE systems, first of which contain the equation \(D=0\), equivalently,
\[8m\pi if_{\tau}=2kf_{zz}-(2k-1)f_{z}^{2}/f;\] (3.15a) note that the substitution \[f=\mathrm{e}^{\varphi}\] reduces ( 3.15a ) to a potential Burgers equation for \[\varphi\], namely, \[8m\pi i\varphi_{\tau}=2k\varphi_{zz}+\varphi_{z}^{2}\], while the substitution \[f=\psi^{2k}\] linearises equation ( 3.15a ) to \[4m\pi i\psi_{t}=k\psi_{zz}\]. In the non-generic case the above invariants are not defined, and therefore we consider instead \[\mathcal{L}_{k,m}=\frac{1}{L_{k,m}},\quad\mathcal{M}_{k,m}=\frac{ L_{k,m}^{\frac{1}{2}}}{M_{k,m}^{\frac{2}{3}}},\quad\mathcal{N}_{k,m}=\frac{N_{k,m }}{L_{k,m}^{\frac{3}{2}}},\] \[\mathcal{P}_{k,m}=\frac{P_{k,m}}{L_{k,m}^{\frac{1}{2}}M_{k,m}},\quad \mathcal{Q}_{k,m}=\frac{Q_{k,m}}{L_{k,m}},\quad\mathcal{R}_{k,m}=\frac{R_{k,m }}{M_{k,m}}.\] In view of equation ( 3.15a ), the invariants \[\mathcal{L}_{k,m}\], \[\mathcal{Q}_{k,m}\] and \[\mathcal{R}_{k,m}\] have fixed values, \[\mathcal{L}_{k,m}=0\], \[\mathcal{Q}_{k,m}=-\frac{1}{4k}\], \[\mathcal{R}_{k,m}=\frac{1}{k}\], and thus the essential invariants are \[\mathcal{M}_{k,m}\], \[\mathcal{N}_{k,m}\] and \[\mathcal{P}_{k,m}\], which involve \[z\] -derivatives of \[f\] only and are of order 4, 6 and 5, respectively. In particular, to obtain a non-generic \[G_{k,m}\] -invariant involutive PDE system with a transitive \[G_{k,m}\] -action on the solution space, one has to add to equation ( 3.15a ) a sixth-order equation that is a function of \[\mathcal{M}_{k,m}\], \[\mathcal{N}_{k,m}\] and \[\mathcal{P}_{k,m}\]. One particular choice of this kind is \[\left(\frac{(\ln f)_{zzzz}}{(\ln f)_{zzz}}+\frac{6}{k}(\ln f)_{zz}\right)_{z}=0.\] (3.15b) The system ( 3.15 ) is in involution and admits \[G_{k,m}\] as a symmetry group. It can be represented in invariant form as \[\mathcal{L}_{k,m}=0,\quad\mathcal{M}_{k,m}^{3}(\mathcal{N}_{k,m}-2\mathcal{P}_ {k,m})-8=0,\] see Example 13 of Section 3.2 where we obtain a system of this kind for the Jacobi theta functions.
Another type of _non-generic_\(G_{k,m}\)-invariant involutive PDE systems is associated with the value \(m=0\). In this case, the invariants \(L_{k,0}\) and \(M_{k,0}\) are functionally dependent. Moreover, the group \(G_{k,0}\) is five-dimensional since the \(f\)-scalings are no longer admissible. The Lie algebra of the group \(G_{k,0}\) is spanned by the vector fields
\[\langle\partial_{\tau},\ \partial_{z},\ \tau\partial_{z},\ 2\tau\partial_{\tau}+z \partial_{z}-kf\partial_{f},\ \tau^{2}\partial_{\tau}+\tau z\partial_{z}-k\tau f \partial_{f}\rangle.\]
The invariants of the group \(G_{k,0}\) that are necessary for the exposition below are as follows
\[\mathcal{I}_{k}=\frac{(kff_{zz}-(k+1)f_{z}^{2})f_{\tau\tau}-kff_{ \tau z}^{2}+(k+1)f_{\tau}(2f_{z}f_{\tau z}-f_{\tau}f_{zz})}{f^{1+\frac{4}{k}}(kff _{zz}-(k+1)f_{z}^{2})},\] \[\mathcal{J}_{k}=\frac{(kff_{zz}-(k+1)f_{z}^{2})f_{\tau zz}-(kff_{ \tau z}-(k+1)f_{\tau}f_{z}^{2})f_{zzz}+(k+2)f_{zz}(f_{z}f_{\tau z}-f_{\tau}f_{ zz})}{f^{1+\frac{4}{k}}(kff_{zz}-(k+1)f_{z}^{2})},\] \[\mathcal{L}_{k}=\frac{f_{z}}{f^{1+\frac{1}{k}}},\quad\mathcal{M} _{k}=\frac{f_{zz}}{f^{1+\frac{2}{k}}},\quad\mathcal{N}_{k}=\frac{f_{zzz}}{f^{1 +\frac{3}{k}}},\]
see Example 14 of Section 3.2 where derive differential system for the Weierstrass \(\wp\)-function.
### Examples
Here we provide examples of nonlinear involutive third-order PDE systems that characterise Jacobi forms uniquely up to the action of the corresponding six-dimensional symmetry group \(G_{k,m}\). We refer to [2] for an alternative construction of linear modular differential equations satisfied by Jacobi forms.
**Example 11.** The weak Jacobi form \(\varphi_{-1,2}(\tau,z)\) of weight \(-1\) and index \(2\) is defined as \(\varphi_{-1,2}(\tau,z)=\Delta^{-1/8}(\tau)\vartheta_{1}(\tau,2z)\) where \(\Delta\) is the modular discriminant and \(\vartheta_{1}\) is a Jacobi theta function, see [14, formula (4.31)],
\[\vartheta_{1}(\tau,z)=2\sum_{n=0}^{\infty}(-1)^{n+1}\mathrm{e}^{\pi i\left( n+\frac{1}{2}\right)\tau}\sin((2n+1)\pi z).\]
The function \(f(\tau,z)=\varphi_{-1,2}(\tau,z)\) satisfies the following \(G_{-1,2}\)-invariant overdetermined involutive PDE system,
\[2\pi iff_{\tau\tau\tau}=2\pi if_{\tau}f_{\tau\tau}+f_{\tau\tau} f_{zz}-f_{\tau z}^{2},\] \[ff_{\tau\tau z}=3f_{z}f_{\tau\tau}-2f_{\tau}f_{\tau z},\] \[ff_{\tau zz}=8\pi i(ff_{\tau\tau}-f_{\tau}^{2})+2f_{z}f_{\tau z }-f_{\tau}f_{zz},\] \[ff_{zzz}=16\pi i(ff_{\tau z}-f_{\tau}f_{z})+f_{z}f_{zz},\]
which can be obtained from the system (1.2) by the change of variables
\[f(\tau,z)=g(\tilde{\tau},\tilde{z}),\quad\tau=\frac{i\tilde{\tau}}{\pi},\quad z =\frac{\tilde{z}}{2\pi}.\]
Invariant form of the above system is
\[N_{-1,2}=-32M_{-1,2}^{2}+L_{-1,2},\quad P_{-1,2}=-M_{-1,2},\quad Q_{-1,2}=- \frac{1}{4}(L_{-1,2}+1),\quad R_{-1,2}=2M_{-1,2}.\]
**Example 12.** The weak Jacobi form \(\varphi_{-2,1}(\tau,z)\) of weight \(-2\) and index \(1\) is defined as \(\varphi_{-2,1}(\tau,z)=\Delta^{-1/4}(\tau)\vartheta_{1}^{2}(\tau,z)\), see [14, formula (4.29)]. The function \(f(\tau,z)=\varphi_{-2,1}(\tau,z)\)
satisfies the following \(G_{-2,1}\)-invariant overdetermined involutive PDE system,
\[2\pi if^{2}f_{\tau\tau\tau}=(2ff_{\tau\tau}-f_{\tau}^{2})(2\pi if_{ \tau}+f_{zz})-f_{z}^{2}f_{\tau\tau}-2ff_{\tau z}^{2}+2f_{\tau}f_{z}f_{\tau z},\] \[\qquad\qquad\qquad\qquad f^{2}f_{\tau\tau z}=f_{z}(2ff_{\tau\tau }-f_{\tau}^{2}),\] \[f^{2}f_{\tau zz}=2\pi if(ff_{\tau\tau}-f_{\tau}^{2})+f_{z}(2ff_{ \tau z}-f_{\tau}f_{z}),\] \[f^{2}f_{zzz}=4\pi if(ff_{\tau z}-f_{\tau}f_{z})+f_{z}(2ff_{zz}-f_ {z}^{2}),\]
which can be obtained from the system (1.2) by the change of variables
\[f(\tau,z)=g^{2}(\tilde{\tau},\tilde{z}),\quad\tau=\frac{i\tilde{ \tau}}{\pi},\quad z=\frac{\tilde{z}}{\pi}.\]
Invariant form of the above system is
\[N_{-2,1}=-32M_{-2,1}^{2}+2L_{-2,1}+1,\quad P_{-2,1}=0,\quad Q_{-2,1}=-\frac{1}{8}(L_{-2,1}+1),\quad R_{-2,1}=M_{-2,1}.\]
**Example 13.** The Jacobi theta function \(\vartheta_{1}\) is a Jacobi form of index \(\frac{1}{2}\) and weight \(\frac{1}{2}\). It satisfies the heat equation
\[4\pi i(\vartheta_{1})_{\tau}=(\vartheta_{1})_{zz},\] (3.16a) which coincides with equation ( 3.15a ) for \[k=m=\frac{1}{2}\], as well as a sixth-order equation involving \[z\] -derivatives of \[\vartheta_{3}\] only, \[\left(\frac{(\ln\vartheta_{1})_{zzzz}}{(\ln\vartheta_{1})_{zzz}}+12(\ln \vartheta_{1})_{zz}\right)_{z}=0,\] (3.16b) which coincides with ( 3.15b ) for \[k=\frac{1}{2}\]. The system ( 3.16 ) is in involution and admits the Jacobi group \[G_{\frac{1}{2};\frac{1}{2}}\] as a symmetry group. It also holds for Jacobi theta functions \[\vartheta_{2}\], \[\vartheta_{3}\] and \[\vartheta_{4}\] due to the fact that the transformations between different theta functions belong to the group \[G_{\frac{1}{2},\frac{1}{2}}\]. In a somewhat different form, the system ( 3.16 ) for Jacobi theta functions was obtained in [11, Example 4.1]. We also refer to [4, 36] for other differential systems satisfied by the Jacobi theta functions.
The system (3.16) can be written in invariant form as
\[\mathcal{L}_{\frac{1}{2},\frac{1}{2}}=0,\quad\mathcal{M}_{\frac{ 1}{2},\frac{1}{2}}^{3}(\mathcal{N}_{\frac{1}{2},\frac{1}{2}}-2\mathcal{P}_{ \frac{1}{2},\frac{1}{2}})-8=0.\]
**Example 14.** The Weierstrass \(\wp\)-function is a Jacobi form of weight \(2\) and index \(0\),
\[\wp(\tau,z)=\frac{1}{z^{2}}+\sum_{\omega\in L/\{0\}}\left(\frac{ 1}{(z-\omega)^{2}}-\frac{1}{\omega^{2}}\right),\]
where the lattice \(L=\mathbb{Z}+\mathbb{Z}\tau\). Using differentiation rules from [4, Formulae (37), (100) and (101)], we find that the Weierstrass \(\wp\)-function satisfies the involutive \(G_{2,0}\)-invariant system
of differential equations
\[\wp_{\tau\tau}=\frac{\frac{1}{16}\phi+3\pi^{2}(2\wp\wp_{\tau z}^{2}-6 \wp_{\tau}\wp_{z}\wp_{\tau z}+3\wp_{\tau}^{2}\wp_{zz})}{3\pi^{2}(2\wp\wp_{zz}-3 \wp_{z}^{2})},\] \[\wp_{\tau zz}=\frac{\frac{i}{4}\phi+\pi(36\wp\wp_{z}(2\wp\wp_{ \tau z}-3\wp_{\tau}\wp_{z})-12\wp_{zz}(\wp_{z}\wp_{\tau z}-\wp_{\tau}\wp_{zz} ))}{3\pi(2\wp\wp_{zz}-3\wp_{z}^{2})},\] \[\wp_{zzz}=12\wp\wp_{z},\quad\text{where}\quad\phi:=16\wp_{zz}^{ 3}-72\wp^{2}\wp_{zz}^{2}-216\wp\wp_{z}^{2}\wp_{zz}+54\wp_{z}^{4}+864\wp^{3} \wp_{z}^{2}.\]
Given \(\wp_{z}^{2}=4\wp^{3}-g_{2}(\tau)\wp-g_{3}(\tau)\) where \(g_{2}(\tau)=\frac{4}{3}\pi^{4}E_{4}(\tau)\) and \(g_{3}(\tau)=\frac{8}{27}\pi^{6}E_{6}(\tau)\), we obtain \(\phi=-2(g_{2}^{3}-27g_{3}^{2})=-2(2\pi)^{12}\Delta(\tau)\). Note that the first of the above equations can be written in a symmetric Monge-Ampere form,
\[2\wp(\wp_{zz}\wp_{\tau\tau}-\wp_{z\tau}^{2})=3\wp_{z}^{2}\wp_{\tau\tau}-6 \wp_{\tau}\wp_{z}\wp_{\tau z}+3\wp_{\tau}^{2}\wp_{zz}-\frac{(2\pi)^{10}}{6} \Delta(\tau).\]
The invariant form of the above system is
\[\mathcal{N}_{2}=12\mathcal{L}_{2},\quad\mathcal{J}_{2}=-4\pi i \mathcal{I}_{2},\] \[24\pi^{2}(2\mathcal{M}_{2}-3\mathcal{L}_{2}^{2})\mathcal{I}_{2} =8\mathcal{M}_{2}^{3}-36\mathcal{M}_{2}^{2}-108\mathcal{L}_{2}^{2} \mathcal{M}_{2}+27\mathcal{L}_{2}^{2}(\mathcal{L}_{2}^{2}+16).\]
## 4 Concluding remarks
* The results of this paper can be extended to other types of modular forms (such as Siegel modular forms, Picard modular forms, etc), namely, every modular form \(f\) on a discrete subgroup \(\Gamma\) of a Lie group \(G\) should solve a nonlinear PDE system \(\Sigma\) such that:
* system \(\Sigma\) is involutive (compatible);
* system \(\Sigma\) is of finite type (has finite-dimensional solution space);
* system \(\Sigma\) is \(G\)-invariant, furthermore, the Lie group \(G\) acts on the solution space of \(\Sigma\) locally transitively and with an open orbit (thus, the dimension of the solution space of \(\Sigma\) equals \(\dim G\));
* the modular form \(f\) is a generic solution of system \(\Sigma\) (\(f\) belongs to the open orbit), in particular, solution \(f\) has discrete stabiliser \(\Gamma\);
* system \(\Sigma\) is expressible via algebraic relations among differential invariants of a suitable action of \(G\). In the case of classical modular forms \(f\) considered in this paper, we have: \(\Gamma=\mathrm{SL}(2,\mathbb{Z})\), \(G=\mathrm{SL}(2,\mathbb{R})\), and system \(\Sigma\) is a third-order nonlinear \(\mathrm{SL}(2,\mathbb{R})\)-invariant ODE for \(f\). In particular, involutive differential systems for Siegel modular forms should be based on differential invariants of the symplectic group \(\mathrm{Sp}(2g)\). Some results in this direction are already available, thus, differential systems for theta constants were discussed in [33] (genus \(g=2\)) and [43] (general \(g\)).
* Although modular forms provide _generic_ solutions of the ODEs discussed in this paper, the same ODEs possess _non-generic_ rational solutions which may also be of interest. Thus, as already mentioned in the introduction, the integrability condition for the Lagrangian density \(u_{x}u_{y}f(u_{t})\) is the fourth-order ODE (2.14) for \(f(\tau)\). The generic solution of this ODE is the Eisenstein series, \(f(\tau)=E_{1,3}(\tau)\), however, it also possesses a simple non-generic solution \(f(\tau)=\tau\) which corresponds to the Lagrangian density \(u_{x}u_{y}u_{t}\) (with interesting properties, see [18]). Same applies to differential systems for Jacobi forms.
* Every third-order ODE with \(G_{k}\) symmetry can be linearised by a standard procedure as discussed, e.g., in [10]: the general solution \(f(\tau)\) of any such equation can be represented parametrically as \[\tau=\frac{\tilde{w}}{w},\qquad f=\frac{w^{k}}{W^{k/2}}\] (4.17) where \(w(s)\) and \(\tilde{w}(s)\) are two linearly independent solutions of a second-order linear equation \(w_{ss}+pw_{s}+qw=0\), and \(W=\tilde{w}_{s}w-w_{s}\tilde{w}\) is the Wronskian of \(w\) and \(\tilde{w}\). Here the coefficients \(p(s)\) and \(q(s)\) depend on the third-order ODE and can be efficiently reconstructed. The details are as follows. Differentiating the second equation (4.17) with respect to \(\tau\) using the relations \(\frac{ds}{d\tau}=\frac{w^{2}}{W}\) and \(W_{s}=-pW\), one obtains \[I_{k}=(2\pi ik)^{2}\delta\quad\text{and}\quad J_{k}=(2\pi ik)^{3}\delta_{s}\] where \(\delta=\frac{1}{2}p_{s}-q+\frac{1}{4}p^{2}\) and \(I_{k},J_{k}\) are the invariants from Section 2.1. Thus, for a given third-order ODE \(F(I_{k},J_{k})=0\), one has to choose the coefficients \(p(s),\,q(s)\) such that \(F((2\pi ik)^{2}\delta,\,(2\pi ik)^{3}\delta_{s})=0\). This linearisation procedure leads to familiar parametrisations of modular forms by hypergeometric functions.
## Acknowledgements
We thank F. Clery, G. van der Geer, M. Pavlov, R.O. Popovych, A. Prendergast-Smith, F. Stromberg, A. Veselov, C. Wuthrich and V. Zudilin for useful discussions. The research of SO was supported by the NSERC Postdoctoral Fellowship program.
|
2309.06860 | PSF-based Analysis for Detecting Unresolved Wide Binaries | Wide binaries play a crucial role in analyzing the birth environment of stars
and the dynamical evolution of clusters. When wide binaries are located at
greater distances, their companions may overlap in the observed images,
becoming indistinguishable and resulting in unresolved wide binaries, which are
difficult to detect using traditional methods. Utilizing deep learning, we
present a method to identify unresolved wide binaries by analyzing the
point-spread function (PSF) morphology of telescopes. Our trained model
demonstrates exceptional performance in differentiating between single stars
and unresolved binaries with separations ranging from 0.1 to 2 physical pixels,
where the PSF FWHM is ~2 pixels, achieving an accuracy of 97.2% for simulated
data from the Chinese Space Station Telescope. We subsequently tested our
method on photometric data of NGC 6121 observed by the Hubble Space Telescope.
The trained model attained an accuracy of 96.5% and identified 18 wide binary
candidates with separations between 7 and 140 au. The majority of these wide
binary candidates are situated outside the core radius of NGC 6121, suggesting
that they are likely first-generation stars, which is in general agreement with
the results of Monte Carlo simulations. Our PSF-based method shows great
promise in detecting unresolved wide binaries and is well suited for
observations from space-based telescopes with stable PSF. In the future, we aim
to apply our PSF-based method to next-generation surveys such as the China
Space Station Optical Survey, where a larger-field-of-view telescope will be
capable of identifying a greater number of such wide binaries. | You Wu, Jiao Li, Chao Liu, Yi Hu, Long Xu, Tanda Li, Xuefei Chen, Zhanwen Han | 2023-09-13T10:09:38Z | http://arxiv.org/abs/2309.06860v1 | # PSF-based Analysis for Detecting Unresolved Wide Binaries
###### Abstract
Wide binaries play a crucial role in analyzing the birth environment of stars and the dynamical evolution of clusters. When wide binaries are located at greater distances, their companions may overlap in the observed images, becoming indistinguishable and resulting in unresolved wide binaries, which are difficult to detect using traditional methods. Utilizing deep learning, we present a method to identify unresolved wide binaries by analyzing the point-spread function (PSF) morphology of telescopes. Our trained model demonstrates exceptional performance in differentiating between single stars and unresolved binaries with separations ranging from 0.1 to 2 physical pixels, where the PSF FWHM is \(\sim\)2 pixels, achieving an accuracy of 97.2% for simulated data from the Chinese Space Station Telescope. We subsequently tested our method on photometric data of NGC 6121 observed by the Hubble Space Telescope. The trained model attained an accuracy of 96.5% and identified 18 wide binary candidates with separations between 7 and 140 au. The majority of these wide binary candidates are situated outside the core radius of NGC 6121, suggesting that they are likely first-generation stars, which is in general agreement with the results of Monte Carlo simulations. Our PSF-based method shows great promise in detecting unresolved wide binaries and is well suited for observations from space-based telescopes with stable PSF. In the future, we aim to apply our PSF-based method to next-generation surveys such as the China Space Station Optical Survey, where a larger-field-of-view telescope will be capable of identifying a greater number of such wide binaries.
\({}^{1}\) Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, People's Republic of China; \({}^{2}\) National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, People's Republic of China; \({}^{3}\) Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, People's Republic of China; \({}^{4}\) Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China
\({}^{5}\) Yunnan Observatories, CAS, P.O. Box 110, Kunming 650011, Yunnan, People's Republic of China
\({}^{6}\) Center for Astronomical Mega-Science, Chinese Academy of Science, Beijing 100012, People's Republic of China Received 2022 September 1; revised 2023 July 7; accepted 2023 July 20; published 2023 September 11
Binary stars (154); Globular star clusters (656); Convolutional neural networks (1938); Photometry (1234)
## 1 Introduction
Binary systems are a ubiquitous product of the star formation process. Roughly half of all main-sequence (MS) stars exist in multiple systems, in which two or more stellar components are commonly involved (Duquennoy & Mayor, 1991; Raghavan et al., 2010; Moe & Di Stefano, 2017). The orbital separation of these systems is distributed over a wide range (Duquennoy & Mayor, 1991; Duchene & Kraus, 2013). A complete statistical analysis for companions to solar-type stars was presented by Raghavan et al. (2010), which revealed that almost half of solar-type stars within 25 pc of the Sun have companions and the median of period distribution is \(\sim\)300 yr. Tokovinin & Lepine (2012) investigated the multiplicity of solar-type dwarfs within 67 pc of the Sun and found that the proportion of binaries with a separation between 2 and 64 kau is higher than 4.4%.
Wide binaries, being weakly gravitationally bound, provide a sensitive probe for astrophysics to study the Galactic dynamical evolution (e.g., Weinberg et al., 1987; Jiang & Tremaine, 2010; Deacon & Kraus, 2020). Since the components of wide binaries are not expected to interact with each other during their life, they can be thought to have evolved in isolation, and tracing their origins can contribute to our understanding of the environment in which stars were born (Retterer & King, 1982; Sterzik et al., 2003). The current mechanisms of wide binary formation support the idea that the components of wide binaries are essentially coeval with similar chemical abundances (e.g., Kouwenhoven et al., 2010; Moeckel & Clarke, 2011; Tokovinin, 2017; Hawkins et al., 2020; Nelson et al., 2021), which allows them to contain information that places potent constraints on stellar physics (Andrews et al., 2018). For example, wide binaries have been utilized to calibrate metallicity indicators (e.g., Bonfils et al., 2005; Rojas-Ayala et al., 2010) and age-rotation relations (e.g., Barnes, 2007; Chaname & Ramirez, 2012; Goddy-Rivera & Chaname, 2018) and to constrain the initial-to-final mass relation for white dwarfs (e.g., Catalan et al., 2008; Andrews et al., 2015; Barrientos & Chaname, 2021). In addition, the properties of wide binaries in clusters are decisive for investigating the initial density and dynamical processes of clusters (e.g., Scally et al., 1999; Parker et al., 2009).
Currently, several primary techniques are used to identify binaries. The first method is by measuring their radial velocity variability (e.g., Duquennoy & Mayor, 1991; Nidever et al., 2002; Geller et al., 2015), which is affected by the eccentricity of the orbit and is only applicable to short-period binaries. The second method derives from the photometric variables (e.g., Carney, 1983; Strassmeier et al., 1989; Prsa et al., 2011). One can deduce whether two stars orbit each other in a tight orbit by
observing how light varies periodically with time, but this method is affected by the orbital inclination and tends to find binaries with short periods. The third approach is feasible only for binaries with the largest separations, i.e., by measuring their common proper motion across the sky, provided that the components can be resolved and distinguished in the observational data. (e.g., Luyten 1971; Wasserman & Weinberg 1991; Chaname & Gould 2004; Oh et al. 2017; El-Badry & Rix 2018; Hartman & Lepine 2020). The fourth method, employing the astrometric technique, involves accurately measuring the positions and motions of stars in the sky. In instances where an unseen companion star is present, the observable star exhibits a periodic wobble in its motion, referred to as the astrometric signature. By analyzing this signature, we can deduce the existence of the companion star and determine the properties of the binary system (Halbwachs et al. 2023; Penoyre et al. 2022a, 2022b). In particular, for clusters, there is a statistical method to investigate the populations of binaries by analyzing binary sequences located on the red side of the single MS (e.g., Romani & Weinberg 1991; Bellazzini et al. 2002; Richer et al. 2004; Sollima et al. 2007a; Milone et al. 2009). This method is independent of orbital period and inclination but is affected by photometric accuracy (Milone et al. 2012).
For wide binaries, neither the variables of radial velocity nor photometric variables can be easily observed owing to their long period (\(P_{\rm orb}\) \(\sim\) 10\({}^{2}\)-10\({}^{6}\) yr or more). In situations where wide binaries are located at significant distances from us, the components might overlap in the observed images, resulting in the appearance of a single point source, despite the fact that they are considerably separated from each other.
Consider, for instance, a wide binary comprising two equal-mass components of 1 \(M_{\odot}\) each, with a 100 yr orbital period and a semimajor axis of 27.14 au. If this binary were located at a distance of 1 kpc, it would not be spatially resolved as two isolated points by a telescope with a spatial resolution limit of \(R>27\) mas. This limitation arises from the spatial resolution capabilities of telescopes, which are fundamentally determined by diffraction effects and constrain the ability to distinguish two close objects. When the angular separation between two stars in a binary is smaller than the spatial resolution limit of a telescope, the light from each star is combined, resulting in the binary appearing as an unresolved point source in astronomical observations. Therefore, it is impossible to identify such wide binaries by proper motions and astrometric parallaxes. Several methods are available to identify unresolved binaries, e.g., the direct spectral detection method (Gullikson et al. 2016), diffraction-limited imaging (Hubrig et al. 2001), and a Bayesian model in color-magnitude space (Widmark et al. 2018), but they are based on, e.g., spectral analysis, searching for specific types of systems, or analysis of the color-magnitude diagram (CMD), none of which is performed directly on photometric images. Furthermore, these techniques may require multiple observations and extensive telescope time to achieve. The development of appropriate approaches to detect unresolved binaries is still a challenge that remains to be addressed. Indeed, determining whether a point source in the photometric image is a single star or not carries significant importance, as this enables us to better understand the multiplicity of stars and assign additional observation constraints on the binary distributions.
With the advent of deep learning, a data-driven approach to image analysis has been implemented that relies not on manual extraction of features but on automatic learning of deep abstract features by machines (LeCun et al. 2015; Goodfellow et al. 2016). Deep learning has been proven to be a significant success in computer vision tasks such as identification, generation, and classification (Voulodimos et al. 2018). It is also proving to be outstanding in many areas of astronomy. For example, deep learning has been applied to searching special objects from spectral surveys (e.g., Parks et al. 2018; Shallue & Vanderburg 2018), determining the stellar parameters (e.g., Fabbro et al. 2018; Leung & Bovy 2019), detecting extreme-ultraviolet waves from solar bursts (e.g., Xu et al. 2020), and analyzing the morphological structures of galaxies (e.g., Dominguez Sanchez et al. 2018; Barchi et al. 2020). Considering the advantages of deep learning in representative feature extraction, this could be a promising tool to provide a solution to study unresolved pairs by further learning multiple levels of data representations.
Another crucial consideration in the analysis of unresolved binaries is how to obtain high-quality images. During the process of imaging astronomical data, there are varying degrees of distortion owing to the effects of the point-spread function (PSF), photoelectric noise, background noise, etc. (Starck & Murtagh 2013). The PSF dominates the morphology of astronomical images and is generally influenced by factors such as aberration, diffraction limit, and atmospheric disturbances (Racine 1996). Ground-based telescope imaging is most affected by the atmospheric disturbance that adds stochasticity to the PSF (Perrin et al. 2003). Although adaptive optics systems can help with wave front correction (Beckers 1993), the loss of detail due to image degradation cannot be entirely avoided in view of the limitations and complexity of the system. (Davies & Kasper 2012). Space-based telescopes, on the contrary, are deployed above the atmosphere to avoid atmospheric disturbances, and their PSF is mainly influenced by the diffraction limit of the instrument itself, resulting in high photometric accuracy and image quality with a good PSF shape. To specifically distinguish between unresolved pairs and individual luminous stars, a sufficient number of high-quality images are required. Such photometric data are available from large surveys of space-based observatories with high spatial resolution imaging.
One prospective mission that will achieve this goal will be the Chinese Space Station Telescope (CSST; Zhan 2011; Cao et al. 2018). CSST is a space telescope with an aperture of 2 m, scheduled for launch in 2024. It is capable of conducting photometric surveys and slitless grating spectroscopic surveys over an area of 17,500 deg\({}^{2}\) of sky, with wavelength coverage ranging from near-ultraviolet to near-infrared (255-1000 nm). CSST has a large field of view of \(\sim\)1 deg\({}^{2}\) with a high spatial resolution of \(\sim\)0\(\aas@@fstack{\prime\prime}\)15 (80% energy concentration region). The survey instruments will obtain billions of photometric data of stars and galaxies, as well as hundreds of millions of spectra over a 10 yr period. We therefore expect that an abundant sample of wide binaries would be identified from the high spatial resolution images by performing an analysis of CSST data.
The primary goal of this study is to develop a valuable and computationally efficient method for detecting unresolved wide binaries. We present the first application of a deep-learning approach for this purpose, analyzing photometric images from space-based observatories. This work serves as a preparatory step toward future programs with the CSST. In this regard, we
conduct experiments on simulated data from CSST and assess the feasibility of the proposed method using real observational data obtained from the Hubble Space Telescope (HST). Our analysis in this work involves the robustness and limitations of unresolved binaries detection, which also prepares for the next generation of surveys.
The remainder of this paper is structured as follows. In Section 2, we outline the methods used for detecting wide binaries, including the construction of the training set, as well as the implementation and optimization of the deep-learning model. Section 3 presents the performance of our model for CSST and details the experiments of the factors that affect the model. We validate our approach in HST data and present a discussion in Section 4, in which we describe how to generate simulated HST data with the PSF and show the results of a search for wide binaries in NGC 6121 using our method. Finally, in Section 5, we conclude with a brief summary, followed by a discussion of future prospects.
## 2 Method
### Contamination of Chance Alignments
Before discussing the detection of wide binaries, it is necessary to assess the contamination from chance alignments that affects the feasibility essentially. Chance alignments are not physically bound binaries, whose effects have been considered in many investigations searching for samples of wide binaries (e.g., Hawkins et al., 2020; Tian et al., 2020; El-Badry et al., 2021). El-Badry et al. (2021) used two methods to estimate the contamination of chance alignments and show that their contribution is dominant when binary separations are greater than 30,000 au. Despite that these estimations do not take into account the case of chance alignments with closer angular separations, it is apparent that the contamination rate increases with increasing binary separations (see Figure 3 of El-Badry et al., 2021).
In our study, chance alignments refer to unresolved stars in which two stars happen to be in the same line of sight and overlap each other in the image. To statistically estimate their contamination, we designed an experiment based on the observational characteristics of CSST. We generated varying numbers of single stars at regular intervals in a 1 deg\({}^{2}\) patch of the sky (equivalent to CSST's field of view) using Monte Carlo simulations, ranging from 10\({}^{6}\) to 10\({}^{7}\) stars. We assumed that any two stars within 0\(\farcs\)15 are chance alignments, which is in accordance with the spatial resolution of CSST. In other words, CSST can only distinguish two stars if their angular separation is greater than 0\(\farcs\)15. This conservative assumption estimates the upper limit of chance alignment. The reason for making this assumption is that our ultimate goal is to detect unresolved binary stars utilizing CSST, so we need to take into account the effect of chance alignments among these unresolved binary stars. We defined the contamination rate as the number of chance alignments divided by the total number of stars. We performed 10 experiments for each stellar density and took the average as the contamination rate, which was plotted in Figure 1, along with the standard deviation as the error bars.
Clearly, the contamination rate demonstrates a positive correlation with the increase in stellar density, exhibiting a linear relationship. It is important to note that the error bars for the contamination rate are quite small, indicating a high level of confidence in the observed trend. At a stellar density of 10 million stars per square degree of sky, the contamination rate reaches approximately 2.5%. Meanwhile, for the stellar density \(<\)10\({}^{6}\) deg\({}^{-2}\), the contamination rate of chance alignments remains below 0.3%.7 This trend indicates that a higher stellar density can affect the accuracy of identifying physically bound binaries to a certain extent, while the contamination from chance alignments remains relatively insignificant at lower stellar densities. Consequently, it is crucial to account for the contamination rates arising from chance alignments when analyzing binaries in regions of varying stellar densities.
Footnote 7: In order to provide a reference for the scale of stellar density in different regions of the Milky Way, we used Gaia DR3 data to roughly calculate the stellar density per square degree of sky in various locations across the Milky Way disk and halo. Based on these estimates, the stellar density in the vicinity of the Milky Way disk is approximately 10\({}^{6}\) deg\({}^{-2}\), whereas in the high galactic latitude regions of the halo the stellar density is less than 10\({}^{6}\) deg\({}^{-2}\).
### Data Set Generation for CSST
Obtaining a sufficient number of high-quality training samples is crucial for the successful implementation of deep-learning approaches. The CSST has a multicolor imaging capability covering a wavelength range from 0.255 to 1.0 \(\mu\)m, consisting of eight bands in total. Generally, shorter wavelengths yield higher spatial resolutions of a telescope, while longer wavelengths result in lower resolutions. As our method is based on the analysis of one-band images and this study is primarily aimed at validation, we chose to utilize the PSF of \(u\) band (effective wavelength approximately 463.1 nm) for generating mock point-source images in our experiments. The FWHM of the PSF for the \(u\) band in the CSST is approximately 0\(\farcs\)15. The original CSST PSF is represented by a \(256\times 256\) matrix (see Figure 2(a)). To generate higher-quality training samples, we subdivided each pixel of the original CSST PSF into smaller subpixels, resulting in a PSF model with a more detailed structure. Convolution operations are then applied to the PSF model, thereby enabling the simulation of point-source imaging at arbitrary locations, including subpixel locations. The resulting virtual point-source images exhibited increased
Figure 1: Comparison of contamination rate of chance alignments and total number of stars for 1 deg\({}^{2}\) of sky. The black circles represent the contamination rates at each stellar density. The line plot formed by connecting these circles with a black line highlights the trend of the data. The error bars in the upper left region of the figure represent the maximum and minimum standard deviations of the contamination rates.
fidelity, which can be used as high-quality training samples for deep learning.
Our data sets contain two classes, namely, single stars and binaries. For the single-star class, we generated each mock image by convolving the PSF model with a point source, which is positioned at diverse locations throughout the image to ensure that the generated point sources can appear anywhere, not exclusively at the center. Each point source is assigned a flux, a variable that governs the signal-to-noise ratio (S/N). To prevent introducing implicit bias into the model during the training process, the fluxes8 of single stars follow a uniform distribution in the range of 1000 to 25,000. In terms of background estimation, some studies, such as the one conducted by He et al. (2021), utilize generative adversarial networks to estimate backgrounds in wide-field observational
Figure 2: Examples of the PSF and simulated images for CSST. (a) Log image of the original PSF for the wavelength of 463.1 nm. (b) Log image of a single star and its corresponding RGB value matrix (flux of star: \(F=10,000\)). (c) Log image of a binary with 0.5 physical pixel separation and its corresponding RGB value matrix (fluxes of the two stars: \(F_{1}=F_{2}=5000\)). (d) Log image of a binary with 1 physical pixel separation and its corresponding RGB value matrix (fluxes of the two stars: \(F_{1}=F_{2}=5000\)).
images. However, our methodology differs owing to our focus on individual sources within relatively smaller regions, which significantly simplifies the task of background estimation. Specifically, for our CSST mock images, we include both Gaussian noise and Poisson noise as part of the backgrounds in the generation process. This approach is intended to better approximate the conditions of real-world observations, enhancing both the robustness and the generalizability of our model and facilitating a more convenient analysis of the \(\mathrm{S/N}\). Here the \(\mathrm{S/N}\) is defined as
\[\mathrm{S/N}(F,\,\sigma)=\frac{\sum_{i}^{\eta}(F_{i}+\eta_{i})}{\sqrt{\sum_{i} ^{\eta}F_{i}+\sigma^{2}}},\,\eta\sim N(0,\,\sigma^{2}), \tag{1}\]
where \(F_{i}\) is the photon flux of pixels and \(\eta_{i}\) is the noise of pixel \(i\) that obeys a Gaussian distribution with a mean of 0 and standard deviation of \(\sigma\). In Section 3.2, we conduct an experiment to compare the impact of varying \(\mathrm{S/N}\) on model performance.
For the class of binaries, two point sources are generated in each image, with each point source being assigned a flux value and a set of coordinates, following a process similar to the one described for single stars. By freely adjusting the positions of each point source, we can obtain simulated images of two point sources' various separations. Due to the pixel size and spatial resolution of the photometric instrument in CSST, approximately 0\(\farcs\)075 and 0\(\farcs\)15, respectively, stars within 2 pixels9 of each other are unresolvable. Furthermore, Hu et al. (2011) employed the artificial-star test technique, demonstrating that a separation of 2 pixels between stars is the minimum distance required for them to be detected as separate objects by the photometry. Consequently, we set the maximum separation for binary stars at 2 pixels and the minimum separation at 0.1 pixels.10 The minimum separation is chosen to account for the significant uncertainty in detecting binaries at such close proximity.
Footnote 9: In our study, the pixels we refer to are physical pixels, which are the smallest discrete elements of the imaging detector used to measure separation. The physical pixel is designed to match the spatial resolution of the imaging system, but it is important to note that it does not determine this resolution itself. The spatial resolution of a space telescope is fundamentally dictated by the diffraction limit, which is determined by the diameter of the mirror and the wavelength of the observed light.
Footnote 10: The separation referred to here is the distance between the centers of two stars after imaging. Within a spatial region represented by 1 pixel, multiple objects or structures can be contained. For CSST, 1 pixel is 0\(\farcs\)075 in size. Therefore, when the distance between two stars is less than 1 pixel, e.g., 0.1 pixel, it implies that the angular separation of the two stars is 0\(\farcs\)0075. Consequently, the imaging system captures both stars within the same pixel, resulting in a blending of image information that makes it arduous to distinguish between them.
To eliminate the model bias caused by an uneven distribution of binary separations, which can result in a model being biased toward predicting more frequent values and performing poorly for less frequent values during training, we balanced the binary separation to be uniformly distributed over a range from 0.1 to 2 physical pixels in steps of 0.1 pixels. This ensured that the model was exposed to a variety of binaries with a diverse range of separations, enabling it to effectively learn their characteristics. Additionally, we have carefully controlled the total flux of binaries to be uniformly distributed within the same range as that of the single-star class, which is set between 1000 and 25,000. This minimizes the impact of flux while maintaining consistency between the single-star and binary-star classes. By doing so, we are able to provide a fair evaluation of our model's performance on both classes, without any undue influence from flux variations.
For all mock images, we only retained the central window of 14 \(\times\) 14 pixel areas for two reasons. The profiles of the stars are mainly concentrated on a few pixels. In addition, due to the smaller image size, the computation speed of the training process can be accelerated.
Finally, we generated about 72,840 mock images for CSST, with half being single stars and the other half binaries. The choice of 72,840 images balances the need for a sufficiently large data set to train our model while maintaining a manageable computational workload. Figure 2 illustrates the original PSF of CSST (panel (a)) and shows three examples of mock images from left to right along with their corresponding RGB value matrices, namely, a single star (panel (b)), a binary with 0.5-pixel separation (panel (c)), a binary with 1-pixel separation (panel (d)). Please note that the background level and total flux are the same in these three examples. These images are derived from single-band observations, and while they do not contain color information, their representation in RGB format helps to highlight intensity differences, thereby enhancing feature recognition. As displayed in Figure 2, the stars appear to be compact owing to the concentration of the PSF energy in a small region at the center. By examining the RGB value matrices, we can observe the subtle changes in values between the different cases. For instance, in the case of a binary with 0.5-pixel separation (panel (c)), the centers of the two stars are situated halfway between adjacent pixels, resulting in the values of the central region being affected by the combined flux of both stars. In the case of a binary with 1-pixel separation (panel (d)), the influence of each star is more distinct. These subtle differences in the RGB value matrices, although not easily discernible to the human eye, provide the foundation for a deep-learning approach to extract information hidden in noisy images and further distinguish between single stars and binaries.
### Data Augmentation and Preprocessing
Data augmentation is currently the most effective preprocessing technique applied to deep-learning models (Perez & Wang 2017). It is a regularization technique that performs various kinds of transformations on the image, such as geometric transformations, color space transformations, and generative modeling augmentation (Shorten & Khoshgoftaar 2019). Appropriate image transformation can greatly improve the generalization of the model and help introduce more diversity into the training set (Wong et al. 2016; Mikolajczyk & Grochowski 2018). Given that, we adopted the following data augmentation techniques:
1. Flip. Randomly reverse the rows or columns of pixels horizontally or vertically.
2. Rotation. Apply a random rotation to each image by up to 30\(\lx@math@degree\).
3. Zoom. Randomly zoom each image with multiples between 1.0 and 1.5.
4. Warp. Randomly change the perspective at which the image is viewed.
5. Brightness and Contrast. Randomly change the amount of light and the contrast of each image.
Data augmentation is a technique that, while not altering the number of samples in the original data set, presents the model with various modified versions of the training data during each epoch. In our case, the original data set consists of the 72,840 mock images generated using the CSST \(u\)-band PSF before the application of data augmentation. This approach is commonly employed during the training phase to enhance the model's performance by broadening the range of input data variations. To further improve the performance, we extended the application of data augmentation techniques to the inference process, adopting the test time augmentation (TTA) technique (Ayhan and Berens, 2018; Radosavovic et al., 2018). TTA facilitates accurate image classification by generating multiple augmented variants of each test image and subsequently aggregating the predictions to produce a final output. This process results in smoother and more robust predictions without requiring additional model training.
### Network Architecture
A convolutional neural network (CNN) is a multilayer neural network designed using the Back-Propagation algorithm (LeCun et al., 1989; Lecun et al., 1998). The basic functional structure of CNN typically consists of three parts, namely the convolution layers, the pooling layers, and the fully connected layers (O'Shea and Nash, 2015). It is a widely used architecture in many computer vision tasks since the convolution kernels are well adapted for feature extraction and significantly reduce the complexity of the model (Razavian et al., 2014). The network is generally extended by adding the number of layers or increasing the width of the network to achieve better performance, as in some popular CNN architectures, e.g., AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan and Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), and ResNet (He et al., 2016). In addition, the larger image resolution can help improve the accuracy as well (Huang et al., 2019).
Balancing the scaling of network width, depth, and resolution is essential to achieve high accuracy and efficiency in CNNs. EfficientNet, proposed by Tan and Le (2019), innovatively addresses this challenge through the implementation of compound scaling and utilization of Mobile Inverted Residual Bottleneck Convolution (MBConv) blocks (Sandler et al., 2018). This effective strategy yields state-of-the-art performance across numerous tasks and data sets. While the paradigm of CNNs continues to evolve and transformer-based models like the Vision Transformer (ViT) model (Dosovitskiy et al., 2020) have also shown excellent performance in image classification tasks, models based on the EfficientNet framework are still among the top-ranked models on mainstream data sets (Tan and Le, 2021) such as ImageNet (Deng et al., 2009), CIFAR-10, CIFAR-100 (Krizhevsky, 2009), and ImageNet21K. Additionally, VIT requires a significant amount of self-attention computation, resulting in greater computational resources and time during training and inference. Taking into account both model performance and computational complexity, we used EfficientNet-B3 architecture as our backbone but included several modifications.
We implemented two fully connected blocks as the header of the model, and we included two additional batch normalization layers after the pooling layer and before the final output, respectively. Batch normalization is a useful regularization method to adjust the distribution of feature maps and smooth losses by calculating the mean and variance of the input features, thereby improving the performance of the model (Ioffe and Szegedy, 2015). For the final output layer, we specified the softmax function to efficiently produce normalized probabilities of each class. The structure of the model architecture is illustrated at the top of Figure 3. The input layer takes the raw image and feeds it into a series of MBConv blocks, which are responsible for extracting relevant features. This critical architectural component of EfficientNet, depicted at the bottom of Figure 3, is specifically designed to enhance the representation of the network. Each MBConv block first decomposes standard convolutions into a point-wise convolution and a depth-wise convolution. This innovative approach substantially reduces the computational demands and the model parameters while preserving a robust representational capacity. Following this, the squeeze-and-excitation mechanism is applied to recalibrate the feature maps, enabling the network to focus more effectively on informative features. Finally, the Projection Layer and Residual Connection work together to merge the transformed and original features, thus maintaining representational capacity. The orchestrated operations within each MBConv block not only enhance performance but also promote efficiency. In order to highlight the advantages of the EfficientNet model built on MBConv blocks, we conducted a comparative study with ResNet, a renowned model in the field of deep learning for image classification. ResNet, constructed based on residual blocks, has seen wide adoption owing to its robust yet simple design and its exceptional performance on various image classification tasks, establishing it as a standard baseline in the field. Given this, we utilized ResNet-34 as our model's backbone for the comparative experiments. The results of this comparative analysis are detailed in Section 3.1.
To train the model, it is necessary to specify a suitable error function that minimizes the difference between the true value and the predicted value for each input image (Hastie et al., 2009). We defined the cross-entropy loss with the label smoothing method (Szegedy et al., 2016) as our loss function. The cross-entropy loss is defined as
\[\mathcal{L}=-\frac{1}{N}\left[\sum_{i=1}^{N}y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{ i})\right], \tag{2}\]
where \(y_{i}\) and \(p_{i}\) are label vector and softmax probability of sample \(i\), respectively. \(N\) is the size of the batches. The original \(y_{i}\) is a one-hot encoded vector taking a value of either 1 or 0, representing binary and single-star labels, respectively. This encoding typically leads to the widest possible gap between the logits of each class, thus causing the model to become overconfident in its label predictions. Instead of using the one-hot encoded label vector, label smoothing introduces noise by soft label assignment, where the labels are mixed with the uniform distribution. Parameter \(y_{i}\) is then given by
\[y_{i}=\left\{\begin{array}{ll}1-\epsilon,&\text{if }i\in\mathcal{C},\\ \frac{\epsilon}{K-1}&\text{otherwise},\end{array}\right. \tag{3}\]
where \(C\) is the correct class and \(K\) is the number of classes. Parameter \(\epsilon\) is the label smoothing factor, set to 0.1 by default. Label smoothing is a regularization method that prevents the model from overfitting and poor generalization, thus improving the performance of the model (Muller et al., 2019).
For performance metrics, we used accuracy and area under the curve (AUC) to assess the reliability and validity of the
model. The accuracy is the ratio of correctly classified samples to total samples. The AUC is defined as the area under the receiver operating characteristic (ROC) curve, which calculates the true positive rates (TPR) and the false positive rates (FPR) using different probability thresholds (Bradley 1997). TPR and FPR are given by
\[\text{TPR}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{4}\]
\[\text{FPR}=\frac{\text{FP}}{\text{FP}+\text{TN}}, \tag{5}\]
where true positives (TP) are correctly classified binaries and false negatives (FN) are the ones that are incorrectly classified as binaries. False positives (FP) denote the single stars classified as binaries. Correctly classified single stars are true negatives (TN).
A great feature of the ROC curve is that it remains consistent even when the distribution of positive and negative samples in the test set changes (Fawcett 2006). The AUC scores generally range between 0.5 and 1. \(\text{AUC}=0.5\) means that the model performs identically to a random classifier, while the model is able to correctly distinguish all positive and negative classes when \(\text{AUC}=1\). The closer the AUC score is to 1 and the closer the ROC curve is to the upper left corner, the better the performance of the model.
### Training Process
In the training phase, we divided our data sets into two parts: 80% of the sample as the training set and 20% reserved for validation; we also maintained a balance of the number of
Figure 3: Top panel: the framework for binary detection. The network architecture uses EfficientNet-B3 as the backbone, supplemented with fully connected layers and batch normalization layers. Bottom panel: illustration of the MBConv6 block in the EfficientNet-B3 model, which is composed of three main components: depth-wise separable convolution, expansion convolution, and squeeze-and-excitation module.
single stars and binaries in both training and validation sets. The implementation of the deep-learning model in this paper is done through the deep-learning library of _PyTorch 1.9.0_(Paszke et al., 2019).
In practice, we trained our model in batches of 64 images, and the entire sample of the training set was passed through the network once as one epoch. The initial learning rate is set to 0.004, and the model is trained to minimize the loss function using a gradient-based optimizer called the Adam (Kingma and Ba, 2014) optimizer. To train our model efficiently, we performed a learning rate schedule called One cycle policy(Smith and Toppin, 2019), which contains two learning rate steps: one is to increase the learning rate, and the other reduces the learning rate. After this, the learning rate decreases further over the iterations, several orders of magnitude below the initial value. The One cycle policy helps the model converge to the global optimum quickly and shorten the training time. The training process11 is repeated for several epochs until the loss of the model converges in both the training and validation sets with no further improvement in precision. This process was executed on four Nvidia Tesla P100 GPUs, requiring approximately 50 GB of GPU memory and around 180 million FLOPs in computational overhead.
Footnote 11: Our code is available at [https://github.com/seasu/Unresolved-Wide-Binaries.git](https://github.com/seasu/Unresolved-Wide-Binaries.git), while the sample images utilized can be accessed at [https://deepSolar.quickconnect.cn/sharing/PM4p3ccam](https://deepSolar.quickconnect.cn/sharing/PM4p3ccam).
## 3 Results
### Model Performance
As we described in Section 2.5, we retained 20% of the data set for validation, that is, 14,568 images, about half of which are single stars and half are binaries. After 30 epochs of training with the model based on the ResNet-34 backbone, the accuracy converged to 86%, with an AUC score of 0.944. When using our model, which is based on the EfficientNet-B3 backbone leveraging MBConv blocks, the model achieved an accuracy of 97.2% and an AUC score of 0.997 after 30 epochs of training. This represents an increase of 11.2% in accuracy and a significant improvement in the AUC score when compared with the results from the model based on the ResNet-34 backbone, demonstrating the superior performance of our proposed model for this task. The corresponding ROC curve and confusion matrix of our model based on the EfficientNet-B3 backbone are displayed in Figure 4. Our model reached a high-performance level for mock images of CSST, as demonstrated by the ROC curve, which lies close to the upper left corner. Almost all samples in the validation set are correctly classified. Specifically, only 2.3% of single stars are erroneously classified, and 3.2% of binaries are mistaken for single stars. This also implies that the capabilities of the model to distinguish between single stars and binaries appear to be balanced.
To understand how data features affect model predictions, we applied a feature attribution technique known as Occlusion(Zeiler and Fergus, 2014) to determine whether the model truly identifies the binaries in the image. The important feature regions that influence the probability scores of the classification can be visualized using the Occlusion technique. By means of occluding different regions in the original image, the Occlusion technique quantifies the attribution on the model decision with a given stride and sliding window (Ancona et al., 2018). It is an iterative process where each region is assigned a value that can be interpreted as the importance score for prediction until all regions of the image are completely covered.
We employed the _Captum_ package (Kokhlikyan et al., 2020) to investigate model interpretability, with the results of this analysis being presented in Figure 5, where the left panel shows a binary with a separation of 1 pixel and the corresponding occlusion-based attribution map is exhibited in the right panel. The left panel of Figure 5 represents a 14 \(\times\) 14 physical pixel region on the CCD, displayed at a 224 \(\times\) 224 pixel resolution. The right panel shows the occlusion-based attribution map at the same 224 \(\times\) 224 resolution. The attribution map distinctly highlights the image regions that contribute to the model's classification decision, using varying colors to represent the importance score of each region. The color bar reflects the
Figure 4: Top panel: ROC curve of the model based on the EfficientNet-B3 backbone for CSST data. The yellow dotted line represents the random classifier, and AUC is derived by calculating the area under the ROC curve (pink line). Bottom panel: confusion matrix of the model. The \(x\)-axis and the \(y\)-axis are the predicted label and true label, respectively. The color bar and value in each box correspond to the number of the sample.
degree of importance associated with individual regions, with lighter shades signifying greater relevance of the contained features for making accurate predictions. As can be observed in Figure 5, the regions with high importance scores in the attribute map are concentrated at the locations of the binary in the original image, meaning that the regions most relevant to the prediction are mainly in pixels occupied by the binary, which is also consistent with our expectation. Furthermore, the model is able to identify high-frequency information such as the edge or center of the binary. This visualization intuitively interprets the intricate decision-making process of the model and assesses the impact of each pixel on the model's output.
### Impact of Noise
\(\rm S/N\) is a crucial factor that influences the performance of a network. In this study, we conduct an experiment to systematically examine the relationship between the accuracy of the network and the \(\rm S/N\) of the input images. As described in Section 2.2, the backgrounds of the mock images comprise Gaussian and Poisson noise, while the \(\rm S/N\) can be controlled by assigning a specific flux to each star. Utilizing Equation (1), we generated six supplementary test sets containing mock images with distinct \(\rm S/Ns\), specifically \(\rm S/N=30\), 55, 80, 105, 130, 155. Each test set encompasses 6000 images, evenly divided between single stars and binaries. Subsequently, the trained model is employed to predict the labels of the images in each of these test sets, and the accuracy of the model is computed, as illustrated in Figure 6. By testing our model on these supplementary test sets, we can better understand its performance under a variety of observational conditions and demonstrate its robustness and generalizability.
We see that the accuracy is about 80% at the lowest \(\rm S/N\) (30), which rapidly increases to above 93% when the \(\rm S/N\) is greater than 55. As for a very high \(\rm S/N\) (155), the accuracy of the model can approach 99%. This is attributed to the fact that the capacity of the model to capture image features is significantly enhanced as the \(\rm S/N\) increases. Our model is robust and maintains a high level of performance even at a very low \(\rm S/N\), and the benefit of its increase is marginal to accuracy when the \(\rm S/N\) exceeds 80.
### Impact of Binary Separation
Our model is intended to detect binaries with a separation within 0.1-2 physical pixels, so the sensitivity of the model to binaries with varying separations is necessary to explore. Following the prescription in Section 2.2, we also generated an additional batch of test sets to evaluate the performance of the model for binaries with varying separations. These test sets contain binaries with separations from 0.1 to 2 physical pixels (with step of 0.1 pixels), respectively, and each test set consists of 3000 images that maintain the same uniform distribution of \(\rm S/N\) as the training set. We then infer those test sets through the
Figure 5: Left panel: example of a binary with 1.0 physical pixel separation. Right panel: the occlusion-based attribution map is employed to visualize the features that our model relies on for classification tasks. Features that make a significant positive contribution to the model’s decision are denoted by lighter shades, while those that negatively impact the model’s decision are illustrated with darker shades.
Figure 6: Accuracy as a function of \(\rm S/N\). The pink line links the circles, which reflect the accuracy at various \(\rm S/Ns\).
trained model, and the accuracy of each test set is shown in Figure 7.
The accuracy of our model clearly declines as the binary separation decreases, with a noticeable drop when the separation is less than 0.5 pixels. However, the accuracy remains at 77% as the binary separation approaches 0.1 pixels. In contrast, the accuracy tends to plateau for separations greater than 1.2 pixels, achieving its peak accuracy of 98.4% when nearing 2 pixels. The observed trend in our model's performance across various binary separations aligns with expectations, as the distinguishing features of binary stars become more apparent and easier to detect when they are farther apart. However, as the separation between binary stars narrows, their observable features increasingly overlap and become indistinguishable from those of single stars, making their identification more difficult. As a result, relying solely on image-based methods for detecting unresolved binaries with separations of 0.1 physical pixels or less may not produce reliable outcomes.
## 4 Validation
To verify our method, we now apply our method to HST WFC3, starting by generating mock images of HST to train the model and then performing the well-trained model to detect binaries in NGC 6121.
### Photometry on NGC 6121
Clusters are an ideal environment to validate our method for detecting wide binaries, and the surviving binaries in the dense environment also reveal the origin and evolution of clusters. As previously mentioned, the components of binaries may overlap in the observed images when they are close enough to be indistinguishable through telescopic observation. In such cases, these binaries will be treated as single point sources in photometry, leading to the formation of binary sequences. Binary sequences are evident in the CMD of many clusters, where they are brighter than the single MS (e.g., Bellazzini et al. 2002; Ivanova et al. 2005; Hu et al. 2010). Due to the superposition of two components, the magnitude of these binaries can be estimated by
\[m_{b}=m_{1}-\,2.5\log_{10}\!\left(1+\frac{F_{2}}{F_{1}}\right)\!, \tag{6}\]
where \(m_{1}\) is the magnitude of one component and \(m_{b}\) is the magnitude of an unresolved binary. \(F_{1}\) and \(F_{2}\) are the flux of each component, which are positively correlated with mass in the case of MS-MS binary stars. When they are equal-mass binaries, the \(m_{b}\) will reach a maximum of 0.75 mag brighter than a single star. Otherwise, they lie mainly between the single MS and equal-mass binary lines in the CMD. Thus, it is expected that by our method wide binaries would be more easily identified in the binary sequences, and we can also estimate their binary separations if the distance of the clusters is known.
NGC 6121 (M4) is a globular cluster with an age of 12.7 Gyr (see Hansen et al. 2002) and the closest globular cluster to Earth, with a distance of 1.72 kpc (Peterson et al. 1995). It has been well studied with abundant observation data (e.g., Richer et al. 2004; Marino et al. 2008). It is noteworthy that although NGC 6121 appears to have surpassed its dynamical relaxation time, there is no evidence of a central brightness cusp (Trager et al. 1995), implying that NGC 6121 has not entered the phase of core collapse. This phenomenon may be attributed to the interaction of the central binaries preventing the core collapse (Cote & Fischer 1996). Milone et al. (2012) have confirmed that a relatively large proportion of stars in NGC 6121 are binaries (approximately 10%-15%). Consequently, this cluster serves as an ideal subject for our study.
To derive the photometry on NGC 6121, we selected the F467M and F775W filters of the ultraviolet and visual (UVIS) channel of the Wide Field Camera 3 (WFC3) instrument on board HST, using data from the program GO-12911: "A search for binaries with massive companions in the core of the closest globular cluster M4" (PI: Bedin). The description of program GO-12911 is summarized in Bedin et al. (2013). These two filters are chosen because of their relatively stable PSF and better astrometric characteristics. The WFC3/UVIS has two \(2051\times 4096\) pixel CCDs with a field of view of \(160\arcsec\times 160\arcsec\) and a very high spatial resolution (PSF FWHM \(\sim 0\aas@@fstack{\prime\prime}08\)). Our analysis is performed on individual flat-fielded images ("flt" type), as these are obtained from direct observations, which is necessary for subsequent PSF study, whereas the resampled data after the Drizzle procedure are unsuitable for high-precision analysis of the PSF.
We first used the DAOFIND (Stetson 1987) algorithm to detect star sources with the DAOFIND MMM routine for background estimation in the field. Then, we performed crowded-field photometry using the Photutils package (v1.2.0; Bradley et al. 2021) provided by Astropy (Astropy Collaboration et al. 2013, 2018). Since the goal of this process is to isolate the binary-sequence stars from the single MS stars in the CMD for subsequent model inference, it is not necessary to independently calibrate the magnitude of each star. Finally, the instrumental CMD is derived after matching the stellar coordinates of each filter in a single field.
Figure 8 compared the instrumental magnitude \(m_{\rm F775W}\) versus \(m_{\rm F467M}-m_{\rm F775W}\) CMD obtained from the photometry. We can clearly see that the stars are clustered in two sequences (left panel), a single MS at a lower magnitude and a broadening of about 0.75 mag above the single MS, which is the binary
Figure 7: Accuracy as a function of binary separations. The circles represent the accuracy of the separation in 0.1 physical pixel steps, from 0.1 to 1 physical pixel, and are connected by pink lines.
sequence. As a validation, we removed uncertain sources with fainter magnitudes and selected a segment of the single MS and binary sequence, from instrumental magnitude \(m_{\rm F775W}\sim-9\) up to \(m_{\rm F775W}\sim-11\), for applying our method to identify wide binaries and comparing their proportions between the binary sequence and single MS. The single MS and binary-sequence regions of concern to us are shown in the right panel of Figure 8, containing 1848 and 102 sources, respectively.
### Data Set Generation for HST
In this study, we used images from the F775W band for our validation, necessitating the generation of training data for this band. To simulate the mock images of WFC3/UVIS, we constructed the PSF models using the effective PSF (ePSF) technique developed by Anderson & King (2000) and Anderson (2016). This approach allows us to reproduce real stars as accurately as possible. The ePSF is a smooth continuous function extracted from the grid points of each star. Due to the undersampling of WFC3/UVIS, the grid points can be obtained by a \(\times 4\) oversampling of the image pixels. To accomplish this, we selected 25 isolated stars in the single MS regions of the CMD of NGC 6121, which have good-quality profiles with clean backgrounds. Each star is cut to \(25\times 25\) pixels in the F775W band image. We then constructed the ePSF model as a \(101\times 101\) grid by performing the EPSFBuilder procedure (part of the Photutils software package).
After extracting the ePSF model in the F775W band, we generated the mock images of HST by convolving the point source with the ePSF, and we cropped the images to a size of \(14\times 14\) pixels. Considering the instrumental PSF FWHM (approximately \(0\farcs 08\)) and the physical pixel size (approximately \(0\farcs 04\)) of WFC3/UVIS, we assumed that any two stars within 2 physical pixels of each other are considered to be unresolved binaries. Essentially, the generation details are the same as those performed for the CSST data set (Section 2.2). The separation of binaries ranges from 0.1 to 2 physical pixels evenly, and the fluxes of both single stars and binaries are uniformly distributed between 5000 and 25,000. Note that we estimated the background level from the real HST images of the F775W band.
Finally, we obtained about 54,880 mock images of single stars and the same number of binaries. The constructed ePSF model in the F775W band and examples of mock images can be seen in Figure 9, i.e., the ePSF model of F775W (panel (a)), a single star (panel (b)), a binary with 0.5-pixel separation (panel (c)), and a binary with 1-pixel separation (panel (d)). We also kept the background level and total fluxes consistent in panels (b), (c), and (d). The ePSF model consists of a \(101\times 101\) metric, which has a lower concentration of energy
Figure 8: Left panel: CMD of NGC 6121 using instrumental magnitude \(m_{\rm F775W}\) vs. \(m_{\rm F467A}-m_{\rm F775W}\). Red circles are objects tagged as MS stars, and cyan circles are objects tagged as binary-sequence stars; Right panel: same as the left panel, but zooming in around the MS and binary-sequence regions. The wide binary candidates are indicated by the black star symbols (see Section 4.3).
than the PSF of CSST, so that the stellar profiles are significantly larger than those of the CSST images.
### Detecting Wide Binaries in NGC 6121
In this section, we intend to identify wide binaries in NGC 6121 based on the deep-learning approach. The network is trained on the mock images of HST, with 20% of the data retained for validation. The training process follows that in Section 2; we repeated the entire procedure, including the data augmentation, network structure, loss functions, training strategies, etc. We trained the model for 35 epochs and achieve an accuracy of 96.5%, which is slightly lower compared to the model trained on CSST. The ROC curve and confusion matrix are presented in Figure 10. the AUC score is 0.996, and the ROC curve reaches the upper left corner of the plot, indicating the good performance of the model. The confusion matrix shows that the model is not biased toward recognizing a particular class, with approximately 3.2% of the single stars and 3.5% of the binaries being erroneously identified.
Consequently, the model appears to be highly confident in the prediction that it can be used for practical applications. We first tagged all the sources within the red and blue regions in Figure 8 onto the F775W band image of NGC 6121 and then applied the well-trained model to infer the images of these stars for predicting whether they are wide binaries. Ultimately, our model detected 8 wide binaries out of a total of 102 stars in the red regions of Figure 8, as well as 10 wide binaries out of a total of 1848 stars in the blue regions. These wide binary candidates are marked with black star symbols in the right panel of Figure 8; we can see that the wide binary candidates in the binary sequence are more pronounced and appear to be distributed at the edges of the binary sequence. Taking the uncertainties into account, the proportion of wide binaries in the binary sequence is approximately \(7.84\%\pm 2.66\%\), in contrast to the \(0.54\%\pm 0.17\%\) observed in the single MS. Since the capability of our model is to detect binaries within 0.1-2 physical pixels apart, which translates into an angular separation of \(0\farcs 004\)-\(0\farcs 08\) for WFC\(3/\)UVIS, the physical separation of these binaries can therefore be estimated to be around 7-140 au in light of the distance of 1.72 kpc for NGC 6121 (Peterson et al., 1995).
For the contamination of chance alignments, we calculated the stellar density in the HST field of view by counting the sources of NGC 6121 in the F775W band, and then we implemented the Monte Carlo simulations as in Section 2.1. We estimated the contamination of chance alignments to be about 0.3%, where a chance alignment here is defined as any two stars with an angular separation of less than \(0\farcs 08\). Among
Figure 10: Similar to Figure 4, but for the F775W band of HST WFC\(3/\)UVIS. Top: ROC curve of the model; bottom: confusion matrix of the validation set.
Figure 9: Examples of the ePSF and simulated images for the F775W band of HST WFC\(3/\)UVIS. (a) A log image of the ePSF model in the F775W band. (b) Log image of a single star (flux of star \(F=10\),000). (c) Log image of a binary with 0.5 physical pixel separation (\(F_{1}=F_{2}=5000\)). (d) Log image of a binary with 1 physical pixel separation (\(F_{1}=F_{2}=5000\)).
the detected wide binary candidates, eight are located within the binary sequence. Considering the contamination of chance alignments, the number of chance alignments in the binary sequence is estimated to be \(0.3\pm 0.6\).12 This low probability of chance alignments in the binary sequence allows us to confidently assert that the eight candidates are not chance alignments. In contrast, 10 wide binary candidates are found within the single MS, where the number of chance alignments is estimated to be \(5.6\pm 2.4\). While the number of these candidates exceeds the expected quantity for chance alignments, it is still consistent with a \(3\sigma\) error. These candidates within the single MS could potentially represent wide binaries with large luminosity ratio disparities, making them difficult to distinguish from single stars. Nevertheless, we cannot completely exclude the possibility that they are chance alignments. This observation supports the theoretical expectation that wide binaries should be more frequently encountered in the binary sequence. The presence of a higher proportion of wide binary candidates within the binary sequence, coupled with the low probability of chance alignments in this region, provides indirect evidence of the effectiveness of our method in detecting unresolved wide binaries. This suggests that our approach could be successful in identifying unresolved wide binaries. However, further investigation and validation are necessary to confirm these findings.
Footnote 12: Please note that in cases where the expected counts are small, as is the case here, the corresponding standard deviation can indeed be large.
### Statistical Assessment of the Results
To have a further validation of our results, we calculated the positions of all the sources we obtained relative to the cluster center. Figure 11 presents the spatial distribution of all sources. The core radius area of NGC 6121 is about 0.53 pc (Trager et al., 1993), indicated by the black circle. We found that the fraction of wide binary candidates is \(0.49\%\pm 0.2\%\) inside the cluster core and \(1.65\%\pm 0.47\%\) outside the core radius. A statistical test (\(Z\)-test) reveals a significant difference between these proportions (\(Z=2.58\), \(P=0.00989\)), indicating that the proportion of wide binaries (7 au \(<a<140\) au) in the core of NGC 6121 may be lower than the proportion outside the core. Considering the binary proportion reported by Milone et al. (2012), it is possible that the majority of binaries observed in the core of NGC 6121 consist of close binaries with smaller separations. It is important to note that this conclusion is subject to the limitations and constraints of our study, particularly the incompleteness of our wide binary candidate sample, which is primarily due to the parameter space limitations, such as the magnitude range and the separation range covered in our study. We focused on separations from 0.1 to 2 physical pixels (corresponding to 7-140 au) and a specific magnitude range owing to the constraints of our deep-learning method. Therefore, our results do not encompass the complete binary population, especially those with separations beyond our studied range or outside our magnitude limitations. By acknowledging these limitations, we emphasize that our findings should be interpreted with caution, and future research may shed more light on the true proportion of wide binaries in NGC 6121.
In the top panel of Figure 12, we present the cumulative distribution function (CDF) of the distance from the center of NGC 6121 for wide binary candidates and all sources. The CDF of wide binary candidates (pink line) appears shifted to the right of the CDF of all sources (gray line), indicating that the former sample tends to have larger distances from the center of the cluster. To evaluate the statistical significance of this difference, we employed a two-sample Kolmogorov-Smirnov test to assess the significance of the differences in their respective distributions. The result was a D statistic of 0.41 and a \(p\)-value of 0.3%, where the D statistic measures the maximum difference between the two CDFs and the probability represents the likelihood of obtaining such a difference by chance. The observed large value of the D statistic and small \(p\)-value provide compelling evidence to reject the null hypothesis of identical underlying distributions between the two samples.
To further visualize the difference between the two samples, we created a quantile-quantile plot (\(Q\)-\(Q\) plot) in the bottom panel of Figure 12, where the quantiles of the two samples are visualized against each other. If the samples correspond to the same probability distribution, the points should be along the \(45^{\circ}\) line (yellow dotted line). Obviously, the \(Q\)-\(Q\) plot shows that the quantiles of wide binary candidates are quite different from the quantiles of all sources, indicating that the two samples come from different distributions. This difference in distributions can indicate that wide binary candidates may have a different origin or formation history than other sources in NGC 6121, and further analysis and investigation may be needed to understand the underlying reasons for this difference.
Many studies of multiple populations in clusters reveal the existence of primordial differences between first-generation (FG) and second-generation (SG) stars (e.g., Carretta et al., 2009; Milone et al., 2017; Tailo et al., 2019). As both observational and theoretical studies have shown, SG stars form in the central regions of clusters and are concentrated in the dense environments during their evolution, while FG stars are mainly distributed in the sparse environments (e.g., Sollima et al., 2007b; D'Ercole et al., 2008; Richer et al., 2013). Vesperini et al. (2011) and Hong et al. (2015, 2016)
Figure 11: Gray circles show the spatial distribution of all sources in the regions of our concern in NGC 6121, and the red star symbols represent the wide binary candidates. The black circle indicates the cluster core of NGC 6121 (radius \(\sim 0.53\) pc).
investigated the evolution of binaries in clusters by means of \(N\)-body simulations, and the results showed that the disruption rate of SG binaries is significantly higher than that of the FG binaries.
Environment conditions play a crucial role in the survival and distribution of wide binaries in clusters (Hong et al., 2019). FG binaries, which are born in more relaxed and sparse environments, are more likely to be found outside the core radius, allowing wide binaries to survive in larger numbers. This contrasts with the more concentrated SG population that experiences a higher disruption rate owing to dynamical processes in the dense cluster center (Calura et al., 2019). The latest Monte Carlo simulations of binaries in globular clusters performed by Sollima et al. (2022) showed that wide SG binaries (\(a>10\) au) are immediately destroyed during the cluster evolution, but FG binaries with a separation in the range of 10-100 au can survive even after 12 Gyr of evolution, with a binary fraction of 8%.
Although our sample of wide binaries is incomplete, considering the spatial distribution of the binaries we detected in NGC 6121 and their separations (7 au \(<a<140\) au), it suggests that these wide binaries might primarily be FG binaries. If this is the case, our results could potentially hint at an intrinsically different binary fraction between FG and SG stars in NGC 6121, which is also supported by studies of multiple populations on NGC 6121. For instance, D'Orazi et al. (2010) found a binary fraction of 12% for FG stars and 1% for SG stars based on a study of radial velocity variations, while Milone et al. (2020) analyzed the multiple populations of binaries in NGC 6121 and showed that FG binaries dominate the binary populations. However, further research with a more comprehensive sample is needed to confirm this possibility.
It is important to emphasize that our study does not attempt to determine the binary fraction in NGC 6121, as achieving this would require a comprehensive investigation, taking into account factors such as contamination from field stars and completeness analysis. Such a detailed inquiry surpasses the scope of our current research and calls for further exploration with a more extensive sample. Despite these limitations, our findings still provide preliminary insights into the distribution and properties of such wide binaries and serve as a starting point for further research. The primary objective of our work is to demonstrate the feasibility of our approach in detecting unresolved wide binaries, which has been partially validated to some extent in observational and theoretical studies, aligning with existing findings in the field. We will continue to refine our deep-learning method and expand the parameter space to better capture the complete wide binary population in future studies. It is anticipated that the sample of wide binaries detected using our method will offer valuable contributions to the nature of multiple populations and the evolution of wide binaries within globular clusters.
## 5 Conclusion
In this paper, we have proposed a promising method to search for wide binaries that are unresolved by current observational techniques. Our method analyzes the morphology of PSF using a deep-learning technique, which takes advantage of space-based telescopes for high-resolution imaging to learn observable features of the data. We generated mock images of CSST as the training set and constructed a neural network for the purpose of identifying unresolved wide binaries, while evaluating the effect of \(\mathrm{S/N}\) and binary separations on the results. As a validation, we explored the possibilities of applying our method to HST by analyzing photometric data of NGC 6121. Our main results are summarized below.
1. For CSST, the training set consists of 72,840 mock images generated by the PSF in the \(u\) band, which takes into account Gaussian and Poisson noises, and is divided into single stars and binaries with a separation of 0.1-2 physical pixels. We then developed a CNN network based on EfficientNet to identify unresolved binaries. The trained model achieves a high-accuracy
Figure 12: Top panel: CDF of the distance from the center of NGC 6121 for all sources (gray line) and wide binary candidates (pink line). Bottom panel: the corresponding quantile–quantile plot, with a 45\({}^{\circ}\) line (yellow dotted line) plotted for reference.
performance of 97.2% on the mock data set of CSST, with a value of 0.997 for the AUC score.
2. We found that the accuracy of our method for detecting binaries increased with increasing binary separation or \(\mathrm{S/N}\), and the accuracy significantly depends on both factors when the \(\mathrm{S/N}\) is below 80 or the binary separation is less than 0.5 physical pixels. As a limit of detection, the accuracy is 77% for binaries with a 0.1 physical pixel separation and 80% for images with an \(\mathrm{S/N}\) of 30.
3. The training set of HST is generated using the ePSF, which is constructed from photometric data in the F775W band. We generated 54,880 mock images with real background noise used for the background levels. We also adopted EfficientNet-B3 as the backbone of the model, with some modifications. The trained model is capable of identifying unresolved binaries with an accuracy of 96.5% and an AUC score of 0.996.
4. We performed predictions for point sources on the single MS and binary sequence of NGC 6121 using HST data to validate our method. Eighteen wide binary candidates are identified out of a total of 1950 sources, with the separations ranging from 7 to 140 au. Such binaries are detected more frequently in binary sequences, while binaries in the single MS cannot be excluded as chance alignments. Their proportion is \(0.49\%\pm 0.2\%\) inside the core radius of NGC 6121 and \(1.65\%\pm 0.47\%\) outside the core radius. Our results are consistent with the current studies of multiple populations of clusters.
With the advent of the next generation of space-based surveys (JWST and CSST), an increasing number of high-quality data sets are becoming available. This necessitates the development of innovative methods, such as the PSF-based binary detection proposed here, to fully exploit these data. Our method holds the potential to become a powerful tool for analyzing unresolved binaries. Our approach exhibits very high accuracy in identifying binaries with separations ranging from 0.1 to 2 physical pixels, effectively addressing the challenge of dealing with previously unavailable unresolved wide binaries. In the future, our method can be employed to process space-based observations and analyze data from large surveys.
Although Gaia's spatial resolution is slightly lower than that of CSST (approximately \(0\farcs 4\)), it provides a substantial number of wide binary samples through astrometry and proper-motion analysis. However, most of these samples consist of wide binaries with separations larger than 100 au, highlighting the fact that the current wide binary science based on Gaia data is somewhat limited in scope. For example, Hwang et al. (2022) used Gaia DR3 to analyze the relationship between binary separations and eccentricity distribution, but due to Gaia systematics affecting binaries beyond 100 au, their analysis excluded wide binaries with separations within \(1\farcs 5\). Moreover, the research of El-Badry et al. (2019) on twin binaries using Gaia data suggests that excess twins likely form at separations below 100 au. Therefore, obtaining additional wide binary samples with separations of less than 100 au is crucial for further investigation. Our method directly analyzes images from CSST survey data, enabling us to obtain wide binary samples with separations between tens and hundreds of au. This unique contribution that CSST can make in the field of wide binary research allows for a more comprehensive understanding of these binaries. This includes investigating the relationship between eccentricity distribution and binary separations for this range of wide binaries, exploring the formation mechanisms and evolution of excess twins, as well as comparing the wide binary fraction in star clusters, low-density star-forming regions, and the field. This research can provide valuable constraints on models for binary dynamical evolution and enhance our understanding of stellar multiplicity, as well as the environmental conditions conducive to binary formation.
In future work, we aim to refine and extend our method for validation across a broader range of clusters and more complex scenarios, with the goal of increasing the generality and accuracy of our model predictions. This involves modifying our network and expanding our training set to handle images with three or more point sources. Furthermore, we will incorporate more sophisticated and accurate PSF models. Through these enhancements, we anticipate that our model will become more robust and versatile, in preparation for processing photometric observations from larger upcoming surveys.
## Acknowledgments
We thank the anonymous referees for their constructive comments and suggestions that helped us to improve this work. We also thank Hao Tian, Xin-Ze Zhang, and Wen-Qin Sun for valuable discussion. This work is supported by the National Natural Science Foundation of China under grants (Nos. 12103064, 12125303, 12288102, 12090040/3, 12090043, 11873016), the Science Research grants from the China Manned Space Project (Nos. CMS-CSST-2021-A10, CMS-CSST-2021-A08, CMS-CSST-2021-B05), and the Joint Research Fund in Astronomy (U2031203) under cooperative agreement between the NSFC and Chinese Academy of Sciences.
## ORCID iDs
You Wu [https://orcid.org/0000-0002-3616-9268](https://orcid.org/0000-0002-3616-9268)
Jiao Li https:orcid.org/0000-0002-2577-1990
Chao Liu https:orcid.org/0000-0002-1802-6917
Yi Hu [https://orcid.org/0000-0003-3317-4771](https://orcid.org/0000-0003-3317-4771)
Long Xu [https://orcid.org/0000-0002-9286-2876](https://orcid.org/0000-0002-9286-2876)
Tanda Li [https://orcid.org/0000-0002-5469-5149](https://orcid.org/0000-0002-5469-5149)
Xuefei Chen [https://orcid.org/0000-0001-5284-8001](https://orcid.org/0000-0001-5284-8001)
Zhanwen Han [https://orcid.org/00000-0001-9204-7778](https://orcid.org/00000-0001-9204-7778)
|